Groq
Status: Operational
Fastest inference on custom LPU hardware
Lightning-fast inference powered by custom Language Processing Units.
Company
- Headquarters: Coming soon
- Founded: Coming soon
Capabilities
- Models Hosted: Curated Deck
- Inference Speed: 500+ tokens/sec
- Specialties: Coming soon
- Unique Features: Custom LPU hardware
- API Style: OpenAI-compatible
- Est. Compute Region: US (Dallas, Houston), Canada, Saudi Arabia, Finland [source]
Compute locations are estimated from public sources and may be outdated. Verify directly with the provider for compliance decisions.
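Since Groq exposes an OpenAI-compatible API, existing OpenAI-style clients can target it by swapping the base URL. A minimal stdlib sketch of building such a chat-completions request follows; the base URL, model id, and `GROQ_API_KEY` env-var name are assumptions taken from common convention, so verify them against the provider's own documentation.

```python
import json
import os
import urllib.request

def build_chat_request(prompt: str,
                       model: str = "llama-3.1-8b-instant",  # assumed model id
                       base_url: str = "https://api.groq.com/openai/v1"):
    """Build an OpenAI-style chat-completions request against an
    OpenAI-compatible endpoint (base_url is an assumption)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            # GROQ_API_KEY env-var name is an assumption, not from the source.
            "Authorization": f"Bearer {os.environ.get('GROQ_API_KEY', '')}",
        },
        method="POST",
    )

# Sending the request requires a valid API key:
# with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Because the wire format matches OpenAI's, official OpenAI SDKs that accept a custom `base_url` can usually be pointed at the same endpoint instead of hand-building requests.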
Models
Coming soon
We are standardizing model listings across providers.
Why Use Groq