Ollama
Operational
Run AI models anywhere
Run large language models locally or via Ollama's cloud API. Simple, fast, and developer-friendly.
Company
- Headquarters
- San Francisco, CA
- Founded
- 2023
Capabilities
- Models Hosted
- Curated selection
- Deployment
- Local + Cloud
- Specialties
- Easy setup, CLI-first
- API Style
- Native + OpenAI-compatible (see the sketch below)
- Compute Location
- Undisclosed (Contacted 16 Feb 2026)
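"Native + OpenAI-compatible" means existing OpenAI SDK code can target a local Ollama server by changing only the base URL. A minimal sketch, assuming `ollama serve` is running on its default port 11434 and the model has already been pulled (the name "llama3.2" is illustrative):

```python
# Point the standard OpenAI Python SDK at a local Ollama server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # the SDK requires a key; a local Ollama server ignores its value
)

resp = client.chat.completions.create(
    model="llama3.2",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize what Ollama does in one sentence."}],
)
print(resp.choices[0].message.content)
```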
Models
Coming soon
We are standardizing model listings across providers.
Why Use Ollama
Local-First
Run models on your own hardware with a simple CLI.
Cloud Fallback
Seamlessly offload to Ollama's cloud for models too large to run locally.
Developer Experience
One-line install, easy model management (see the sketch below).
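As a concrete sketch of that model management, the following uses the official `ollama` Python client (`pip install ollama`); the CLI equivalents are `ollama pull` and `ollama run`. It assumes the local Ollama daemon is running, and the model name is illustrative:

```python
import ollama

# Download the model if it isn't present locally (same as `ollama pull llama3.2`).
ollama.pull("llama3.2")

# Chat with the local model; the response is subscriptable like a dict.
response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```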
Details
About Ollama
Ollama makes it easy to run large language models locally or in the cloud. With a simple CLI and growing model library, developers can quickly experiment with and deploy AI models without complex infrastructure setup.
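The "without complex infrastructure setup" claim rests on a plain HTTP API: the local server listens on port 11434 and needs no SDK at all. A minimal sketch against the native generate endpoint (model name illustrative):

```python
import requests

# One-shot generation against the native API; stream=False returns a single
# JSON object instead of a stream of chunks.
r = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # illustrative model name
        "prompt": "Explain local LLM inference in two sentences.",
        "stream": False,
    },
)
r.raise_for_status()
print(r.json()["response"])
```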
Cloud Models
Ollama’s cloud models run without requiring a powerful local GPU. Requests are automatically offloaded to Ollama’s cloud service while exposing the same API as local models.
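To illustrate the "same API" point, here is a sketch under the assumption that cloud models are addressed by a model tag with a `-cloud` suffix after authenticating with `ollama signin`; the exact tag and sign-in flow are assumptions, so check Ollama's current docs:

```python
import ollama

# The chat call is identical to the local case; only the model tag changes,
# and the request is transparently served by Ollama's cloud.
response = ollama.chat(
    model="gpt-oss:120b-cloud",  # illustrative cloud model tag
    messages=[{"role": "user", "content": "Hello from a machine with no GPU."}],
)
print(response["message"]["content"])
```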