Ollama

Operational

Run AI models anywhere

Run large language models locally or via Ollama's cloud API. Simple, fast, and developer-friendly.

Company

Headquarters
San Francisco, CA
Founded
2023

Capabilities

Models Hosted
Curated Deck
Deployment
Local + Cloud
Specialties
Easy setup, CLI-first
API Style
Native + OpenAI-compatible
Compute Location
Undisclosed (Contacted 16 Feb 2026)
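The "Native + OpenAI-compatible" API style above means a local Ollama server exposes both its own chat endpoint (`/api/chat`) and an OpenAI-style endpoint (`/v1/chat/completions`) on its default port, 11434. A minimal sketch of the two request shapes, assuming a hypothetical locally pulled model tag `llama3.2` (nothing is sent over the network here):

```python
import json

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default local port

def native_chat_request(model, prompt):
    """Build the URL and JSON body for Ollama's native chat endpoint."""
    url = f"{OLLAMA_BASE}/api/chat"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # native endpoint streams by default
    }
    return url, json.dumps(body).encode()

def openai_compat_request(model, prompt):
    """Build the same request for the OpenAI-compatible endpoint."""
    url = f"{OLLAMA_BASE}/v1/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(body).encode()

# The message payload is identical; only the path (and the response
# schema, not shown) differ between the two styles.
url_n, body_n = native_chat_request("llama3.2", "Hello")
url_o, body_o = openai_compat_request("llama3.2", "Hello")
```

The practical upshot is that existing OpenAI-client code can usually be pointed at a local Ollama server by changing only the base URL.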

Models

Coming soon

We are standardizing model listings across providers.

Why Use Ollama

Local-First

Run models on your own hardware with a simple CLI.

Cloud Fallback

Seamlessly offload larger models to the cloud.

Developer Experience

One-line install, easy model management.

Details

About Ollama

Ollama makes it easy to run large language models locally or in the cloud. With a simple CLI and growing model library, developers can quickly experiment with and deploy AI models without complex infrastructure setup.

Cloud Models

Ollama’s cloud models run without requiring a powerful local GPU: requests are offloaded automatically to Ollama’s cloud service, which exposes the same API as local models.
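Because cloud models expose the same API as local ones, switching between them is, in principle, just a change of base URL and model tag, plus an API key for the hosted service. A hedged sketch using only the standard library; the cloud base URL `https://ollama.com`, the model tag `gpt-oss:120b-cloud`, and the `YOUR_KEY` placeholder are illustrative assumptions, and nothing is actually sent:

```python
import json
import urllib.request

def chat_request(base_url, model, prompt, api_key=None):
    """Build (but do not send) a chat request. Local and cloud use the
    same payload; only the base URL, model tag, and auth header differ."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode()
    return urllib.request.Request(f"{base_url}/api/chat", data=body,
                                  headers=headers, method="POST")

# Local server vs. cloud: the request body is shaped identically.
local = chat_request("http://localhost:11434", "llama3.2", "Hi")
cloud = chat_request("https://ollama.com", "gpt-oss:120b-cloud", "Hi",
                     api_key="YOUR_KEY")  # hypothetical key placeholder
```

Sending either request with `urllib.request.urlopen` would return a JSON chat response; the point of the sketch is that client code does not need to change shape when moving between local and cloud models.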

© bots.so — The AI Inference Model Index

bots.so aggregates publicly available model deployment information from official provider sources. We are not affiliated with any model provider. Model availability changes rapidly; always verify on official sites.