
NVIDIA


AI infrastructure and inference at scale

Enterprise-grade AI inference powered by NVIDIA's GPU infrastructure.


Company

Headquarters
Santa Clara, CA
Founded
1993

Capabilities

Models Hosted
Frontier Deck
Specialties
Enterprise AI, GPU-optimized
Unique Features
NIM microservices, GPU-native
API Style
OpenAI-compatible
Compute Location
US + Global

Models

Coming soon

We are standardizing model listings across providers.

Why Use NVIDIA

GPU-Native Performance

Optimized inference on NVIDIA hardware.

Enterprise Ready

Production-grade APIs with NIM microservices.

Details

About NVIDIA

NVIDIA provides enterprise AI inference through its NIM (NVIDIA Inference Microservices) platform, offering GPU-optimized models from major providers behind an OpenAI-compatible API.
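Because the API is OpenAI-compatible, calling a NIM endpoint looks like an ordinary chat-completions request. The sketch below is a minimal, hedged example: the base URL and model name are assumptions (check NVIDIA's official docs for current values), and it only builds the request so nothing is sent without an API key.

```python
import json
import os
import urllib.request

# Assumed NIM base URL; verify against NVIDIA's documentation.
BASE_URL = "https://integrate.api.nvidia.com/v1"


def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a NIM endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            # Reads the key from the environment; empty if unset.
            "Authorization": f"Bearer {os.environ.get('NVIDIA_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Example model ID is illustrative; NIM model names follow a
# "publisher/model" convention, but availability changes.
req = build_chat_request("meta/llama-3.1-8b-instruct", "Hello, NIM!")
print(req.full_url)
```

Because the wire format matches OpenAI's, existing OpenAI client libraries can typically be pointed at a NIM deployment just by swapping the base URL and API key.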

Newsletter

Get the signal, skip the noise.

Weekly digest of new models and provider updates across 41+ compute providers. Curated for AI builders who ship.

New model releases
Capability updates
Provider status
© bots.so — The AI Inference Model Index

bots.so aggregates publicly available model deployment information from official provider sources. We are not affiliated with any model provider. Model availability changes rapidly; always verify on official sites.