Lilypad Network

Decentralized AI inference

Open, decentralized compute network for AI inference powered by distributed GPU nodes.


Company

Headquarters: Decentralized
Founded: 2023

Capabilities

Models Hosted: Curated Deck
API Style: OpenAI-compatible
Compute Location: Decentralized
Specialties: Decentralized inference
Infrastructure: Distributed GPU network

Models

Coming soon

We are standardizing model listings across providers.

Why Use Lilypad Network

Decentralized

Runs on distributed GPU nodes, not centralized data centers.

OpenAI Compatible

Standard chat completions API with streaming support.

Open Source Models

Access to Llama, Qwen, DeepSeek, Gemma, and more.

Details

About Lilypad Network

Lilypad Network is an open, decentralized compute network that enables AI inference on distributed GPU nodes. Its Anura API provides OpenAI-compatible endpoints for running LLM inference jobs.
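
As a rough illustration of what OpenAI compatibility means in practice, the sketch below sends a chat completion request through the standard openai Python SDK pointed at an Anura-style endpoint. The base URL, API key placeholder, and model identifier are assumptions for illustration only; consult the Anura API documentation for the actual endpoint, key setup, and available models.

```
from openai import OpenAI

# Point the standard OpenAI client at the Anura endpoint.
# Base URL and key below are placeholders, not confirmed values.
client = OpenAI(
    base_url="https://anura-testnet.lilypad.tech/api/v1",  # assumed endpoint
    api_key="YOUR_ANURA_API_KEY",                          # placeholder
)

# Hypothetical model identifier; check the provider's model list.
response = client.chat.completions.create(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "What does Lilypad Network do?"}],
)

print(response.choices[0].message.content)
```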

The network currently supports 27+ models, including DeepSeek V3 685B, Llama 4, Qwen3, Gemma 3, and other popular open-source models. All requests are routed through the Lilypad testnet to available GPU providers.
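
Since the chat completions API is described as supporting streaming, here is a minimal sketch of consuming a streamed response; again, the endpoint and model name are assumptions, and the chunks are read exactly as they would be from any OpenAI-compatible server.

```
from openai import OpenAI

client = OpenAI(
    base_url="https://anura-testnet.lilypad.tech/api/v1",  # assumed endpoint
    api_key="YOUR_ANURA_API_KEY",                          # placeholder
)

# stream=True yields chunks as tokens are generated instead of one final message.
stream = client.chat.completions.create(
    model="qwen2.5:7b",  # hypothetical model identifier
    messages=[{"role": "user", "content": "Explain decentralized inference in one paragraph."}],
    stream=True,
)

for chunk in stream:
    # Some servers send a final chunk with no choices; guard before indexing.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```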

© bots.so — The AI Inference Model Index

bots.so aggregates publicly available model deployment information from official provider sources. We are not affiliated with any model provider. Model availability changes rapidly; always verify on official sites.