Lilypad Network
Decentralized AI inference
Open, decentralized compute network for AI inference powered by distributed GPU nodes.
Company
- Headquarters: Decentralized
- Founded: 2023
Capabilities
- Models Hosted: Curated Deck
- API Style: OpenAI-compatible
- Compute Location: Decentralized
- Specialties: Decentralized inference
- Infrastructure: Distributed GPU network
Models
Coming soon
We are standardizing model listings across providers.
Why Use Lilypad Network
- Decentralized: Runs on distributed GPU nodes, not centralized data centers.
- OpenAI Compatible: Standard chat completions API with streaming support (see the sketch after this list).
- Open Source Models: Access to Llama, Qwen, DeepSeek, Gemma, and more.
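As a rough illustration of that OpenAI-compatible surface, the sketch below points the official openai Python SDK at an Anura-style endpoint and streams a chat completion. The base URL, the ANURA_API_KEY environment variable, and the model identifier are placeholders for illustration, not confirmed Lilypad values; consult Lilypad's own documentation for the real ones.

```python
# Minimal sketch: streaming chat completion against an OpenAI-compatible endpoint.
# The base URL, env var, and model name below are placeholders, not confirmed Lilypad values.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://anura-testnet.lilypad.tech/api/v1",  # placeholder base URL
    api_key=os.environ["ANURA_API_KEY"],                   # hypothetical env var
)

stream = client.chat.completions.create(
    model="llama3.1:8b",  # placeholder model identifier
    messages=[{"role": "user", "content": "Explain decentralized inference in one sentence."}],
    stream=True,
)

# Print tokens as they arrive from whichever GPU node picks up the job.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```

Because the API follows the standard chat completions shape, existing OpenAI-based client code should only need the base URL and API key swapped out.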
Details
About Lilypad Network
Lilypad Network is an open, decentralized compute network that enables AI inference on distributed GPU nodes. Its Anura API provides OpenAI-compatible endpoints for running LLM inference jobs.
The network currently supports 27+ models, including DeepSeek V3 685B, Llama 4, Qwen3, Gemma 3, and other popular open-source models. All requests are routed through the Lilypad testnet to available GPU providers.
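Since the model catalog above is still being standardized, one way to see what is actually being served is the OpenAI-style GET /models endpoint. The sketch below assumes Anura follows that convention and reuses the same placeholder base URL and ANURA_API_KEY variable; the endpoint path and response shape are assumptions and may differ in practice.

```python
# Sketch: list model IDs from an OpenAI-style /models endpoint.
# Assumes bearer-token auth and the OpenAI response convention;
# the base URL and env var are placeholders, not confirmed Lilypad values.
import os

import requests

BASE_URL = "https://anura-testnet.lilypad.tech/api/v1"  # placeholder
headers = {"Authorization": f"Bearer {os.environ['ANURA_API_KEY']}"}

resp = requests.get(f"{BASE_URL}/models", headers=headers, timeout=30)
resp.raise_for_status()

# OpenAI-style responses wrap the model list in a "data" array of {"id": ...} objects.
for model in resp.json().get("data", []):
    print(model.get("id", model))
```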