
A community-powered LLM inference service. The system supports a range of models and languages, is powered by an open network of distributed resource providers, and is available to users worldwide.
Building and operating LLM inference infrastructure is costly and centralized, concentrating control in the hands of major corporations and leaving the technology inaccessible to much of the global population.
Team
Team Connections