A self-hosted Ollama instance deployed on Koyeb, providing access to open-source AI models through an OpenAI-compatible API.
Koyeb Ollama Proxy offers 1 LLM API model.
Average benchmark speed: 2 tokens/s.
https://sore-caitlin-flyingpot-402fcea7.koyeb.app
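Since the proxy exposes an OpenAI-compatible API, requests follow the standard chat-completions shape. The sketch below builds such a request against the endpoint above; the model name is a placeholder, as the listing exposes one model but does not name it.

```python
import json

# Base URL of the Koyeb-hosted Ollama proxy (from the listing above).
BASE_URL = "https://sore-caitlin-flyingpot-402fcea7.koyeb.app"

# OpenAI-compatible chat-completions payload. "MODEL_NAME" is a
# placeholder -- substitute the actual model the instance serves.
payload = {
    "model": "MODEL_NAME",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# The request is a POST of this JSON body to /v1/chat/completions,
# the standard OpenAI-compatible path that Ollama also serves.
endpoint = f"{BASE_URL}/v1/chat/completions"
request_body = json.dumps(payload).encode("utf-8")
print(endpoint)
```

It could then be sent with any HTTP client (e.g. `urllib.request` or `curl`); given the benchmark above, expect roughly 2 tokens/s on responses.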