There are plenty of LLM providers that are compatible with the OpenAI API (e.g. DeepSeek). It would add versatility to have a generic OpenAI-compatible provider that accepts a base URL, API key, and model as inputs, and makes calls accordingly. This approach is implemented by the VS Code extension Cline, as well as others.
Thanks for the idea. We are thinking about changing our internal logic to rely on the new {ellmer} package. That should (hopefully) allow us to support everything that package can handle. I see that we can extend the chat_openai() method with a custom URL and API key very easily; the model might be a bit trickier. We'll share any updates on this thread.
Btw, if you have Ollama + DeepSeek running locally, it should already be supported.
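For reference, a minimal sketch of what the {ellmer}-based approach could look like, assuming chat_openai() exposes base_url, api_key, and model arguments as described above (the DeepSeek endpoint and model name here are illustrative, not tested):

```r
library(ellmer)

# Point the OpenAI-compatible client at a third-party provider.
# Argument names follow ellmer's chat_openai(); the endpoint and
# model values are assumptions for illustration.
chat <- chat_openai(
  base_url = "https://api.deepseek.com",          # any OpenAI-compatible base URL
  api_key  = Sys.getenv("DEEPSEEK_API_KEY"),      # provider-specific API key
  model    = "deepseek-chat"                      # provider-specific model id
)

chat$chat("Say hello in one short sentence.")
```

The same pattern would cover Ollama's OpenAI-compatible endpoint or any other provider, which is what makes a single generic provider attractive.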
@calderonsamuel
That's interesting. Thanks for the hints.
Yes, I have Ollama, but DeepSeek is kinda slow to run locally, even on an M2 Ultra, so I primarily rely on the APIs.