
Adding support for Ollama/local LLMs #5

Open
NumberChiffre opened this issue Aug 26, 2024 · 2 comments
@NumberChiffre

Hey guys,

Thanks for making your work public. I'm wondering if you have explored, or will be exploring, LLMs other than GPT-4 for your evaluations. For instance, you used Llama 3 in your benchmarks; would you consider something like Llama 3.1 via Ollama, or even Claude/DeepSeek? I'm wondering if you'll support those APIs in your code soon.

@l4b4r4b4b4

Can't you do that by setting OPENAI_API_BASE to your localhost?

You might need to point it at a tool-calling-compatible endpoint like functionary ;)
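For reference, Ollama exposes an OpenAI-compatible API under `/v1`, so redirecting the client's base URL is often all that's needed. A minimal sketch (the model name is illustrative, and which environment variable the project actually reads depends on its OpenAI SDK version):

```shell
# Start the local server and pull a model (llama3.1 is just an example;
# tool calling requires a model/endpoint that supports it).
ollama serve &
ollama pull llama3.1

# Point the OpenAI client at Ollama's OpenAI-compatible endpoint.
# Legacy openai SDK (<1.0) reads OPENAI_API_BASE; openai>=1.0 reads OPENAI_BASE_URL.
export OPENAI_API_BASE="http://localhost:11434/v1"
export OPENAI_BASE_URL="http://localhost:11434/v1"
# Ollama ignores the key, but the client requires a non-empty value.
export OPENAI_API_KEY="ollama"
```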

@dmccrearytg

We are also huge fans of Ollama. It allows us to run our text-to-graph NLP pipelines on a local consumer-grade GPU with reasonable quality.

We are inspired by this open-source project:

They fine-tuned Phi-3 to get results comparable to GPT-4, but on a much smaller model:
Phi-3 Graph on HuggingFace
