[PR revoked] add llama-3 and mixtral model for ai chat #225
Conversation
You might want to add this too:

diff --git a/duckduckgo_search/cli.py b/duckduckgo_search/cli.py
index deccd8c..1b609f6 100644
--- a/duckduckgo_search/cli.py
+++ b/duckduckgo_search/cli.py
@@ -137,7 +137,7 @@ def version():
def chat(save, proxy):
"""CLI function to perform an interactive AI chat using DuckDuckGo API."""
cache_file = "ddgs_chat_conversation.json"
- models = ["gpt-3.5", "claude-3-haiku"]
+ models = ["gpt-3.5", "claude-3-haiku", "llama-3", "mixtral"]
client = DDGS(proxy=proxy)
print("DuckDuckGo AI chat. Available models:")
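For context, a minimal sketch of the model-selection step the CLI prompt implies. The model list matches the PR diff (with the reviewer's more specific names); `choose_model` is an illustrative helper, not part of the `duckduckgo_search` API:

```python
# Hypothetical helper mirroring the CLI's "pick a model by number" prompt.
# MODELS reflects the list from the diff, using the reviewer's specific names.
MODELS = ["gpt-3.5", "claude-3-haiku", "llama-3-70b", "mixtral-8x7b"]


def choose_model(choice: str, models=MODELS) -> str:
    """Map a 1-based numeric choice to a model name; fall back to the first."""
    if choice.isdigit() and 1 <= int(choice) <= len(models):
        return models[int(choice) - 1]
    return models[0]
```

With this shape, adding a new model is a one-line change to `MODELS` and the prompt logic is untouched.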
Yes, I forgot to add it, thanks!
There will probably be other models appearing there, so it is advisable to name them more specifically: "llama-3-70b", "mixtral-8x7b".
OK, I have already renamed them, please review.
Changes reverted, not working. Tests for the specific models need to be added as well.
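A sketch of how per-model tests could be structured. The chat call is stubbed out so the loop runs offline; `run_model_checks` and `fake_chat` are illustrative names, and a real test would call the DuckDuckGo API instead:

```python
# Hedged sketch: exercise every model in the list through one chat function.
# fake_chat stands in for a real API call so this runs without network access.
MODELS = ["gpt-3.5", "claude-3-haiku", "llama-3-70b", "mixtral-8x7b"]


def run_model_checks(chat_fn):
    """Call chat_fn once per model and collect each reply keyed by model name."""
    return {model: chat_fn("hello", model=model) for model in MODELS}


def fake_chat(keywords, model="gpt-3.5"):
    # Stub reply; a real test would assert on an actual API response instead.
    return f"reply from {model}"


replies = run_model_checks(fake_chat)
```

Parametrizing a real test suite over `MODELS` the same way would catch a model name that the API no longer accepts.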
Hi, DuckDuckGo now supports two more models.
I added them and tested locally.