Check out the latest release (new video in the readme). I just pushed a settings window, which should cover the things you've mentioned in your message :) Edit: Appreciate the kind words :)
I spun up LLocalSearch with the default nomic-embed-text:v1.5 and hermes-2-Pro-Mistral-7B models for preliminary testing on a four-thread CPU and an 8 GB VRAM card, and the results significantly exceeded my expectations. There seems to be substantial room for improvement, particularly with larger models. I have a Threadripper workstation with two Ada6000 cards, so I'm wondering which embedding and inference models would make the best use of that hardware. I'm also curious whether the context length for the search results fed into the inference model can be adjusted, especially for models that accommodate much longer contexts, such as 32k or more. I greatly appreciate the effort invested in this project and look forward to future developments.
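For reference, here is a minimal sketch of how the context window can be raised when talking to a stock Ollama server directly, which is one way to experiment before the setting is exposed in the app. This assumes LLocalSearch is backed by a standard Ollama instance on `localhost:11434`; the model name, prompt, and 32k `num_ctx` value are illustrative assumptions, not confirmed project defaults.

```python
# Sketch: requesting a larger context window via Ollama's /api/generate endpoint.
# Assumes a stock Ollama server on localhost:11434; model name and num_ctx
# value are illustrative, not LLocalSearch defaults.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "hermes-2-pro-mistral",  # any model your VRAM can hold
    "prompt": "Summarize the search results below:\n...",
    "stream": False,
    "options": {
        # Context window in tokens; Ollama's default is much smaller (2048),
        # so long search-result contexts get truncated without this.
        "num_ctx": 32768,
    },
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Equivalently, `PARAMETER num_ctx 32768` in an Ollama Modelfile bakes the larger window into a model variant, so any client (including this project) picks it up without per-request options. Note that a 32k window also raises KV-cache memory use considerably, so it is worth checking VRAM headroom even on large cards.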