PDF/Image Upload with ollama/llama3 without external services #3293
jowin202 started this conversation in Help Wanted
Replies: 1 comment
-
I had a similar error. I fixed it using this page specifically: https://www.librechat.ai/docs/configuration/rag_api. I also made a point of installing the poppler tools and running pdftotext on the PDF, instead of trying to load the PDF in directly. It was very quick, and it worked as intended.
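The preprocessing step described in this reply can be sketched as follows. This is a minimal, hypothetical helper (the names `pdf_to_text_cmd` and `extract_text` are not from the thread), assuming poppler-utils is installed so that `pdftotext` is available on the PATH:

```python
# Sketch: convert a PDF to plain text with poppler's pdftotext before
# uploading, instead of sending the raw PDF to the RAG API.
import shutil
import subprocess
from pathlib import Path


def pdf_to_text_cmd(pdf_path: str, txt_path: str) -> list:
    """Build the pdftotext command line; -layout preserves the page layout."""
    return ["pdftotext", "-layout", str(pdf_path), str(txt_path)]


def extract_text(pdf_path: str) -> Path:
    """Run pdftotext and return the path of the generated .txt file."""
    if shutil.which("pdftotext") is None:
        raise RuntimeError(
            "pdftotext not found; install poppler-utils (e.g. apt install poppler-utils)"
        )
    out = Path(pdf_path).with_suffix(".txt")
    subprocess.run(pdf_to_text_cmd(pdf_path, out), check=True)
    return out
```

The resulting .txt file can then be uploaded to LibreChat as an ordinary text file, sidestepping PDF parsing in the RAG container entirely.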
-
Hello,
I would like to analyse my PDFs/images/text files with Ollama and Llama 3. To that end, I successfully installed a LibreChat instance with the Ollama and Llama 3 packages. Then I deactivated all external services that require API keys (Google, Bing, OpenAI, ...), so that my local Ollama instance is my only endpoint and there is no connection to the internet or to external services. Llama 3 is installed in the Ollama Docker container, along with "nomic-embed-text" from the tutorial "Use RAG with Ollama Local Embedding" at this link:
https://www.librechat.ai/docs/configuration/rag_api
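For reference, the Ollama embedding setup from that page boils down to pulling the embedding model and pointing the RAG API at the local Ollama instance. The container name `ollama` and the variable names below are assumptions based on the linked docs and may differ between LibreChat versions:

```
# pull the embedding model inside the ollama container (container name assumed)
docker exec -it ollama ollama pull nomic-embed-text

# .env additions for the rag_api container (names per the linked tutorial)
EMBEDDINGS_PROVIDER=ollama
EMBEDDINGS_MODEL=nomic-embed-text
OLLAMA_BASE_URL=http://ollama:11434
```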
Log output of the rag_api container when trying to upload a file before following the tutorial:
which is strange, because I haven't activated any OpenAI support or connection.
Log output of the rag_api container when trying to upload a file after following the tutorial:
With the .env file adjusted as in the tutorial, I get:
In both cases, the GUI shows an "Error processing file" error, and the docker logs of the LibreChat container give:
Furthermore, I enabled this in librechat.yml to allow file uploads,
and rag_api was installed successfully from the docker-compose.override.yml file, as described in the tutorial.
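For context, a minimal sketch of what the file-upload section of the LibreChat config might look like; the schema is assumed from the LibreChat documentation, and the limits shown are purely illustrative:

```yaml
# librechat.yml (sketch; field names assumed from the LibreChat docs)
fileConfig:
  endpoints:
    default:
      fileLimit: 5          # max files per request (illustrative)
      fileSizeLimit: 10     # MB per file (illustrative)
      supportedMimeTypes:
        - "image/.*"
        - "application/pdf"
```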
I have no idea why the file upload does not work. Most of the tutorials are written for Bing and OpenAI rather than for local AIs.
Thank you in advance.