This repository contains the Ollama-based logic for an open-source AI-powered search engine.
To use this implementation, you will need Python >= 3.10 installed on your system. Clone the repository and install the required Python libraries:
```shell
git clone https://github.com/tomar840/Jigyasa.git
cd Jigyasa
pip install -r requirements.txt
```
To install Ollama and start its inference server, follow the instructions in the [Ollama repository](https://github.com/ollama/ollama).
In a separate terminal, run the Ollama inference server for your model. We are using the Llama3.1-Instruct model at the moment:

```shell
ollama run llama3:8b-instruct-q8_0
```
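Once the server is running, the app can reach it over Ollama's local HTTP API (port 11434 by default). A minimal sketch using only the standard library; the `build_generate_request` and `ask` helpers are hypothetical illustrations, not part of this repository:

```python
import json
import urllib.request

# Ollama's generate endpoint; 11434 is the default port (assumes the
# server from the previous step is running locally with default settings).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for a non-streaming generate call."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return its reply."""
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama server to be running):
# print(ask("llama3:8b-instruct-q8_0", "What is an AI-powered search engine?"))
```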
Run the commands below to start the Streamlit app:

```shell
python libs/modules/route/answer.py
streamlit run ./libs/modules/route/app.py
```

Then open http://localhost:8501/ in your browser.
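If the page does not load, it can help to confirm that both servers are reachable first. A small sketch (a hypothetical helper, not part of the repository) that assumes the default ports for Ollama and Streamlit:

```python
import urllib.request
import urllib.error

def is_server_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at `url` within `timeout` seconds."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

# Example checks (assumes default ports):
# is_server_up("http://localhost:11434")  # Ollama inference server
# is_server_up("http://localhost:8501")   # Streamlit app
```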
- Tuning prompts for Llama-3.1-Instruct
Contributions to improve the project are welcome. Please follow these steps to contribute:
1. Fork the repository.
2. Create a new branch for each feature or improvement.
3. Submit a pull request with a comprehensive description of your changes.