This project uses Ollama, an open-source tool for running large language models (LLMs) locally, together with a Streamlit front end for querying open-source models such as Llama.
- Clone the repository:
  git clone https://github.com/nebulaa/Streamlit_Ollama_Chat.git
- Navigate to the project directory:
  cd Streamlit_Ollama_Chat
- Install the required dependencies:
  pip install -r requirements.txt
- Download and run the Ollama application. Ensure that Ollama is running at http://localhost:11434/, then pull the model:
  ollama pull llama2-uncensored
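Before starting the app, you can optionally confirm that Ollama and the model respond. This sketch uses Ollama's documented /api/generate endpoint; the prompt text is just an illustrative placeholder:

```python
import requests

# One-shot, non-streaming request to the local Ollama server
# (assumes the default port 11434 and the model pulled above).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2-uncensored",
        "prompt": "Reply with the single word: ready",
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the model's completion text
```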
- Run the Streamlit app:
  streamlit run chat.py
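chat.py is the repository's own implementation of the interface. For orientation, here is a minimal sketch of how a Streamlit chat front end can talk to Ollama; the structure and variable names are illustrative, not the actual contents of chat.py:

```python
import requests
import streamlit as st

st.title("Ollama Chat")

# Streamlit reruns the script on every interaction, so the
# conversation history is kept in session state.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Ask the model anything"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)

    # Send the full conversation to Ollama's /api/chat endpoint.
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama2-uncensored",
            "messages": st.session_state.messages,
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    answer = resp.json()["message"]["content"]

    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.write(answer)
```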
- Open your web browser and visit http://localhost:8501 to access the Ollama chat interface.
- Start interacting with the LLM by entering your queries and receiving responses in real time.
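The real-time feel usually comes from streaming: with "stream": true, Ollama emits newline-delimited JSON chunks that can be rendered as they arrive. A sketch of that pattern (the helper name stream_ollama is hypothetical, and st.write_stream requires a recent Streamlit release):

```python
import json
import requests

def stream_ollama(prompt: str, model: str = "llama2-uncensored"):
    """Yield pieces of the response as Ollama produces them."""
    with requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True,
        timeout=300,
    ) as resp:
        resp.raise_for_status()
        # Each non-empty line is one JSON chunk of the response.
        for line in resp.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)
            yield chunk.get("response", "")
            if chunk.get("done"):
                break

# Inside a Streamlit app, the generator can be rendered incrementally:
#   st.write_stream(stream_ollama(prompt))
```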