# OLLAMA_CHAT

A ChatGPT-like AI assistant powered by Ollama, built with a FastAPI backend and a Streamlit frontend.

## Features
- 🚀 Separate backend (FastAPI) and frontend (Streamlit) architecture
- 💬 ChatGPT-like chat interface with document context
- ⚡ Real-time streaming responses
- 🤖 Multiple Ollama model support (chat & embeddings)
- 📁 Document upload support (PDF, TXT, CSV, DOCX)
- 🔍 Context-aware answers using document content
- 📚 Multiple document processing
- ⚙️ Sidebar configuration panel with model selection
- 🔄 Session persistence
- 🌐 CORS-enabled API
- 🔌 Environment configuration support
## Prerequisites

- Python >= 3.10
- Ollama installed and running
- At least one Ollama chat model pulled (e.g., `llama3.2:latest`)
- Recommended embedding model: `nomic-embed-text:latest`
## Installation

- Clone the repository:

  ```bash
  git clone https://github.com/iammuhammadnoumankhan/OLLAMA_CHAT.git
  cd OLLAMA_CHAT
  ```

- Set up the backend:

  ```bash
  cd backend
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  pip install -r requirements.txt
  ```

- Set up the frontend:

  ```bash
  cd ../frontend
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  pip install -r requirements.txt
  ```

- Install the document processing dependencies:

  ```bash
  pip install pdfplumber docx2txt langchain-community
  ```
## Ollama Setup

```bash
# Start the Ollama service (in a separate terminal)
ollama serve

# Pull models (examples)
ollama pull llama3.2:latest
ollama pull nomic-embed-text:latest
```
## Environment Variables

Create `.env` files:

`backend/.env`:

```
OLLAMA_HOST=http://localhost:11434
```

`frontend/.env`:

```
BACKEND_URL=http://localhost:8000
OLLAMA_HOST=http://localhost:11434
```
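As a sketch of how these settings flow into the code, the services can read the variables from the environment with the README's defaults as fallbacks (the `load_settings` helper below is illustrative, not the project's actual loader; real apps often use `python-dotenv` to load the `.env` file first):

```python
import os

def load_settings() -> dict:
    """Read service endpoints from the environment, falling back to the README defaults."""
    return {
        # Hypothetical helper: variable names match the README, the rest is a sketch.
        "ollama_host": os.getenv("OLLAMA_HOST", "http://localhost:11434"),
        "backend_url": os.getenv("BACKEND_URL", "http://localhost:8000"),
    }

settings = load_settings()
print(settings)
```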
## Running the Application

- Start the backend:

  ```bash
  cd backend
  uvicorn app.main:app --reload
  ```

- Start the frontend:

  ```bash
  cd frontend
  streamlit run app.py
  ```
## Using the Application

- Upload documents via the sidebar (200 MB per-file limit)
- Select chat and embedding models from the dropdowns
- Chat normally, or ask questions about your uploaded documents
- Toggle streaming responses on or off
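For programmatic use, a request to the backend might look like the following sketch. The `/chat` route and the payload fields here are assumptions, not the project's documented API; check the FastAPI docs at `http://localhost:8000/docs` for the real schema:

```python
import json
import urllib.request

BACKEND_URL = "http://localhost:8000"

def build_chat_payload(prompt: str, model: str, stream: bool = True) -> dict:
    # Hypothetical payload shape, modeled on common chat-API conventions.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

def send_chat(prompt: str, model: str = "llama3.2:latest") -> str:
    """POST a chat request to the (assumed) /chat route and return the raw response."""
    req = urllib.request.Request(
        f"{BACKEND_URL}/chat",  # hypothetical route
        data=json.dumps(build_chat_payload(prompt, model, stream=False)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```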
## Access Points

- Frontend: http://localhost:8501
- Backend API: http://localhost:8000
- Ollama: http://localhost:11434
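A quick way to confirm all three services are reachable is a small probe script like the sketch below (FastAPI serves interactive docs at `/docs` by default, and Ollama's `/api/tags` endpoint lists pulled models; any non-server error simply reports "down"):

```python
import urllib.error
import urllib.request

def is_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if the service at `url` answers an HTTP request."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (urllib.error.URLError, OSError):
        return False

for name, url in [
    ("Frontend", "http://localhost:8501"),
    ("Backend", "http://localhost:8000/docs"),
    ("Ollama", "http://localhost:11434/api/tags"),
]:
    print(f"{name}: {'up' if is_up(url) else 'down'}")
```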
## Tech Stack

- Backend:
  - FastAPI
  - Ollama Python client
  - CORS middleware
- Frontend:
  - Streamlit
  - Requests
  - LangChain (document processing)
- AI:
  - Ollama LLMs
  - Ollama embeddings
  - Document loaders (PDF, CSV, DOCX, TXT)
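Document-context pipelines like this one typically split each uploaded file into overlapping chunks before embedding them, so that relevant passages can be retrieved at question time. A minimal plain-Python sketch of that idea (the project likely relies on LangChain's text splitters; the chunk size and overlap below are illustrative, not the project's settings):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks suitable for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Step forward by chunk_size - overlap so consecutive chunks share context.
        start += chunk_size - overlap
    return chunks
```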
## Troubleshooting

Common issues:

- Document processing errors:
  - Verify the file size is under 200 MB
  - Check the supported file formats (PDF, TXT, CSV, DOCX)
  - Ensure the required dependencies are installed
- Embedding model issues:
  - Pull the embedding model: `ollama pull nomic-embed-text`
  - Check the embedding model selection in the sidebar
- Ollama connection issues:
  - Verify that `ollama serve` is running
  - Check `OLLAMA_HOST` in the `.env` files
  - Test the connection: `curl http://localhost:11434/api/tags`
- Model not found:
  - Pull the model: `ollama pull <model-name>`
  - Refresh the model list in the frontend
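The `curl http://localhost:11434/api/tags` check returns a JSON body listing pulled models, so a "model not found" error can also be diagnosed in code. The `models`/`name` fields below follow Ollama's tags API; the helper functions themselves are just a convenience sketch:

```python
import json
import urllib.request

def model_names(tags_json: dict) -> list[str]:
    """Extract the model names from an Ollama /api/tags response body."""
    return [m["name"] for m in tags_json.get("models", [])]

def has_model(name: str, host: str = "http://localhost:11434") -> bool:
    """Check whether the Ollama server at `host` has already pulled `name`."""
    with urllib.request.urlopen(f"{host}/api/tags", timeout=5) as resp:
        return name in model_names(json.load(resp))
```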
## Contributing

Contributions are welcome! Please follow these steps:

- Fork the repository
- Create your feature branch (`git checkout -b feature/your-feature`)
- Commit your changes (`git commit -am 'Add some feature'`)
- Push to the branch (`git push origin feature/your-feature`)
- Open a Pull Request
## License

This project is licensed under the MIT License. See the LICENSE file for details.