- ddbc80f: Fix missing Brave and DuckDuckGo search tools
- 06fdd7f: Add support for LLM Hub v2
- ce2f24d: Update loaders and tools config to YAML format
- e8db041: Let user select multiple data sources (URLs, files and folders)
- 78ded9e: Add Dockerfile template
- 99e758f: Merge non-streaming and streaming template to one
- 56faee0: Added Windows e2e tests
- 60ed8fe: Added missing environment variable config for URL data source
- 60ed8fe: Fixed tool usage by freezing llama-index package versions
- 3af6328: Add support for LlamaParse using TypeScript
- dd92b91: Add fetching LLM and embedding models from server
- bac1b43: Add Milvus vector database
- edd24c2: Add observability with OpenLLMetry
- 403fc6f: Minor bug fixes to improve DX (missing .env value and updated error messages)
- 0f79757: Ability to download community submodules
- 89a49f4: Add more config variables to .env file
- fdf48dd: Add "Start in VSCode" option to postInstallAction
- fdf48dd: Add devcontainers to generated code
- 2d29350: Add LlamaParse option when selecting a PDF file or a folder (FastAPI only)
- b354f23: Add embedding model option to create-llama (FastAPI only)
- 09d532e: feat: generate Llama Pack project from LlamaIndex
- cfdd6db: feat: add Pinecone support to create-llama
- ef25d69: upgrade llama-index package to v0.10.7 for create-llama app
- 50dfd7b: update FastAPI for CVE-2024-24762
- d06a85b: Add option to create an agent by selecting tools (Google, Wikipedia)
- 7b7329b: Added latest turbo models for GPT-3.5 and GPT-4
- ba95ca3: Use condense plus context chat engine as the default for FastAPI
- c680af6: Fixed issues with locating templates path
- 6dd401e: Add an option to provide a URL and chat with the website data (FastAPI only)
- e9b87ef: Select a folder as data source and support more file types (.pdf, .doc, .docx, .xls, .xlsx, .csv)
- 27d55fd: Add an option to provide a URL and chat with the website data
- 3a29a80: Add node_modules to gitignore in Express backends
- fe03aaa: feat: generate Llama Pack example
- 88d3b41: fix packaging
- fa17f7e: Add an option that allows the user to run the generated app
- 9e5d8e1: Add an option to select a local PDF file as a data source
- a73942d: Fix: Bundle MongoDB dependency with NextJS
- 9492cc6: Feat: Added option to automatically install dependencies (for Python and TS)
- f74dea5: Feat: Show images in chat messages using GPT-4 Vision (Express and NextJS only)
- 8e124e5: feat: support showing images in chat messages
- 2e6b36e: fix: re-organize file structure
- 2b356c8: fix: incorrect relative path
- Added PostgreSQL vector store (for TypeScript and Python)
- Improved async handling in FastAPI
- 9c5e22a: Added cross-env so frontends with Express/FastAPI backends work under Windows
- 5ab65eb: Bring Python templates to feature parity with TS templates
- 9c5e22a: Added vector DB selector to create-llama (starting with MongoDB support)
- 2aeb341: Added option to create a new project based on community templates
- Added OpenAI model selector for NextJS projects
- Added GPT-4 Vision support (and file upload)
- Bugfixes (thanks @marcusschiesser)
- acfe232: Deployment fixes (thanks @seldo)
- 8cdb07f: Fix Next deployment (thanks @seldo and @marcusschiesser)
- 9f9f293: Added more to README and made it easier to switch models (thanks @seldo)
- 4431ec7: Label bug fix (thanks @marcusschiesser)
- 25257f4: Fix issue where the OpenAI key isn't found when running npm run generate (#182) (thanks @RayFernando1337)
- 031e926: Update create-llama readme (thanks @logan-markewich)
- 91b42a3: change version (thanks @marcusschiesser)
- e2a6805: Hello Create Llama (thanks @marcusschiesser)