From d97a0c53358507e55ed1a4be807e23b52c39236f Mon Sep 17 00:00:00 2001
From: Rahul Thapa
Date: Tue, 4 Jun 2024 10:49:02 -0700
Subject: [PATCH] added CLIP credit

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index f1b8f50..0ee5666 100644
--- a/README.md
+++ b/README.md
@@ -186,6 +186,7 @@ Describe the content in the image.<|eot_id|><|start_header_id|>assistant<|end_he
 We would like to acknowledge the following resources that were instrumental in the development of Dragonfly:
 
 - [META LLAMA 3](https://huggingface.co/meta-llama/Meta-Llama-3-8B): We utilized the Llama3 model as our foundational language model.
+- [CLIP](https://huggingface.co/openai/clip-vit-base-patch32): Our vision backbone is the CLIP model from OpenAI.
 - Our codebase is built upon the following two codebases:
   - [Otter: A Multi-Modal Model with In-Context Instruction Tuning](https://github.com/Luodian/Otter)
   - [LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images](https://github.com/thunlp/LLaVA-UHD)