
Commit

Merge pull request #15 from togethercomputer/rahul_dev
New Dragonfly Architecture Merge
rthapa84 authored Oct 16, 2024
2 parents bf4377f + 165f999 commit bf6ac10
Showing 18 changed files with 228 additions and 250 deletions.
179 changes: 114 additions & 65 deletions LICENSE

Large diffs are not rendered by default.

65 changes: 33 additions & 32 deletions README.md
@@ -1,32 +1,37 @@
<div align="center">
<img src="assets/dragonfly_icon.png" alt="Dragonfly" style="width: 150px; display: block; margin-left: auto; margin-right: auto;" />
<h1>Dragonfly: Multi-Resolution Zoom Supercharges Large Visual-Language Model</h1>
<h1>Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language Models</h1>
</div>

## 🔥 News
- **Note**: We have updated our codebase and arXiv paper with an improved version of the Dragonfly architecture. If you still want to use the old version of the code, it is available in the [dragonfly-v1 branch](https://github.com/togethercomputer/Dragonfly/tree/dragonfly-v1).
- [Our paper](https://arxiv.org/abs/2406.00977) is out on arxiv.
- Check out [our blogpost](https://www.together.ai/blog/dragonfly-v1).
- Our model checkpoints are out on huggingface 🤗 🚀:
- General: [`togethercomputer/Llama-3-8B-Dragonfly-v1`](https://huggingface.co/togethercomputer/Llama-3-8B-Dragonfly-v1)
- Biomed: [`togethercomputer/Llama-3-8B-Dragonfly-Med-v1`](https://huggingface.co/togethercomputer/Llama-3-8B-Dragonfly-Med-v1)
- General: [`togethercomputer/Llama-3.1-8B-Dragonfly-v2`](https://huggingface.co/togethercomputer/Llama-3.1-8B-Dragonfly-v2)
- Biomed: [`togethercomputer/Llama-3.1-8B-Dragonfly-Med-v2`](https://huggingface.co/togethercomputer/Llama-3.1-8B-Dragonfly-Med-v2)


## 📖 Introduction

![Dragonfly framework](assets/model_overview.png)

Recent advances in large multimodal models (LMMs) suggest that higher image resolution enhances the fine-grained understanding of image details, crucial for tasks such as visual commonsense reasoning and analyzing biomedical images. However, increasing input resolution poses two main challenges: 1) It extends the context length required by the language model, leading to inefficiencies and hitting the model's context limit; 2) It increases the complexity of visual features, necessitating more training data or more complex architecture. We introduce Dragonfly, a new LMM architecture that enhances fine-grained visual understanding and reasoning about image regions to address these challenges. Dragonfly employs two key strategies: multi-resolution visual encoding and zoom-in patch selection. These strategies allow the model to process high-resolution images efficiently while maintaining reasonable context length. Our experiments on eight popular benchmarks demonstrate that Dragonfly achieves competitive or better performance compared to other architectures, highlighting the effectiveness of our design. Additionally, we finetuned Dragonfly on biomedical instructions, achieving state-of-the-art results on multiple biomedical tasks requiring fine-grained visual understanding, including 92.3% accuracy on the Path-VQA dataset (compared to 83.3% for Med-Gemini) and the highest reported results on biomedical image captioning. To support model training, we curated a visual instruction-tuning dataset with 5.5 million image-instruction samples in the general domain and 1.4 million samples in the biomedical domain. We also conducted ablation studies to characterize the impact of various architectural designs and image resolutions, providing insights for future research on visual instruction alignment.
Recent advances in vision-language models (VLMs) have demonstrated the advantages of processing images at higher resolutions and utilizing multi-crop features to preserve native resolution details. However, despite these improvements, existing vision transformers (ViTs) still struggle to capture fine-grained details from less prominent objects, charts, and embedded text, limiting their effectiveness in certain tasks. In this paper, we go beyond recent high-resolution and multi-crop techniques by not only preserving the native resolution, but zooming in beyond it and extracting features from a large number of image sub-crops. This enhancement allows our model to better capture fine-grained details, overcoming the limitations of current ViTs. To manage the increased token count and computational complexity, we demonstrate that a simple mean-pooling aggregation over tokens is effective. Our model, Dragonfly, achieves competitive performance on general-domain tasks such as ScienceQA and AI2D, and excels in tasks requiring fine-grained image understanding, including TextVQA and ChartQA. Among models in the 7-8B parameter range, Dragonfly consistently ranks at the top across ten general-domain benchmarks, achieving the highest or second-highest scores in most cases, outperforming models that are significantly larger or trained on larger datasets. Our biomedical version, Dragonfly-Med, sets new benchmarks on several medical tasks, achieving 91.6% accuracy on SLAKE (compared to 84.8% for Med-Gemini), 67.1% token F1 score on Path-VQA (compared to 62.7% for Med-PaLM M), and attains state-of-the-art results across the majority of performance metrics. Overall, our work highlights the persistent challenge of engineering visual representations with fixed-resolution ViTs, and proposes a simple yet effective solution to address this issue and boost performance in both general and specialized domains.
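
To make the aggregation step concrete, below is a minimal, illustrative sketch of mean-pooling the vision tokens of each zoom-in sub-crop; the function name, tensor shapes, and dimensions are assumptions for illustration only, not the actual Dragonfly implementation.

```python
import torch

def pool_subcrop_tokens(subcrop_features: torch.Tensor) -> torch.Tensor:
    """Mean-pool the vision tokens of each sub-crop into a single token.

    subcrop_features: (num_subcrops, tokens_per_crop, hidden_dim)
    returns:          (num_subcrops, hidden_dim)

    Pooling keeps the language-model context short even when the image is
    zoomed in beyond native resolution and split into many sub-crops.
    """
    return subcrop_features.mean(dim=1)

# Illustrative shapes only: 36 zoom-in sub-crops, 576 tokens each, 1024-dim features.
features = torch.randn(36, 576, 1024)
pooled = pool_subcrop_tokens(features)
print(pooled.shape)  # torch.Size([36, 1024])
```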

![Example Generations](assets/examples.png)

# 📖 Table of Contents
1. [Installation](#installation)
2. [Checkpoint](#checkpoint)
5. [Inference](#inference)
3. [Dataset](#dataset)
4. [Training](#training)
6. [BibTeX](#bibtex)
7. [Licence](#license)

# 📖 Table of Contents
- [📖 Table of Contents](#-table-of-contents)
- [💿 Installation](#-installation)
- [🏁 Checkpoint](#-checkpoint)
- [🧠 Inference](#-inference)
- [📊 Dataset](#-dataset)
- [🏋️‍♂️ Training](#️️-training)
- [Stage 1](#stage-1)
- [Stage 2](#stage-2)
- [🏆 Credits](#-credits)
- [📚 BibTeX](#-bibtex)
- [🪪 License](#-license)

<a name="installation"/>

@@ -52,21 +57,21 @@ pip install --upgrade -e .

## 🏁 Checkpoint

*Note: These models are released under [Llama 3 Community License Agreement](LICENSE)*
*Note: These models are released under [Llama 3.1 Community License Agreement](LICENSE)*

We release two huggingface model checkpoints: [`togethercomputer/Llama-3-8B-Dragonfly-v1`](https://huggingface.co/togethercomputer/Llama-3-8B-Dragonfly-v1) and [`togethercomputer/Llama-3-8B-Dragonfly-Med-v1`](https://huggingface.co/togethercomputer/Llama-3-8B-Dragonfly-Med-v1). Please follow the script [`test_dragonfly.py`](test_dragonfly.py) for more details. We provide a brief description on how to use them below.
We release two Hugging Face model checkpoints: [`togethercomputer/Llama-3.1-8B-Dragonfly-v2`](https://huggingface.co/togethercomputer/Llama-3.1-8B-Dragonfly-v2) and [`togethercomputer/Llama-3.1-8B-Dragonfly-Med-v2`](https://huggingface.co/togethercomputer/Llama-3.1-8B-Dragonfly-Med-v2). Please refer to the script [`test_dragonfly.py`](test_dragonfly.py) for more details. We provide a brief description of how to use them below.

<a name="inference"/>

## 🧠 Inference

If you have successfully completed the [Installation](#installation) process, then you should be able to follow the steps below.

We provide two test examples inside [`test_images`](test_images).
We provide two test examples inside [`assets`](assets).

Question: Summarize the visual content of the image.
Question: What is so funny about this image?

![Skateboard](test_images/skateboard.png)
![Monalisa Dog](assets/monalisa_dog.jpg)

Load necessary packages
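
The import cell itself is collapsed in this diff (see the hunk marker below); a minimal sketch of the likely imports, inferred from the code that follows, is shown here. The `dragonfly` module paths are assumptions based on the repository's package layout.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoTokenizer

# The module paths below are assumed from the repository's src/ package layout.
from dragonfly.models.modeling_dragonfly import DragonflyForCausalLM
from dragonfly.models.processing_dragonfly import DragonflyProcessor
```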
@@ -83,26 +88,26 @@

Instantiate the tokenizer, processor, and model.
```python
device = torch.device("cuda:0")

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-3-8B-Dragonfly-v1")
clip_processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-3.1-8B-Dragonfly-v2")
clip_processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14-336")
image_processor = clip_processor.image_processor
processor = DragonflyProcessor(image_processor=image_processor, tokenizer=tokenizer, image_encoding_style="llava-hd")

model = DragonflyForCausalLM.from_pretrained("togethercomputer/Llama-3-8B-Dragonfly-v1")
model = DragonflyForCausalLM.from_pretrained("togethercomputer/Llama-3.1-8B-Dragonfly-v2")
model = model.to(torch.bfloat16)
model = model.to(device)
```

Now, let's load the image and process it.
```python
image = Image.open("./test_images/skateboard.png")
image = Image.open("./assets/monalisa_dog.jpg")
image = image.convert("RGB")
images = [image]
# images = [None] # if you do not want to pass any images

text_prompt = "<|start_header_id|>user<|end_header_id|>\n\nSummarize the visual content of the image.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
text_prompt = "<|start_header_id|>user<|end_header_id|>\n\nWhat is so funny about this image?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"

inputs = processor(text=[text_prompt], images=images, max_length=2048, return_tensors="pt", is_generate=True)
inputs = processor(text=[text_prompt], images=images, max_length=4096, return_tensors="pt", is_generate=True)
inputs = inputs.to(device)
```

@@ -118,11 +123,7 @@ generation_text = processor.batch_decode(generation_output, skip_special_tokens=False)
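
The generation cell is collapsed in this diff; a hedged sketch of the step it performs, based on the visible context line, is shown below. The generation arguments (`max_new_tokens`, `eos_token_id`) are assumptions, not the repository's exact settings.

```python
# Generation arguments here are illustrative assumptions.
with torch.inference_mode():
    generation_output = model.generate(
        **inputs,
        max_new_tokens=1024,
        eos_token_id=tokenizer.encode("<|eot_id|>", add_special_tokens=False),
    )

generation_text = processor.batch_decode(generation_output, skip_special_tokens=False)
print(generation_text[0])
```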

An example response.
```plaintext
In the heart of a vibrant skatepark, a skateboarder is caught in a moment of pure exhilaration. The skateboarder, dressed in a black t-shirt adorned with a yellow graphic and black pants, is suspended in mid-air, performing an impressive trick on a concrete ramp. The skateboarder's arms are outstretched, adding balance to the daring stunt.
The skatepark itself is a concrete playground, with the skateboarder's ramp being the main focus. In the background, palm trees sway gently, adding a touch of nature to the urban setting. A few spectators can be seen in the distance, their attention riveted on the airborne skateboarder.
The image captures not just a moment, but a story of skill, courage, and the joy of skateboarding.<|eot_id|>
The humor in this image comes from the surreal juxtaposition of a dog's face with the body of the Mona Lisa, a famous painting by Leonardo da Vinci. The Mona Lisa is known for her enigmatic smile and is often considered one of the most famous paintings in the world. By combining the dog's face with the body of the Mona Lisa, the artist has created a whimsical and amusing image that plays on the viewer 's expectations and familiarity with the original paintings. The contrast between the dog's natural, expressive features and the serene, mysterious expression of the Mona Lisa creates a humerous effect that is likely to elicit laughter<|eot_id|>
```

<a name="dataset"/>
@@ -179,7 +180,7 @@ Describe the content in the image.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
We would like to acknowledge the following resources that were instrumental in the development of Dragonfly:

- [Meta Llama 3](https://huggingface.co/meta-llama/Meta-Llama-3-8B): We utilized the Llama 3 model as our foundational language model.
- [CLIP](https://huggingface.co/openai/clip-vit-base-patch32): Our vision backbone is CLIP model from OpenAI.
- [CLIP](https://huggingface.co/openai/clip-vit-large-patch14-336): Our vision backbone is the CLIP model from OpenAI.
- Our codebase is built upon the following two codebases:
- [Otter: A Multi-Modal Model with In-Context Instruction Tuning](https://github.com/Luodian/Otter)
- [LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images](https://github.com/thunlp/LLaVA-UHD)
@@ -190,8 +191,8 @@ We would like to acknowledge the following resources that were instrumental in the development of Dragonfly:

```bibtex
@misc{chen2024dragonfly,
title={Dragonfly: Multi-Resolution Zoom Supercharges Large Visual-Language Model},
author={Kezhen Chen and Rahul Thapa and Rahul Chalamala and Ben Athiwaratkun and Shuaiwen Leon Song and James Zou},
title={Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language Models},
author={Rahul Thapa and Kezhen Chen and Ian Covert and Rahul Chalamala and Ben Athiwaratkun and Shuaiwen Leon Song and James Zou},
year={2024},
eprint={2406.00977},
archivePrefix={arXiv},
File renamed without changes
Binary file added assets/examples.png
Binary file removed assets/model_overview.pdf
Binary file not shown.
Binary file modified assets/model_overview.png
Binary file added assets/monalisa_dog.jpg
3 changes: 1 addition & 2 deletions environment.yml
@@ -2,8 +2,7 @@ name: dragonfly_env
channels:
- defaults
dependencies:
- python=3.9
- conda-forge::openjdk
- python=3.10
- pip
- pip:
- -r requirements.txt
7 changes: 0 additions & 7 deletions pipeline/data_utils/data.py
@@ -33,13 +33,6 @@
USER_AGENT = get_datasets_user_agent()

Image.MAX_IMAGE_PIXELS = 1000000000
MAX_NUM_TOKENS = 256
MAX_NUM_IMAGES = 5
TINY_IMAGE_SIZE_THRESHOLD = 1
NUM_BACKUP_SPLIT = 5000
N_CHANNELS = 3
INTERLEAVED_IMAGE_SIZE = 224
MIN_KB = 10

IMAGE_CAP_INSTRUCT = [
"Analyze the image in a comprehensive and detailed manner.",
3 changes: 3 additions & 0 deletions pipeline/train/training.py
100755 → 100644
@@ -325,6 +325,7 @@ def train_one_epoch(
total=total_training_steps,
initial=current_global_steps,
):

data_time_m.update(time.time() - end)
global_step = num_steps + current_global_steps

@@ -423,6 +424,8 @@ def mask_embedding(m):
lr_scheduler.step()
optimizer.zero_grad()

# print(f"Step 3: Beginning Step: {num_steps}; Global Step: {global_step}")

# step time and reset end outside of rank 0
step_time_m.update(time.time() - end)
end = time.time()
49 changes: 10 additions & 39 deletions requirements.txt
@@ -1,40 +1,11 @@
accelerate>=0.19.0
braceexpand>=0.1.7
einops>=0.6.1
einops_exts>=0.0.4
fastapi>=0.95.2
gradio>=3.33.1
huggingface_hub>=0.13.3
importlib_metadata>=6.6.0
inflection>=0.5.1
markdown2>=2.4.8
more_itertools>=9.1.0
nltk>=3.8.1
numpy>=1.23.5
open_clip_torch>=2.16.0
openai>=1.1.1
opencv_python_headless>=4.5.5.64
Pillow>=9.5.0
pycocoevalcap>=1
pycocotools>=2.0.6
Requests>=2.31.0
scipy>=1.10.1
timm>=0.9.2
tqdm>=4.65.0
transformers==4.35.1
uvicorn>=0.22.0
webdataset>=0.2.48
natsort>=8.4.0
peft>=0.4.0
ijson>=3.2.3
yajl>=0.3.5
deepspeed>=0.10.0
wandb>=0.15.8
trl>=0.5.0
cffi>=1.15.1
pyyaml>=6.0.1
pytest>=7.4.2
prettytable>=3.9.0
datasets
torch==2.4.1
transformers==4.45.2
datasets==3.0.1
accelerate==1.0.1
deepspeed==0.15.2
packaging
ninja
tqdm
wandb
numpy
Pillow
6 changes: 3 additions & 3 deletions setup.py
@@ -7,13 +7,13 @@

setup(
name="dragonfly",
version="0.1.0",
version="0.1.1",
packages=find_packages(where="src"),
package_dir={"": "src"},
install_requires=requirements,
author="Together AI",
author_email="kezhen@together.ai",
description="Dragonfly: Multi-Resolution Zoom Supercharges Large Visual-Language Model",
author_email="rthapa84@stanford.edu",
description="Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language Models",
long_description=open("README.md").read(),
long_description_content_type="text/markdown",
url="https://github.com/togethercomputer/Dragonfly",
