Update the KDD tutorial materials
yxdyc committed Aug 23, 2024
1 parent 613737b commit 88407d4
Showing 37 changed files with 98,733 additions and 0 deletions.
1,068 changes: 1,068 additions & 0 deletions tutorials/notebooks/1-1_OP_insights.ipynb


284 changes: 284 additions & 0 deletions tutorials/notebooks/1-2_multimodal_dataset_format.ipynb
@@ -0,0 +1,284 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data-Juicer Intermediate Dataset Format for Multimodal Datasets\n",
"\n",
"Due to the large format diversity among multimodal datasets and works, Data-Juicer proposes a novel intermediate, text-based, interleaved data format for multimodal datasets, which is based on chunk-wise formats such as the MMC4 dataset.\n",
"\n",
"In the Data-Juicer format, a multimodal sample or document is organized around its text, which consists of several text chunks. Each chunk is a semantic unit: all the multimodal information in a chunk should describe the same content and be aligned with one another.\n",
"\n",
"Here is an example multimodal sample in the Data-Juicer format.\n",
"\n",
"- It includes 4 chunks split by the special token `<|__dj__eoc|>`.\n",
"- In addition to text, there are 3 other modalities: images, audios, and videos. They are stored on disk, and their paths are listed in the corresponding first-level fields of the sample.\n",
"- Other modalities are represented as special tokens in the text (e.g. image -- `<__dj__image>`). The special tokens of each modality correspond to the paths in their order of appearance. (e.g. the two image tokens in the third chunk correspond to the antarctica_map and europe_map images respectively)\n",
"- A single chunk can contain multiple modalities and multiple modality special tokens, all semantically aligned with each other and with the text of the chunk. The position of a special token within a chunk can be arbitrary. (In general, they appear before or after the text.)\n",
"- For multimodal samples, unlike text-only samples, the computed stats for other modalities can be a list of stats, one per item of multimodal data (e.g. image_widths in this sample).\n",
"\n",
"```json\n",
"{\n",
" \"text\": \"<__dj__image> Antarctica is Earth's southernmost and least-populated continent. <|__dj__eoc|> \"\n",
" \"<__dj__video> <__dj__audio> Situated almost entirely south of the Antarctic Circle and surrounded by the \"\n",
" \"Southern Ocean (also known as the Antarctic Ocean), it contains the geographic South Pole. <|__dj__eoc|> \"\n",
" \"Antarctica is the fifth-largest continent, being about 40% larger than Europe, \"\n",
" \"and has an area of 14,200,000 km2 (5,500,000 sq mi). <__dj__image> <__dj__image> <|__dj__eoc|> \"\n",
" \"Most of Antarctica is covered by the Antarctic ice sheet, \"\n",
" \"with an average thickness of 1.9 km (1.2 mi). <|__dj__eoc|>\",\n",
" \"images\": [\n",
" \"path/to/the/image/of/antarctica_snowfield\",\n",
" \"path/to/the/image/of/antarctica_map\",\n",
" \"path/to/the/image/of/europe_map\"\n",
" ],\n",
" \"audios\": [\n",
" \"path/to/the/audio/of/sound_of_waves_in_Antarctic_Ocean\"\n",
" ],\n",
" \"videos\": [\n",
" \"path/to/the/video/of/remote_sensing_view_of_antarctica\"\n",
" ],\n",
" \"meta\": {\n",
" \"src\": \"customized\",\n",
" \"version\": \"0.1\",\n",
" \"author\": \"xxx\"\n",
" },\n",
" \"stats\": {\n",
" \"lang\": \"en\",\n",
" \"image_widths\": [224, 336, 512],\n",
" ...\n",
" }\n",
"}\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "f0ed99bb",
"metadata": {},
"source": [
"## Dataset Format Conversion Tools\n",
"\n",
"Based on this intermediate format, Data-Juicer provides dataset format conversion tools for several popular multimodal works, such as LLaVA, MMC4, WavCaps, and Video-ChatGPT.\n",
"\n",
"These tools consist of two types:\n",
"- Other format to Data-Juicer format: These tools are in the `source_format_to_data_juicer_format` directory. They convert datasets in other formats into datasets in the Data-Juicer format.\n",
"- Data-Juicer format to other format: These tools are in the `data_juicer_format_to_target_format` directory. They convert datasets in the Data-Juicer format back into datasets in the target format.\n",
"\n",
"Here we take a LLaVA-like dataset as an example to show how to convert it to the Data-Juicer intermediate format and back."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### An Example LLaVA-like Dataset"
]
},
{
"cell_type": "markdown",
"id": "68689751",
"metadata": {},
"source": [
"Below is an original sample in the LLaVA format. As we can see, each sample consists of 3 first-level fields: \"id\", \"image\", and \"conversations\". The conversation in the \"conversations\" field can contain a single turn or multiple turns. We can convert it into an interleaved image-text sample in the Data-Juicer intermediate format. Let's begin!\n",
"\n",
"First, we write this example sample to a file."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"original_llava_data = [\n",
" {\n",
" \"id\": \"000000033471\",\n",
" \"image\": \"coco/train2017/000000033471.jpg\",\n",
" \"conversations\": [\n",
" {\n",
" \"from\": \"human\",\n",
" \"value\": \"<image>\\nWhat are the colors of the bus in the image?\"\n",
" },\n",
" {\n",
" \"from\": \"gpt\",\n",
" \"value\": \"The bus in the image is white and red.\"\n",
" },\n",
" {\n",
" \"from\": \"human\",\n",
" \"value\": \"What feature can be seen on the back of the bus?\"\n",
" },\n",
" {\n",
" \"from\": \"gpt\",\n",
" \"value\": \"The back of the bus features an advertisement.\"\n",
" },\n",
" {\n",
" \"from\": \"human\",\n",
" \"value\": \"Is the bus driving down the street or pulled off to the side?\"\n",
" },\n",
" {\n",
" \"from\": \"gpt\",\n",
" \"value\": \"The bus is driving down the street, which is crowded with people and other vehicles.\"\n",
" }\n",
" ]\n",
" }\n",
"]\n",
"\n",
"with open('llava.json', 'w') as file:\n",
" file.write(json.dumps(original_llava_data, indent=2))"
]
},
{
"cell_type": "markdown",
"id": "92648b2a",
"metadata": {},
"source": [
"Now, we can convert it to the Data-Juicer format with the `llava_to_dj.py` conversion tool. For a conversation with multiple turns, all turns are merged into the same text chunk, and the image token is kept only in the first turn. For each turn, the speaker role is added before its sentence, and turns from different speakers are separated by a newline character '\\n'."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32m2024-08-06 20:06:14.032\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36m__main__\u001b[0m:\u001b[36mmain\u001b[0m:\u001b[36m161\u001b[0m - \u001b[1mLoading original LLaVA dataset.\u001b[0m\n",
"\u001b[32m2024-08-06 20:06:14.032\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36m__main__\u001b[0m:\u001b[36mmain\u001b[0m:\u001b[36m163\u001b[0m - \u001b[1mLoad [1] samples.\u001b[0m\n",
"100%|██████████████████████████████████████████| 1/1 [00:00<00:00, 19239.93it/s]\n",
"\u001b[32m2024-08-06 20:06:14.034\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36m__main__\u001b[0m:\u001b[36mmain\u001b[0m:\u001b[36m287\u001b[0m - \u001b[1mStore the target dataset into [dj.jsonl].\u001b[0m\n",
"{\n",
" \"id\": \"000000033471\",\n",
" \"text\": \"[[human]]: <image>\\nWhat are the colors of the bus in the image?\\n[[gpt]]: The bus in the image is white and red.\\n[[human]]: What feature can be seen on the back of the bus?\\n[[gpt]]: The back of the bus features an advertisement.\\n[[human]]: Is the bus driving down the street or pulled off to the side?\\n[[gpt]]: The bus is driving down the street, which is crowded with people and other vehicles. <|__dj__eoc|>\",\n",
" \"images\": [\n",
" \"coco/train2017/000000033471.jpg\"\n",
" ]\n",
"}\n"
]
}
],
"source": [
"# You can replace the tool path with the correct path in your environment.\n",
"!python ../tools/multimodal/source_format_to_data_juicer_format/llava_to_dj.py --llava_ds_path llava.json --target_ds_path dj.jsonl\n",
"# dj.jsonl is a JSON Lines file with one sample per line; since there is\n",
"# only one sample here, json.load can parse the whole file directly.\n",
"dj_data = json.load(open('dj.jsonl', 'r'))\n",
"\n",
"print(json.dumps(dj_data, indent=2))"
]
},
{
"cell_type": "markdown",
"id": "580c5ed1",
"metadata": {},
"source": [
"After processing with Data-Juicer, the dataset can be converted back into the LLaVA format and used in the LLaVA training process."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32m2024-08-06 20:06:46.638\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36m__main__\u001b[0m:\u001b[36mmain\u001b[0m:\u001b[36m149\u001b[0m - \u001b[1mStart to convert.\u001b[0m\n",
"1it [00:00, 10230.01it/s]\n",
"\u001b[32m2024-08-06 20:06:46.640\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36m__main__\u001b[0m:\u001b[36mmain\u001b[0m:\u001b[36m235\u001b[0m - \u001b[1mStart to write the converted dataset to [llava.json]...\u001b[0m\n",
"[\n",
" {\n",
" \"id\": \"000000033471\",\n",
" \"conversations\": [\n",
" {\n",
" \"from\": \"human\",\n",
" \"value\": \"<image>\\nWhat are the colors of the bus in the image?\"\n",
" },\n",
" {\n",
" \"from\": \"gpt\",\n",
" \"value\": \"The bus in the image is white and red.\"\n",
" },\n",
" {\n",
" \"from\": \"human\",\n",
" \"value\": \"What feature can be seen on the back of the bus?\"\n",
" },\n",
" {\n",
" \"from\": \"gpt\",\n",
" \"value\": \"The back of the bus features an advertisement.\"\n",
" },\n",
" {\n",
" \"from\": \"human\",\n",
" \"value\": \"Is the bus driving down the street or pulled off to the side?\"\n",
" },\n",
" {\n",
" \"from\": \"gpt\",\n",
" \"value\": \"The bus is driving down the street, which is crowded with people and other vehicles.\"\n",
" }\n",
" ],\n",
" \"image\": \"coco/train2017/000000033471.jpg\"\n",
" }\n",
"]\n"
]
}
],
"source": [
"# You can replace the tool path with the correct path in your environment.\n",
"!python ../tools/multimodal/data_juicer_format_to_target_format/dj_to_llava.py --dj_ds_path dj.jsonl --target_llava_ds_path llava.json\n",
"llava_data = json.load(open('llava.json', 'r'))\n",
"\n",
"print(json.dumps(llava_data, indent=2))"
]
},
{
"cell_type": "markdown",
"id": "e75ab8ef",
"metadata": {},
"source": [
"Finally, you can clean up the generated temporary files."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"!rm llava.json\n",
"!rm dj.jsonl"
]
},
{
"cell_type": "markdown",
"id": "8eb994bb",
"metadata": {},
"source": [
"# Conclusion\n",
"\n",
"In this notebook, we dove into the details of the Data-Juicer intermediate multimodal dataset format and learned how to convert datasets in other formats to the Data-Juicer format and vice versa, using a LLaVA-like example dataset."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
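The chunk semantics described in the notebook above (chunks split by the `<|__dj__eoc|>` token, image tokens matched to paths in order of appearance) can be sketched in a few lines. This is a hedged illustration, not the official Data-Juicer parser; the sample text is a shortened variant of the Antarctica example.

```python
# Hedged sketch of the Data-Juicer chunk semantics (not the official parser):
# split a sample's text into chunks by the end-of-chunk token, then map image
# tokens to image paths in their order of appearance across the whole text.
EOC = '<|__dj__eoc|>'
IMAGE_TOKEN = '<__dj__image>'

def map_images_to_chunks(text, image_paths):
    """Return (chunk_text, paths_for_chunk) pairs for one sample."""
    chunks = [c.strip() for c in text.split(EOC) if c.strip()]
    pairs, cursor = [], 0
    for chunk in chunks:
        n = chunk.count(IMAGE_TOKEN)  # image tokens in this chunk
        pairs.append((chunk, image_paths[cursor:cursor + n]))
        cursor += n
    return pairs

# Shortened variant of the Antarctica sample from the format description.
sample_text = (
    "<__dj__image> Antarctica is Earth's southernmost continent. <|__dj__eoc|> "
    "It contains the geographic South Pole. <|__dj__eoc|> "
    "It is about 40% larger than Europe. <__dj__image> <__dj__image> <|__dj__eoc|>"
)
paths = ['antarctica_snowfield', 'antarctica_map', 'europe_map']
for chunk, imgs in map_images_to_chunks(sample_text, paths):
    print(imgs)
```

Note that a chunk with no image token simply receives an empty path list, which is why the second chunk above maps to no image.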
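The LLaVA-to-Data-Juicer conversion step can likewise be sketched from the single input/output pair shown in the notebook: turns are merged into one text chunk, each prefixed with its speaker role and joined by newlines, ending with the end-of-chunk token. This is only a minimal reconstruction consistent with that one sample; the real `llava_to_dj.py` tool handles more options and edge cases.

```python
# Minimal sketch of the llava_to_dj.py behavior for one sample, reconstructed
# from the example input/output above; not the official implementation.
EOC = '<|__dj__eoc|>'

def llava_sample_to_dj(sample):
    # Merge all turns into one text chunk: each turn is prefixed with its
    # speaker role, turns are separated by '\n', and the chunk ends with
    # the end-of-chunk special token. The <image> token already present in
    # the first turn's value is kept as-is.
    turns = ['[[{}]]: {}'.format(t['from'], t['value'])
             for t in sample['conversations']]
    return {
        'id': sample['id'],
        'text': '\n'.join(turns) + ' ' + EOC,
        'images': [sample['image']],
    }

print(llava_sample_to_dj({
    'id': '000000033471',
    'image': 'coco/train2017/000000033471.jpg',
    'conversations': [
        {'from': 'human', 'value': '<image>\nWhat are the colors of the bus?'},
        {'from': 'gpt', 'value': 'The bus is white and red.'},
    ],
})['text'])
```

Applied to the full sample in the notebook, this reproduces the `[[human]]: ... [[gpt]]: ...` text shown in the tool's output.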