
Example

Loris Sauter edited this page Jul 4, 2024 · 19 revisions

This is a full-scale example of how to use vitrivr-engine to index a collection of images and videos, and it serves as a starting point for advanced users of vitrivr-engine. Prior knowledge of multimedia retrieval and vitrivr-engine is beneficial; however, this example aims to be usable by novices with nothing more than this tutorial.

Goals

This is a tutorial / example on how to use vitrivr-engine, aimed at users such as people with a multimedia collection who want to index it. The tutorial has three goals:

  1. Provide a quick reference for vitrivr-engine ingestion and retrieval
  2. Explain thoughts and design choices for schema, ingestion and retrieval
  3. Give a real-world example, in contrast to the other, more abstract documentation in this wiki

Why vitrivr-engine

Having a multimedia collection (videos and images, for the sake of this tutorial) is great; however, the means to explore and search within (large) collections are still rather limited. vitrivr-engine is a general-purpose content-based multimedia retrieval engine: ingestion (i.e. analysing the content and storing this information for efficient use) and retrieval (i.e. using the previously gathered information to find items of the collection) can substantially improve the understanding and usability of the collection.

Prerequisites

Reading and following the Getting Started guide is not a requirement, but it is beneficial. Additionally, reading the introduction of the Documentation wiki page is helpful.

Technical requirements are as follows:

  • JDK 21 or higher, e.g. OpenJDK
  • CottontailDB at least v0.16.5
  • The example collection consisting of CC-0 videos and images. This is arguably a small collection and a real-world multimedia collection would be significantly larger.

Setup

If no release exists, vitrivr-engine has to be built from source.

  1. Start CottontailDB on the default port 1865
  2. Build vitrivr-engine (from the root of the repository):

     Unix:

     ./gradlew distZip

     Windows:

     .\gradlew.bat distZip

  3. Unzip the distribution, e.g. unzip -d ../instance/ vitrivr-engine-module-server/build/distribution/vitrivr-engine-server-0.0.1-SNAPSHOT.zip
  4. Prepare the media data in a folder called example/media

By now, you should have the following folder structure:

+ vitrivr-engine/
|
+ instance/
  |
  + vitrivr-engine-server-0.0.1-SNAPSHOT/
    |
    + bin/
    |
    + lib/
+ example/
  |
  + media/
    |
    + images/
    |
    + videos/
    |
    - README.md
|
+ cottontaildb/
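To avoid surprises later in the tutorial, it can help to verify that this layout is actually in place. The following is a small illustrative sketch (the helper is ours, not part of vitrivr-engine):

```python
from pathlib import Path

# Hypothetical helper (not part of vitrivr-engine): checks that the folder
# layout described above exists before we continue with the tutorial.
EXPECTED = [
    "instance/vitrivr-engine-server-0.0.1-SNAPSHOT/bin",
    "instance/vitrivr-engine-server-0.0.1-SNAPSHOT/lib",
    "example/media/images",
    "example/media/videos",
]

def missing_dirs(root: str) -> list:
    """Return the expected sub-directories that do not exist under root."""
    base = Path(root)
    return [rel for rel in EXPECTED if not (base / rel).is_dir()]
```

Running `missing_dirs(".")` from the directory that contains `instance/` and `example/` should return an empty list once the setup steps above are complete.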

The cottontaildb folder is optional and might contain either the DBMS binary or the repository. We will not delve deeper into the CottontailDB setup here.

The Schema

Since we have images and videos with a rather diverse set of styles, we aim to extract as much content-based information as possible. Therefore, we set up the schema accordingly.

The schema fields in detail:

| Field | Type | Description | Module |
|---|---|---|---|
| averagecolor | Vector, length: 3 | The most basic feature, for completeness' sake | core |
| clip | Vector, length: 512 | CLIP-based dense embedding, enables textual, concept search | fes |
| file | Structural | Metadata for the file | core |
| whisper | Textual | ASR: OpenAI Whisper, deep-learning-based subtitle analysis | fes |
| ocr | Textual | OCR: text recognition for both images and videos, however for videos only on key frames | fes |
| dino | Vector, length: 384 | DINO-based dense embedding, predominantly for query-by-example | fes |
| time | Structural | Temporal metadata for time-based media (e.g. video, audio) | core |
| video | Structural | Metadata for videos, e.g. resolution, FPS, ... | core |

The fes module depends on the feature extraction server (FES), a microservice for extraction and queries using pre-trained deep learning models. There is a list of available tasks, and the README explains the setup.

For the sake of this tutorial, we assume that there is a FES instance running on the same machine, available at http://127.0.0.1:8888 (the default port, following the FES instructions).

Schema Configuration

This is the schema we use:

{
  "schemas": [
    {
      "name": "example",
      "connection": {
        "database": "CottontailConnectionProvider",
        "parameters": {
          "Host": "127.0.0.1",
          "port": "1865"
        }
      },
      "fields": [
        {
          "name": "averagecolor",
          "factory": "AverageColor"
        },
        {
          "name": "file",
          "factory": "FileSourceMetadata"
        },
        {
          "name": "clip",
          "factory": "DenseEmbedding",
          "parameters": {
            "host": "http://127.0.0.1:8888",
            "model": "open-clip-vit-b32",
            "length":"512"
          }
        },
        {
          "name": "dino",
          "factory": "DenseEmbedding",
          "parameters": {
            "host": "http://127.0.0.1:8888/",
            "model": "dino-v2-vits14",
            "length":"384"
          }
        },
        {
          "name": "whisper",
          "factory": "ASR",
          "parameters": {
            "host": "http://127.0.0.1:8888/",
            "model": "whisper"
          }
        },
        {
          "name": "ocr",
          "factory": "OCR",
          "parameters": {
            "host": "http://127.0.0.1:8888/",
            "model": "tesseract"
          }
        },
        {
          "name": "time",
          "factory": "TemporalMetadata"
        },
        {
          "name": "video",
          "factory": "VideoSourceMetadata"
        }
      ],
      "resolvers": {
        "disk": {
          "factory": "DiskResolver",
          "parameters": {
            "location": "./example/thumbs"
          }
        }
      },
      "exporters": [
        {
          "name": "thumbnail",
          "factory": "ThumbnailExporter",
          "resolverName": "disk",
          "parameters": {
            "maxSideResolution": "300",
            "mimeType": "JPG"
          }
        }
      ],
      "extractionPipelines": []
    }
  ]
}
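A stray trailing comma or a missing parameter in this file only surfaces when the engine starts, so it can help to sanity-check the schema beforehand. A minimal sketch (the helper and the set of "remote" factories are our assumptions, based on the schema above, not part of vitrivr-engine):

```python
import json

# Factories that talk to an external service and therefore need a "host"
# parameter -- an assumption derived from the example schema above.
REMOTE_FACTORIES = {"DenseEmbedding", "ASR", "OCR"}

def check_schema(path: str) -> list:
    """Return a list of human-readable problems found in the schema file."""
    with open(path) as f:
        config = json.load(f)  # fails loudly on syntax errors such as trailing commas
    problems = []
    for schema in config.get("schemas", []):
        names = [field["name"] for field in schema.get("fields", [])]
        if len(names) != len(set(names)):
            problems.append(f"duplicate field names in schema '{schema['name']}'")
        for field in schema.get("fields", []):
            if field.get("factory") in REMOTE_FACTORIES:
                if "host" not in field.get("parameters", {}):
                    problems.append(f"field '{field['name']}' is missing a 'host' parameter")
    return problems
```

Calling `check_schema("./example/schema.json")` on the file above should return an empty list.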

The Ingestion

To simplify the pipelines, it is beneficial to separate them by media type. This tutorial's collection contains images and videos, so we build two separate pipelines. Even with a shared schema, not every media type can be analysed for every field we have defined. For instance, images carry no audio, so we won't extract ASR from them.
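The split can be pictured as a simple dispatch on media type. A toy sketch (the extension lists are our assumption for illustration; vitrivr-engine's enumerators filter by media type internally):

```python
from pathlib import Path

# Toy illustration of the per-media-type split -- a mental model only,
# not the engine's actual media-type detection logic.
IMAGE_EXT = {".jpg", ".jpeg", ".png"}
VIDEO_EXT = {".mp4", ".mov", ".mkv"}

def assign_pipeline(path: str):
    """Map a media file to the pipeline that should process it."""
    suffix = Path(path).suffix.lower()
    if suffix in IMAGE_EXT:
        return "image-pipeline"
    if suffix in VIDEO_EXT:
        return "video-pipeline"
    return None  # e.g. the README.md in example/media is ignored by both
```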

Image Pipeline

The basic idea behind the image pipeline is the assumption that the FES microservice (feature-extraction-server) handles the CLIP, OCR, and DINO requests, which may take some time. In the meantime, vitrivr-engine can extract the metadata information.

%%{
  init: {
    'theme': 'base',
    'themeVariables': {
      'primaryColor': '#2D373C',
      'primaryTextColor': '#D2EBE9',
      'primaryBorderColor': '#A5D7D2',
      'lineColor': '#D20537',
      'secondaryColor': '#2D373C',
      'edgeLabelBackground': '#000'
    }
  }
}%%
flowchart LR

direction LR
 s[ ] --> e[enumerator] --> d[decoder]
 d --> a[averagecolor]
 d --> c[clip]
 d --> i[dino]
 d --> o[ocr]
 d --> t[thumbnails]
 t --> f[filter]
 a --> f[filter]
 f -->|combine| m[file]
 m --> p[persistence]
 c --> p
 i --> p
 o --> p
 p -->|combine| q[ ]

 style q fill:#0000,stroke:#0000,stroke-width:0px
 style s fill:#0000,stroke:#0000,stroke-width:0px


Video Pipeline

Video Pipeline Configuration

Store the pipeline as a JSON configuration file under example/video-pipeline.json.

{
  "schema": "example",
  "context": {
    "contentFactory": "InMemoryContentFactory",
    "resolverName":"disk",
    "local": {
      "enumerator": {
        "path": "./example/media/",
        "depth": "3"
      },
      "decoder": {
        "timeWindowMs": "30_000"
      },
      "filter": {
        "type": "SOURCE:VIDEO"
      }
    }
  },
  "operators": {
    "enumerator": {
      "type": "ENUMERATOR",
      "factory": "FileSystemEnumerator",
      "mediaTypes": ["VIDEO"]
    },
    "decoder": {
      "type": "DECODER",
      "factory": "VideoDecoder"
    },
    "selector": {
      "type": "TRANSFORMER",
      "factory": "LastContentAggregator"
    },
    "averagecolor": {
      "type": "EXTRACTOR",
      "fieldName": "averagecolor"
    },
    "clip": {
      "type": "EXTRACTOR",
      "fieldName": "clip"
    },
    "dino": {
      "type": "EXTRACTOR",
      "fieldName": "dino"
    },
    "whisper": {
      "type": "EXTRACTOR",
      "fieldName": "whisper"
    },
    "ocr": {
      "type": "EXTRACTOR",
      "fieldName": "ocr"
    },
    "meta-file": {
      "type": "EXTRACTOR",
      "fieldName": "file"
    },
    "meta-video": {
      "type": "EXTRACTOR",
      "fieldName": "video"
    },
    "meta-time": {
      "type": "EXTRACTOR",
      "fieldName": "time"
    },
    "thumbnail": {
      "type": "EXPORTER",
      "exporterName": "thumbnail"
    },
    "filter": {
      "type": "TRANSFORMER",
      "factory": "TypeFilterTransformer"
    }
  },
  "operations": {
    "stage-0-0": {"operator": "enumerator"},
    "stage-1-0": {"operator": "decoder","inputs": ["stage-0-0"]},
    "stage-2-0": {"operator": "selector","inputs": ["stage-1-0"]},
    "stage-3-0": {"operator": "clip","inputs": ["stage-2-0"]},
    "stage-3-1": {"operator": "dino","inputs": ["stage-2-0"]},
    "stage-3-2": {"operator": "whisper","inputs": ["stage-2-0"]},
    "stage-3-3": {"operator": "ocr","inputs": ["stage-2-0"]},
    "stage-3-4": {"operator": "averagecolor","inputs": ["stage-2-0"]},
    "stage-3-5": {"operator": "thumbnail","inputs": ["stage-2-0"]},
    "stage-4-0": {"operator": "filter","inputs": ["stage-3-5","stage-3-4"], "merge": "COMBINE"},
    "stage-5-0": {"operator": "meta-file", "inputs": ["stage-4-0"]},
    "stage-6-0": {"operator": "meta-video", "inputs": ["stage-5-0"]},
    "stage-7-0": {"operator": "meta-time", "inputs": ["stage-6-0"]}
  },
  "output": [
    "stage-3-0",
    "stage-3-1",
    "stage-3-2",
    "stage-3-3",
    "stage-7-0"
  ],
  "mergeType": "COMBINE"
}
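Since every stage in `operations` refers to operators and other stages by name, a typo silently breaks the pipeline graph. A small illustrative consistency check (the helper name is ours, not part of vitrivr-engine):

```python
# Illustrative check over the "operations" section of a pipeline config:
# every stage must use a defined operator, and every input must name an
# existing stage. This mirrors the structure of the JSON above.
def check_operations(operators: dict, operations: dict, output: list) -> list:
    """Return a list of dangling references in the pipeline definition."""
    problems = []
    for stage, spec in operations.items():
        if spec["operator"] not in operators:
            problems.append(f"{stage}: unknown operator '{spec['operator']}'")
        for dep in spec.get("inputs", []):
            if dep not in operations:
                problems.append(f"{stage}: unknown input stage '{dep}'")
    for stage in output:
        if stage not in operations:
            problems.append(f"output references unknown stage '{stage}'")
    return problems
```

Loading the pipeline JSON with `json.load` and passing its `operators`, `operations`, and `output` sections to this helper should return an empty list for the configuration above.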

Running the Ingestion

We run the ingestion using the shipped CLI.

Start vitrivr-engine

Let's start the CLI using the previously built executable. This also works from an IDE (be careful to select the Main from the vitrivr-engine-server module!) or directly with the JAR.

./instance/vitrivr-engine-server-0.0.1-SNAPSHOT/bin/vitrivr-engine-server ./example/schema.json

Initialise Storage Layer

Before starting the ingestion, we have to prepare the database, essentially materialising the schema.

Using the CLI, we call the schema's init command.

v> example init

Since our schema is named example, the command is as above. In case you renamed the schema, use the template <schema> init.

Start the Ingestion

Ingestion jobs are schema-dependent, and therefore the command is similar to the init above:

v> example extract -c ./example/image-pipeline.json

It is good practice to wait until a job has finished. With the default settings, a lot of log statements are printed continuously to the console. As a rule of thumb, once log messages stop appearing every now and then, the ingestion has finished (whether successfully or not should be stated in the log).

v> example extract -c ./example/video-pipeline.json

With the -c option, we provide the path to the pipeline definition we created earlier. It is important to note that these relative paths work due to our setup: by default, any path in any configuration file is resolved relative to the working directory. If you followed this tutorial, this shouldn't be a problem. Alternatively, always use absolute paths to avoid such issues.
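The resolution rule can be summarised in a few lines. A sketch of the behaviour described above (the helper is ours; `PurePosixPath` is used so the example behaves the same on every platform):

```python
from pathlib import PurePosixPath

# Illustrates how relative paths in the config files are resolved:
# against the current working directory, not against the config file.
def resolve_config_path(raw: str, workdir: str) -> str:
    """Resolve a (possibly relative) config path against a working directory."""
    p = PurePosixPath(raw)
    return str(p if p.is_absolute() else PurePosixPath(workdir) / p)
```

So `./example/thumbs` from the schema resolves to `<working directory>/example/thumbs`, which is why the tutorial assumes you launch the engine from the directory containing `example/`.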

Retrieval
