Research Report: Exploring and Building Visual Question Answering Systems using CLEVR and EasyVQA
Example 1 and Example 2: (example images not included here)
Please follow the instructions below to run our code:
- Download our best-performing model checkpoint from here and place it in the following directory (a checkpoint-loading sketch is given after these steps):
  models/
- Download the CLEVR dataset from here. We used CLEVR v1.0 Main (not CoGenT). Place the data in the following layout (a data-loading sketch is given after these steps):
  CLEVR_v1.0/
    images/
    questions/
  If you would rather try a simpler, smaller dataset first, you can use EasyVQA, which has only 13 answer classes; the code and process remain the same (a sketch listing its answer classes also follows).
- Run multimodel-clevr-public.py, which is in the project root folder (make sure all the requirements are installed first):
  python multimodel-clevr-public.py
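
A minimal sketch of loading the downloaded checkpoint, assuming it is a PyTorch checkpoint file; the filename best_model.pt below is a placeholder for whatever the download link actually provides:

```python
# Sketch only: assumes a PyTorch checkpoint placed under models/.
# "best_model.pt" is a placeholder name; use the filename from the download link.
import torch

checkpoint = torch.load("models/best_model.pt", map_location="cpu")
print(type(checkpoint))  # typically a state dict, or a dict that contains one
```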
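
For the CLEVR step, here is a short sketch of reading the v1.0 training questions and pairing each one with its image; the paths follow the layout of the official CLEVR v1.0 release shown above:

```python
# Sketch: load CLEVR v1.0 training questions and locate the matching images.
import json
import os

with open("CLEVR_v1.0/questions/CLEVR_train_questions.json") as f:
    data = json.load(f)

questions = data["questions"]
print(f"{len(questions)} training questions loaded")

q = questions[0]
image_path = os.path.join("CLEVR_v1.0/images/train", q["image_filename"])
print(q["question"], "->", q["answer"], "|", image_path)
```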
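
If you try EasyVQA instead, one convenient route (an assumption on our part, not the only option) is the easy-vqa pip package; its get_answers() helper lists the 13 answer classes mentioned above:

```python
# Sketch: assumes the easy-vqa pip package (pip install easy-vqa) and its
# get_answers() helper; adapt if you obtained the EasyVQA data another way.
from easy_vqa import get_answers

answers = get_answers()
print(len(answers), "answer classes:", answers)  # expected: 13
```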