Back Projection Generative Strategy for Low and Normal Light Image Pairs with Enhanced Statistical Fidelity and Diversity (TCE'24)
By Cheuk-Yiu Chan, Wan-Chi Siu, Yuk-Hee Chan and H. Anthony Chan

The paper can be found here. If you use this work, please cite:
@ARTICLE{10794693,
author={Chan, Cheuk-Yiu and Siu, Wan-Chi and Chan, Yuk-Hee and Chan, H. Anthony},
journal={IEEE Transactions on Consumer Electronics},
title={Back Projection Generative Strategy for Low and Normal Light Image Pairs With Enhanced Statistical Fidelity and Diversity},
year={2024},
volume={},
number={},
pages={1-1},
keywords={Diffusion models;Image enhancement;Data models;Deep learning;Image color analysis;Lighting;Colored noise;Training data;Mathematical models;Image synthesis;Low Light Image Enhancement;Image Synthesis;Generative Model;Diffusion;Data Augmentation},
doi={10.1109/TCE.2024.3516366}}
Low-light image enhancement (LLIE) using supervised deep learning is limited by the scarcity of matched low/normal-light image pairs. We propose the Back Projection Normal-to-Low Diffusion Model (N2LDiff-BP), a novel diffusion-based generative model that realistically transforms normal-light images into diverse low-light counterparts. By injecting noise perturbations over multiple timesteps, our model synthesizes low-light images with authentic noise, blur, and color distortions. We introduce innovative architectural components, including Back Projection Attention and BP ...
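To make the "noise perturbations over multiple timesteps" idea concrete, here is a minimal, hedged sketch of the standard DDPM-style forward (noising) process that diffusion models such as N2LDiff-BP build on. The function name, beta schedule, and tensor shapes are illustrative assumptions, not the repository's exact implementation.

```python
import torch

def forward_diffuse(x0: torch.Tensor, t: torch.Tensor, betas: torch.Tensor) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0): inject Gaussian noise into x0 at timestep t."""
    alphas = 1.0 - betas                        # alpha_t = 1 - beta_t
    alpha_bar = torch.cumprod(alphas, dim=0)    # cumulative product of alphas
    a_t = alpha_bar[t].view(-1, 1, 1, 1)        # broadcast over (B, C, H, W)
    noise = torch.randn_like(x0)
    return a_t.sqrt() * x0 + (1.0 - a_t).sqrt() * noise

# Example: perturb a batch of normal-light images at random timesteps.
betas = torch.linspace(1e-4, 2e-2, 1000)        # linear beta schedule (assumed)
x0 = torch.rand(4, 3, 256, 256)                 # normal-light images in [0, 1]
t = torch.randint(0, 1000, (4,))
x_t = forward_diffuse(x0, t, betas)
print(x_t.shape)
```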
The code is implemented in PyTorch. To install the required packages, run:
conda env create -f environment.yml
MAX_JOBS=4 pip install flash-attn==2.2.5 --no-build-isolation
After installing the required packages, activate the environment by running:
conda activate N2L
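Optionally, you can sanity-check the environment before proceeding; this short snippet only assumes the packages installed by the commands above.

```python
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
try:
    import flash_attn
    print("flash-attn:", flash_attn.__version__)
except ImportError:
    print("flash-attn not found; re-run the pip install step above.")
```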
We use the LOL, VE-LOL and LOL-v2 datasets for training and testing. You can download the datasets from the following links:
and put them in the data/ folder
with the following hierarchy (note that some of the folders may need renaming):
data
- LOL
  - eval15
  - our485
- LOLv2
  - Real_captured
    - Test
    - Train
- VE-LOL-L
  - VE-LOL-L-Cap-Full
    - test
    - train
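If in doubt, the following small helper (not part of the repository) checks that the expected dataset folders from the hierarchy above are in place; adjust the paths if your folders are named differently.

```python
from pathlib import Path

EXPECTED = [
    "data/LOL/eval15",
    "data/LOL/our485",
    "data/LOLv2/Real_captured/Test",
    "data/LOLv2/Real_captured/Train",
    "data/VE-LOL-L/VE-LOL-L-Cap-Full/test",
    "data/VE-LOL-L/VE-LOL-L-Cap-Full/train",
]

for p in EXPECTED:
    status = "ok" if Path(p).is_dir() else "MISSING"
    print(f"{status:8s} {p}")
```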
The pre-trained model can be downloaded from here.
It should be placed in the model/ folder
with the following hierarchy:
model
- final.ckpt
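As an optional check, the snippet below inspects the downloaded checkpoint with plain PyTorch. The printed keys depend on how the checkpoint was saved, so treat this as a generic sketch rather than part of the repository's API.

```python
import torch

# Load the checkpoint on CPU just to look inside it.
ckpt = torch.load("model/final.ckpt", map_location="cpu")
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys())[:10])
    state = ckpt.get("state_dict", ckpt)
    print("tensors in state dict:", len(state))
```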
To generate low-light images using the pre-trained model, configure the cfg/test/test_unpaired.yaml
file:
test_folder_unpaired: "PATH TO THE FOLDER TO BE TESTED"
results_folder_unpaired: './results'
and run:
python test_unpaired.py --cfg cfg/test/test_unpaired.yaml
To evaluate on the paired test sets, run:
python test.py --cfg cfg/test/test.yaml
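As a hedged example of scoring paired outputs, the sketch below computes PSNR between images written to ./results and a reference low-light folder. The output format, filenames, and the reference path data/LOL/eval15/low are assumptions and may need adjusting to match what test.py actually writes.

```python
from pathlib import Path
import numpy as np
from PIL import Image

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio for 8-bit images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(255.0 ** 2 / mse))

results = Path("./results")                 # results folder from the config
reference = Path("data/LOL/eval15/low")     # assumed ground-truth low-light folder
scores = []
for out_path in sorted(results.glob("*.png")):
    ref_path = reference / out_path.name
    if ref_path.exists():
        out = np.array(Image.open(out_path).convert("RGB"))
        ref = np.array(Image.open(ref_path).convert("RGB"))
        scores.append(psnr(out, ref))
if scores:
    print(f"mean PSNR over {len(scores)} image pairs: {np.mean(scores):.2f} dB")
```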


To train the model, run the following command:
python train.py --cfg cfg/train/train.yaml
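Before launching a long run, it can help to load the training config and confirm the dataset and checkpoint paths; this snippet only assumes cfg/train/train.yaml is a standard YAML file.

```python
import yaml

# Print every top-level option so paths and hyperparameters can be checked at a glance.
with open("cfg/train/train.yaml") as f:
    cfg = yaml.safe_load(f)
for key, value in cfg.items():
    print(f"{key}: {value}")
```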
In this paper, we have also proposed the LOL-Diff dataset. The dataset can be downloaded from here and is organized as follows:
LOLDiff
- low_res (a single 919 MB zip file)
  - train
  - test
- 4K (120 GB, split into 30 7z files)
  - train
  - test
This dataset provides a large-scale synthetic low-light dataset for training and testing, generated using N2LDiff-BP.
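For reference, a minimal PyTorch-style loader for the LOL-Diff layout above might look like the sketch below. The low/normal sub-folder names and the .png extension are assumptions about the release, so adjust them to the actual folder contents.

```python
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class LOLDiffPairs(Dataset):
    """Paired low/normal-light loader for the LOL-Diff low_res splits (sketch)."""

    def __init__(self, root="LOLDiff/low_res/train", low_dir="low", normal_dir="normal"):
        self.low_paths = sorted((Path(root) / low_dir).glob("*.png"))
        self.normal_root = Path(root) / normal_dir
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.low_paths)

    def __getitem__(self, idx):
        low_path = self.low_paths[idx]
        normal_path = self.normal_root / low_path.name   # assumes matching filenames
        low = self.to_tensor(Image.open(low_path).convert("RGB"))
        normal = self.to_tensor(Image.open(normal_path).convert("RGB"))
        return low, normal
```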
Thanks for looking into this repo! If you have any suggestions or questions, feel free to leave a message here or contact me via cy3chan@sfu.edu.hk.