Hi Mr. Hu,
When I run inference on illuminated .png or .jpg images, the illumination correction is not as good as it is for images from the mentioned datasets: the corrected output shows a pale pink tint across various samples.
Does this mean the trained model only works for linear images, and that it would need to be retrained to work on non-linear images (.jpg, .png)?
Color constancy CNN models (including this one) learn to correct both the color produced by the camera sensor (its spectral sensitivity function) and the color cast from the illumination source. A given model is therefore only usable on the type of images it was trained on, in this case unprocessed images taken with a particular camera sensor.
A universal CNN-based estimator would require a preprocessing step to remove the influence of the camera sensor. An interesting idea along these lines can be found in this paper: https://arxiv.org/pdf/1912.06888.pdf
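If you still want to experiment with JPEG/PNG inputs, one common first step is to undo the sRGB gamma encoding so the pixel values are at least approximately linear. Below is a minimal sketch of that step (the file name and the NumPy/Pillow usage are my own assumptions, not part of this repo). Note that it only removes the standard sRGB transfer curve; it cannot undo the camera's full rendering pipeline (tone mapping, sensor-specific color transforms), so the model's assumptions about the training sensor still will not hold exactly.

```python
import numpy as np
from PIL import Image

def srgb_to_linear(img_srgb: np.ndarray) -> np.ndarray:
    """Apply the inverse sRGB transfer function to an image with values in [0, 1]."""
    low = img_srgb <= 0.04045
    linear = np.empty_like(img_srgb)
    linear[low] = img_srgb[low] / 12.92
    linear[~low] = ((img_srgb[~low] + 0.055) / 1.055) ** 2.4
    return linear

# Example: load an 8-bit sRGB image ("sample.jpg" is a placeholder) and
# convert it to approximately linear RGB before running inference.
img = np.asarray(Image.open("sample.jpg").convert("RGB"), dtype=np.float32) / 255.0
img_linear = srgb_to_linear(img)
```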