A question about D_net #11
thanks for your nice work!
I trained the D_net on the DTU dataset. The training loss declined normally (avg depth error 5 mm), but on the validation dataset the loss is high (avg depth error 50 mm). That looks like overfitting. Could you tell me how to deal with this problem?

Hi, D-Net is trained to estimate the depth map in absolute scale. If the scale of the test scene differs significantly from the scale of the training scenes, the depth error is likely to be high. You can try performing some sort of scale-matching to minimize the reprojection error. MaGNet targets multi-view depth estimation tasks where the training and test scenes have similar metric scales. For example, if you need a depth estimation method for indoor scenes, you can assume that the scales of the training and test scenes will be more or less similar. Under such a scenario, it becomes useful to utilize monocular cues.

Thanks for your reply!
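For reference, the scale-matching mentioned above is often done with a simple median-ratio alignment between the predicted and reference depths before computing errors. This is a minimal sketch under that assumption; the function and variable names are illustrative and not part of the MaGNet codebase.

```python
import numpy as np

def match_scale(pred_depth, ref_depth, mask=None):
    """Rescale pred_depth so its median matches ref_depth's median.

    pred_depth, ref_depth: (H, W) depth maps in arbitrary/metric scale.
    mask: optional boolean array of valid pixels; defaults to ref_depth > 0.
    Returns the rescaled prediction (a globally scaled copy).
    """
    if mask is None:
        mask = ref_depth > 0
    # Single global scale factor from the ratio of median depths.
    scale = np.median(ref_depth[mask]) / np.median(pred_depth[mask])
    return pred_depth * scale
```

A coarser alternative is to optimize the scale directly against the multi-view reprojection error, but median matching is a common and cheap first step when the test scene's metric scale differs from the training scenes.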