
A question about D_net! #11

Open
Wuuu3511 opened this issue Sep 26, 2022 · 2 comments

Comments

Wuuu3511 commented Sep 26, 2022

Thanks for your nice work!
I trained D_net on the DTU dataset. The training loss declined normally (avg depth_error 5mm), but on the validation dataset the loss is high (avg depth_error 50mm). That looks like overfitting. Could you tell me how to deal with this problem?

baegwangbin (Owner) commented

Hi, D-Net is trained to estimate the depth map in absolute scale. If the scale of the test scene is significantly different from the scale of the training scenes, the depth error is likely to be high.

You can try performing some sort of scale-matching to minimize the reprojection error.
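As a rough illustration of that idea, here is a minimal sketch (not from the MaGNet codebase; the function names `find_depth_scale` and `warp_to_source` and the two-view setup with known intrinsics `K` and relative pose `(R, t)` are assumptions for illustration). It grid-searches a single global scale factor for a predicted depth map by minimizing the photometric reprojection error against a second view:

```python
import numpy as np

def warp_to_source(depth, K, R, t, src_img):
    # Back-project each reference pixel with `depth`, move the 3D points
    # into the source camera with (R, t), and sample the (grayscale, float)
    # source image at the projected locations. Invalid pixels become NaN.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], 0).reshape(3, -1).astype(np.float64)
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)  # 3D points, reference frame
    cam_src = R @ cam + t.reshape(3, 1)                    # 3D points, source frame
    proj = K @ cam_src
    pu = np.round(proj[0] / proj[2]).astype(int)
    pv = np.round(proj[1] / proj[2]).astype(int)
    valid = (proj[2] > 0) & (pu >= 0) & (pu < w) & (pv >= 0) & (pv < h)
    warped = np.full(h * w, np.nan)
    warped[valid] = src_img[pv[valid], pu[valid]]
    return warped.reshape(h, w)

def find_depth_scale(depth, K, R, t, ref_img, src_img,
                     scales=np.geomspace(0.1, 10.0, 100)):
    # Try a range of global scale factors and keep the one whose warped
    # source image best matches the reference image (mean absolute error
    # over valid pixels only).
    best_s, best_err = 1.0, np.inf
    for s in scales:
        warped = warp_to_source(s * depth, K, R, t, src_img)
        err = np.nanmean(np.abs(warped - ref_img))
        if err < best_err:
            best_s, best_err = s, err
    return best_s  # rescale the prediction with depth * best_s
```

If sparse ground-truth or SfM points are available instead, a simpler alternative is median scaling: multiply the predicted depth by the median ratio between the reference depths and the predictions at those points.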

MaGNet aims to tackle multi-view depth estimation tasks where the training scenes and test scenes have similar metric scales. For example, if you need a depth estimation method for indoor scenes, you can assume that the scale of the training/test scenes will more or less be similar. Under such a scenario, it becomes useful to utilize monocular cues.

Wuuu3511 (Author) commented

Thanks for your reply!
Every scene in the DTU dataset has the same depth range (425mm–900mm). Does this mean that every scene has the same scale? Is there a relationship between depth range and scale?
