
Add interface for depth in both forward rendering and backward propagation #5

Open · ingra14m wants to merge 4 commits into main from depth

Conversation

ingra14m commented Aug 2, 2023

Tested on a self-defined dataset with ground-truth depth.
[image: depth-GT]

Without depth loss:
[image: 00039]

With depth loss:
[image: 00039]

XuyangBai commented Aug 16, 2023

Shouldn't there be a second part of the gradient affected by the depth loss? The dL_dtz in L264 is also affected by dL_depths.
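For reference, a minimal PyTorch sketch of the two gradient paths in question, assuming the forward pass accumulates depth as D = sum_i alpha_i * T_i * t_z_i (the tensor names here are illustrative, not taken from this PR):

```python
import torch

alpha = torch.tensor([0.6, 0.5, 0.4], requires_grad=True)  # per-Gaussian opacities at a pixel
t_z   = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)  # per-Gaussian view-space depths

# Transmittance T_i = prod_{j<i} (1 - alpha_j)
T = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha[:-1]]), dim=0)

# Rendered depth D = sum_i alpha_i * T_i * t_z_i
depth = (alpha * T * t_z).sum()
depth.backward()

print(t_z.grad)    # first path: dD/dt_z_i = alpha_i * T_i
print(alpha.grad)  # second path: the depth loss also reaches each alpha_i
```

The nonzero `alpha.grad` is the second path being asked about: the depth loss flows not only into each Gaussian's depth but also into the blending weights, and from there back through the projection.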

ingra14m force-pushed the depth branch 2 times, most recently from 2eb32ea to 8fa430b on November 9, 2023
cv-lab-x commented

> Tested on a self-defined dataset with ground-truth depth. [image: depth-GT] Without depth loss: [image: 00039]
>
> With depth loss: [image: 00039]

Hi, thanks for your work. Have you transformed the rendered depths into 3D points? I tested your latest branch on the Mip-NeRF 360 bicycle dataset, and the rendered depths have some errors; the depths rendered from multiple views are not consistent. Looking forward to your reply, thanks! @ingra14m
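For concreteness, a minimal sketch of the consistency check described above: back-projecting a rendered depth map into world-space points. The intrinsics `K` and camera-to-world matrix `c2w` are assumed inputs for illustration, not part of this PR:

```python
import torch

def depth_to_points(depth: torch.Tensor, K: torch.Tensor, c2w: torch.Tensor) -> torch.Tensor:
    """Back-project an (H, W) depth map to world-space 3D points.

    K is a 3x3 pinhole intrinsics matrix, c2w a 4x4 camera-to-world matrix;
    both are hypothetical inputs for this sketch.
    """
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=depth.dtype),
                          torch.arange(W, dtype=depth.dtype), indexing="ij")
    x = (u - K[0, 2]) / K[0, 0] * depth   # camera-space X
    y = (v - K[1, 2]) / K[1, 1] * depth   # camera-space Y
    pts_cam = torch.stack([x, y, depth], dim=-1).reshape(-1, 3)
    # Rotate and translate into world space; consistent multi-view depths
    # should produce overlapping point clouds here.
    return pts_cam @ c2w[:3, :3].T + c2w[:3, 3]
```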

shippoT commented Feb 27, 2024

Hi, how do you get the depth map shown in "Test in self-defined dataset with GT-depth"? I rendered the same NeRF Synthetic dataset using Blender, but got a quite different result.
[image: 000_depth0000]

ingra14m (Author) commented

If I remember correctly: first, I set the parts where alpha = 0 to black.

Second, since the depth rendered directly from Blender shows deeper regions in darker colors, I used 1.0 - normalized depth. This way, the ground-truth depth can be used directly to supervise the depth output by Gaussian splatting.
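A minimal NumPy sketch of that preprocessing, under the stated assumptions (the function and array names are illustrative):

```python
import numpy as np

def preprocess_blender_depth(depth: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Normalize Blender's depth image, flip its convention, and blacken empty pixels."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    d = 1.0 - d            # Blender shows deeper regions darker; flip to match GS depth output
    d[alpha == 0] = 0.0    # set fully transparent (alpha = 0) pixels to black
    return d
```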

arcman7 commented Jun 4, 2024

> Hi, thanks for your work. Have you transformed the rendered depths into 3D points? I tested your latest branch on the Mip-NeRF 360 bicycle dataset, and the rendered depths have some errors; the depths rendered from multiple views are not consistent. Looking forward to your reply, thanks! @ingra14m

What's the status of this branch? Does including the depth information improve the splat fitting or rendering?

ingra14m (Author) commented Jun 5, 2024

Hi @arcman7, from my perspective, depth cannot improve the rendering quality of 3D Gaussian splatting; enhancing the geometry does not lead to better rendering quality.

arcman7 commented Jun 5, 2024

> Hi @arcman7, from my perspective, depth cannot improve the rendering quality of 3D Gaussian splatting; enhancing the geometry does not lead to better rendering quality.

Ah okay, thanks for the heads-up. It was looking really hopeful when I was going through all of the related discussion threads and experiments people had set up.

arcman7 commented Jun 7, 2024

Hey, by the way, I wanted to ask you a quick question, and I didn't think creating another issue on this busy repo was the way to go: is there a way to render a pretrained Gaussian model without setting up training gradient tensors in CUDA? I'm not trying to improve or modify a pre-existing Gaussian splat model; I just want to rasterize it from its various camera viewpoints using the existing PyTorch rasterization methods in this repo.
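One possible answer, sketched against the public `diff_gaussian_rasterization` API: wrapping the call in `torch.no_grad()` with inputs that don't require gradients avoids building the training graph. The `gaussians` dict keys below are assumptions for illustration:

```python
import torch
from diff_gaussian_rasterization import (
    GaussianRasterizationSettings, GaussianRasterizer,
)

@torch.no_grad()  # no autograd graph is recorded inside
def render_pretrained(gaussians: dict, settings: GaussianRasterizationSettings):
    rasterizer = GaussianRasterizer(raster_settings=settings)
    image, radii = rasterizer(
        means3D=gaussians["xyz"],                     # (N, 3) positions
        means2D=torch.zeros_like(gaussians["xyz"]),   # only needed to collect grads in training
        shs=gaussians["shs"],                         # spherical-harmonic coefficients
        colors_precomp=None,
        opacities=gaussians["opacity"],
        scales=gaussians["scaling"],
        rotations=gaussians["rotation"],
        cov3D_precomp=None,
    )
    return image
```

Note that on this depth branch the rasterizer may return additional outputs (e.g. depth), so the unpacking above may need adjusting.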

zcc00210 commented

> Tested on a self-defined dataset with ground-truth depth. Without depth loss: [image: 00039]
> With depth loss: [image: 00039]
>
> Hi, thanks for your work. Have you transformed the rendered depths into 3D points? I tested your latest branch on the Mip-NeRF 360 bicycle dataset, and the rendered depths have some errors; the depths rendered from multiple views are not consistent. Looking forward to your reply, thanks! @ingra14m

Hi, may I ask whether the depth problem of this branch has been solved? I also encountered the problem that multi-frame views cannot be aligned after being projected to 3D. @cv-lab-x

zcc00210 commented

> Tested on a self-defined dataset with ground-truth depth. Without depth loss: [image: 00039]
>
> With depth loss: [image: 00039]

Hi, may I ask why, after I use the depth visualization CUDA code under this branch, the rendered depth map output is all (255, 255, 255)? Your prompt reply will be highly appreciated! @ingra14m
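One common cause of an all-white depth image (not confirmed to be the issue here) is writing raw metric depth straight to an 8-bit image, which clips every value above 1 to 255. A hypothetical normalization before saving:

```python
import numpy as np
import imageio.v2 as imageio

def save_depth_png(depth: np.ndarray, path: str) -> None:
    """Normalize depth to [0, 1] before quantizing, so values >> 1 don't all clip to 255."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    imageio.imwrite(path, (d * 255).astype(np.uint8))
```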
