
As the number of iterations increases, the image quality decreases #96

Zhou-Yijie opened this issue Mar 28, 2024 · 5 comments

@Zhou-Yijie
I trained the Dreamshaper_v7 model following the settings in train_lcm_distill_sd_wds.py. In the early stages of training (e.g., 100 iterations), the output image looks like this:
[image: early-training sample]
But as the number of iterations increases (e.g., 1,000 iterations), the output becomes low-quality and rough, like this:
[image: later-training sample]
Training for more iterations did not fix it.
Has anyone encountered this problem?

@yuyongcan

Have you solved this?
When I tried to train the SD2.1 model on my own datasets, I found that the loss didn't converge and ended up higher than at the initial iteration.

@Tramac

Tramac commented May 15, 2024

+1

@mk322

mk322 commented Oct 27, 2024

+1, I faced the same issue on video generation.

@dragon-cao

+1, I faced the same issue. Have you solved this?

@yangzhenyu6

Hello, I also trained the Dreamshaper-v7 model following the settings of train_lcm_distill_sd_wds.py, but in the end I got pytorch_lora_weights.safetensors. How can I load it to see the results? I tried various methods but couldn't load it. How did you generate the images shown in your post?
