[Bug fix] "previous_timestep()" in DDPM scheduling compatible with "trailing" and "linspace" options #9384
Conversation
I think it indeed fixes the issue. That said:
- It does not fix DDIM or other schedulers that likely have the same mistake, such as PNDM.
- The current code around schedulers feels generally too complicated and duplicated across schedulers. Would you be interested in simplifying it (DDIM and others could inherit some functions from DDPM) and/or adding a test framework? (For another PR, maybe.)
Otherwise it's good to me for this PR.
…________________________________
From: Anand Kumar
Sent: Saturday, September 7, 2024, 09:22
To: huggingface/diffusers
Cc: RochMollero; Mention
Subject: [huggingface/diffusers] [Bug fix] "previous_timestep()" in DDPM scheduling compatible with "trailing" and "linspace" options (PR #9384)
What does this PR do?
Fixes the bug of previous_timestep() not returning the right timestep when the number of inference steps does not evenly divide 1000, under the "trailing" and "linspace" options. previous_timestep() only has to be accurate for these options during inference, and since the timesteps array is available during inference, I simply grab the value from there, as is already done for custom_timesteps. During training it remains 1 less than the current timestep.
(This is the simplest solution I could come up with; if you need the previous timestep computed for each case without the timesteps array, let me know!)
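The lookup described above can be sketched as a standalone helper (a hypothetical illustration of the approach, not the actual diffusers method):

```python
import numpy as np

def previous_timestep(timesteps: np.ndarray, t: int) -> int:
    """Return the timestep that follows t in the inference schedule.

    timesteps is the descending array produced by set_timesteps();
    the step after the last inference timestep is -1, matching DDPM's
    convention for the final denoising step.
    """
    idx = int(np.nonzero(timesteps == t)[0][0])
    return int(timesteps[idx + 1]) if idx + 1 < len(timesteps) else -1
```

For the 7-step "trailing" schedule exercised by the test script, this lookup returns 856 after 999, and -1 after the final timestep 142.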
Test Script
import torch
from diffusers import DDPMScheduler

# Initialize the DDPM scheduler
scheduler = DDPMScheduler.from_pretrained("google/ddpm-celebahq-256")

# Test params
scheduler.config.timestep_spacing = 'trailing'  # or 'linspace'
scheduler.set_timesteps(num_inference_steps=7)  # any prime number

# Set up a dummy pipeline
generator = torch.manual_seed(0)

# Initialize some dummy latents (normally these would come from your model)
latents = torch.randn((1, 3, 64, 64))

# Inference loop
for i, t in enumerate(scheduler.timesteps):
    # In a real scenario, you'd run your model here to get model_output
    model_output = torch.randn_like(latents)

    # Get the previous timestep
    prev_timestep = scheduler.previous_timestep(t)

    # Print current and previous timesteps
    print(f"Step {i}:")
    print(f"  Current timestep: {t}")
    print(f"  Previous timestep: {prev_timestep}")

    # Scheduler step
    latents = scheduler.step(model_output, t, latents, generator=generator).prev_sample

print("Test complete.")
Output
Step 0:
Current timestep: 999
Previous timestep: 856
Step 1:
Current timestep: 856
Previous timestep: 713
Step 2:
Current timestep: 713
Previous timestep: 570
Step 3:
Current timestep: 570
Previous timestep: 428
Step 4:
Current timestep: 428
Previous timestep: 285
Step 5:
Current timestep: 285
Previous timestep: 142
Step 6:
Current timestep: 142
Previous timestep: -1
Test complete.
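The printed values line up with what "trailing" spacing produces for 7 inference steps over 1000 training timesteps. A minimal sketch of that spacing (an equivalent reformulation for illustration, not the exact diffusers code):

```python
import numpy as np

num_train_timesteps = 1000
num_inference_steps = 7
step_ratio = num_train_timesteps / num_inference_steps  # ~142.86

# "trailing" spacing counts down from the last training step in equal
# fractions of the training range, then shifts to 0-based indices
timesteps = (np.arange(num_inference_steps, 0, -1) * step_ratio).round().astype(np.int64) - 1
print(timesteps)  # [999 856 713 570 428 285 142]
```

Because the gaps are ~142.86 rather than a constant integer, a fixed-offset rule along the lines of `t - num_train_timesteps // num_inference_steps` (i.e. t - 142) drifts from these rounded values (999 - 142 = 857, not 856), which is why reading the previous timestep from the timesteps array is more robust.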
Fixes #9261
Before submitting
* This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
* Did you read the contributor guideline (https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md)?
* Did you read our philosophy doc (https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md)? (Important for complex PRs.)
* Was this discussed/approved via a GitHub issue or the forum (https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63)? Please add a link to it if that's the case.
* Did you make sure to update the documentation with your changes? Here are the documentation guidelines (https://github.com/huggingface/diffusers/tree/main/docs), and here are tips on formatting docstrings (https://github.com/huggingface/diffusers/tree/main/docs#writing-source-documentation).
* Did you write any new necessary tests?
Who can review?
@yiyixuxu @RochMollero @bghira
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
________________________________
You can view, comment on, or merge this pull request online at:
#9384
Commit Summary
* d1f764a Update scheduling_ddpm.py
File Changes
(1 file: https://github.com/huggingface/diffusers/pull/9384/files)
* M src/diffusers/schedulers/scheduling_ddpm.py (8)
Patch Links:
* https://github.com/huggingface/diffusers/pull/9384.patch
* https://github.com/huggingface/diffusers/pull/9384.diff
Yeah, the change has to be propagated to the remaining schedulers. There is also a lot of duplication in the schedulers, which will need an overhaul to modularize. The maintainers can probably open issues for these.
Thanks so much for your PR! Will take a look!
I will close the PR then.
Hey! Sorry, I just noticed this. Feel free to re-open it.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Can you take a look? @hlky
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks @AnandK27! 🤗
…railing" and "linspace" options (huggingface#9384) * Update scheduling_ddpm.py * fix copies --------- Co-authored-by: YiYi Xu <[email protected]> Co-authored-by: hlky <[email protected]>
…railing" and "linspace" options (#9384) * Update scheduling_ddpm.py * fix copies --------- Co-authored-by: YiYi Xu <[email protected]> Co-authored-by: hlky <[email protected]>
What does this PR do?
Fixes the bug of
previous_timestep()
not giving the right timestep for numbers not a factor of 1000 during "trailing" and "linspace" options. The previous_timestep has to be accruate for the options only during inference and since during inference the timesteps array is available, I just grab the value from there as forcustom_timesteps
. During training it is going to be 1 less than the current timestep.(This is simplest solution I can do, if you need calculating the previous timestep for each case without the timesteps array let me know!)
Test Script
Output
Fixes #9261
Before submitting
documentation guidelines, and
here are tips on formatting docstrings.
Who can review?
@yiyixuxu @RochMollero @bghira
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.