Fix: no_grad with AMP bug #20921
Conversation
08508b6 to d18fb08
```python
return torch.autocast(
    self.device, dtype=(torch.bfloat16 if self.precision == "bf16-mixed" else torch.half), cache_enabled=False
)
```
Suggested change:

```python
dtype = torch.bfloat16 if self.precision == "bf16-mixed" else torch.half
return torch.autocast(self.device, dtype=dtype, cache_enabled=False)
```
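For context, a minimal sketch of the pattern this suggestion refactors, written as a standalone helper rather than the plugin method (the `autocast_context` name, the CPU device string, and the toy model below are placeholders, not Lightning code):

```python
import torch


def autocast_context(device: str, precision: str) -> torch.autocast:
    # "bf16-mixed" selects bfloat16; any other mixed-precision flag falls back to float16.
    dtype = torch.bfloat16 if precision == "bf16-mixed" else torch.half
    # cache_enabled=False: re-cast the fp32 weights on every forward instead of
    # reusing a cached cast that may have been created under no_grad.
    return torch.autocast(device, dtype=dtype, cache_enabled=False)


# Usage: wrap a forward pass the same way the plugin wraps the training step.
model = torch.nn.Linear(4, 4)
x = torch.randn(2, 4)
with autocast_context("cpu", "bf16-mixed"):
    y = model(x)
print(y.dtype)  # torch.bfloat16 under CPU autocast
```

The suggestion itself is cosmetic (pulling the dtype into a local variable); the behavioral change in this PR is `cache_enabled=False`.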
Then we shall report it and offer a fix in torch. BTW, have you measured the performance drop?
@Borda it is a long-standing issue in torch. But I agree with you that it should be fixed in torch. I haven't measured the performance drop since it will vary strongly across architectures and probably also hardware setups.
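For what it's worth, a rough sketch of how one could gauge the overhead of disabling the cast cache (the model, sizes, and CPU/bf16 setup below are arbitrary placeholders; as said above, real numbers will vary with architecture and hardware):

```python
import time

import torch


def timed_forwards(cache_enabled: bool, steps: int = 200) -> float:
    # Toy model: several linear layers whose fp32 weights get cast by autocast.
    model = torch.nn.Sequential(*[torch.nn.Linear(512, 512) for _ in range(8)])
    x = torch.randn(32, 512)
    start = time.perf_counter()
    # Keep a single autocast region open so the weight-cast cache (when enabled)
    # is reused across iterations rather than cleared when the region exits.
    with torch.autocast("cpu", dtype=torch.bfloat16, cache_enabled=cache_enabled):
        for _ in range(steps):
            model(x)
    return time.perf_counter() - start


print(f"cache enabled : {timed_forwards(True):.3f}s")
print(f"cache disabled: {timed_forwards(False):.3f}s")
```

The gap should mostly reflect how often the same weights are re-cast, so models with many large parameters and many forwards per autocast region are the worst case.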
Fixes #20644
Note however that this would affect performance for other users, so the question is whether it is worth optimizing for this edge case that is fundamentally a torch bug. cc @Borda
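For readers coming from #20644, here is a sketch of my reading of the underlying interaction (an illustration, not the reproducer from the issue, and the observed failure differs across torch versions): with the cast cache enabled, a no_grad forward inside an autocast region can cache a low-precision copy of the weights that carries no autograd history, and a later grad-enabled forward in the same region reuses that cached copy, so gradients never reach the fp32 parameters. Passing `cache_enabled=False` sidesteps this by re-casting the weights on every forward.

```python
import torch

model = torch.nn.Linear(8, 8)
x = torch.randn(4, 8)

with torch.autocast("cpu", dtype=torch.bfloat16, cache_enabled=True):
    with torch.no_grad():          # e.g. a validation-style forward first
        model(x)                   # a bf16 copy of the weights may be cached here
    loss = model(x).float().sum()  # the grad-enabled forward may reuse that cached copy

try:
    loss.backward()
    print("weight grad is populated:", model.weight.grad is not None)
except RuntimeError as err:
    # On affected torch versions the cached cast is detached, so the loss has
    # no autograd history and backward() raises here.
    print("backward failed:", err)
```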
📚 Documentation preview 📚: https://pytorch-lightning--20921.org.readthedocs.build/en/20921/