
Commit 79665d2

fbgheith authored and facebook-github-bot committed
adhere to lazy import rules (facebookresearch#807)
Summary:

Pull Request resolved: facebookresearch#807

Pull Request resolved: facebookresearch#806

Lazy imports change `Python` import semantics, specifically when it comes to initialization of packages/modules: https://www.internalfb.com/intern/wiki/Python/Cinder/Onboarding/Tutorial/Lazy_Imports/Troubleshooting/

For example, this pattern is not guaranteed to work:

```
import torch.optim
...
torch.optim._multi_tensor.Adam  # may fail to resolve _multi_tensor
```

And this is guaranteed to work:

```
import torch.optim._multi_tensor
...
torch.optim._multi_tensor.Adam  # will always work
```

A recent change to `PyTorch` changed module initialization logic in a way that exposed this issue. But the code has been working for years? That is the nature of undefined behavior: any change in the environment (in this case, the `PyTorch` code base) can make it fail.

Reviewed By: mannatsingh

Differential Revision: D58881291
1 parent aef8d97 commit 79665d2

File tree

1 file changed: 2 additions, 1 deletion


classy_vision/optim/adamw_mt.py

Lines changed: 2 additions & 1 deletion
```
@@ -7,6 +7,7 @@
 from typing import Any, Dict, Tuple
 
 import torch.optim
+from torch.optim import _multi_tensor
 
 from . import ClassyOptimizer, register_optimizer
 
@@ -30,7 +31,7 @@ def __init__(
         self._amsgrad = amsgrad
 
     def prepare(self, param_groups) -> None:
-        self.optimizer = torch.optim._multi_tensor.AdamW(
+        self.optimizer = _multi_tensor.AdamW(
             param_groups,
             lr=self._lr,
             betas=self._betas,
```
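
For context, a minimal self-contained sketch of the pattern this diff applies: import the `_multi_tensor` submodule explicitly so the name is bound regardless of lazy-import behavior, then construct the optimizer through that bound name. The class below is an abbreviated stand-in, not the actual `classy_vision` implementation; it omits the `ClassyOptimizer` base class, the `register_optimizer` decorator, and the remaining constructor arguments.

```
from torch.optim import _multi_tensor  # explicit submodule import; always binds the name


class AdamWMT:
    """Abbreviated stand-in for the multi-tensor AdamW wrapper in adamw_mt.py."""

    def __init__(self, lr: float = 1e-3, betas=(0.9, 0.999)) -> None:
        self._lr = lr
        self._betas = betas

    def prepare(self, param_groups) -> None:
        # Resolve AdamW through the explicitly imported submodule rather than
        # the attribute chain torch.optim._multi_tensor, which a plain
        # "import torch.optim" does not guarantee to populate under lazy imports.
        self.optimizer = _multi_tensor.AdamW(
            param_groups,
            lr=self._lr,
            betas=self._betas,
        )
```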

0 commit comments
