Multiple Improvements for mmengine #1629
Open
MGAMZ wants to merge 25 commits into open-mmlab:main from MGAMZ:contribute-250117
FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
FSDP.optim_state_dict_to_load requires the following parameters: model: Module, optim: Optimizer, optim_state_dict: Dict[str, Any]
…tions The current runner implementation does not yet support pure-Python style configurations for the model wrapper class. I follow the mainstream implementation to support this feature.
This may be due to a version conflict. Newer PyTorch may have introduced this optimizer.
This reverts commit 8f37dd2.
This reverts commit be86710.
This reverts commit 7103c3e.
This reverts commit eecaa92.
Motivation
While using the mmengine framework extensively, I fixed several subtle issues, hoping to make the project more compatible with the latest PyTorch version.
Modification
Check the `disable` parameter in the compile configuration

The current PyTorch compile configuration (see the PyTorch Compile Doc) includes a `disable` parameter. The mmengine compile implementation does not check this parameter: as long as a `compile` dict is set, mmengine always compiles the model, even when `disable` is set.
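A minimal sketch of the intended check, using a hypothetical helper rather than mmengine's actual code path:

```python
import torch
import torch.nn as nn


def maybe_compile(model: nn.Module, compile_cfg):
    """Hypothetical helper: compile only when not explicitly disabled."""
    if not compile_cfg:
        return model
    cfg = dict(compile_cfg)
    if cfg.pop('disable', False):   # honor compile=dict(disable=True)
        return model
    return torch.compile(model, **cfg)
```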
Fix optimizer state loading bug when using FSDP

The `torch.distributed.fsdp.fully_sharded_data_parallel.FullyShardedDataParallel.optim_state_dict_to_load` method requires the following parameters: `model: Module`, `optim: Optimizer`, `optim_state_dict: Dict[str, Any]`. The call in mmengine's FSDP strategy does not match this signature and causes an error.
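A sketch of a call that matches this signature, wrapped in a hypothetical helper rather than the actual strategy code:

```python
from typing import Any, Dict

from torch import nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.optim import Optimizer


def load_fsdp_optim_state(model: nn.Module, optim: Optimizer,
                          optim_state_dict: Dict[str, Any]) -> None:
    # optim_state_dict_to_load expects (model, optim, optim_state_dict) on
    # recent PyTorch releases; older argument orders fail here.
    flattened_osd = FSDP.optim_state_dict_to_load(
        model=model, optim=optim, optim_state_dict=optim_state_dict)
    optim.load_state_dict(flattened_osd)
```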
Update `GradScaler` to align with the latest PyTorch version

`from torch.cuda.amp import GradScaler` emits a PyTorch FutureWarning: this import path is deprecated in favour of `torch.amp.GradScaler('cuda', ...)`.
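A before/after sketch of the replacement suggested by the warning:

```python
# Deprecated (emits a FutureWarning on recent PyTorch):
#   from torch.cuda.amp import GradScaler
#   scaler = GradScaler()

# Replacement suggested by the warning:
from torch.amp import GradScaler

scaler = GradScaler('cuda')
```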
Update `Adafactor` to align with the latest PyTorch version

The Adafactor optimizer from transformers is now implemented in PyTorch itself, so it no longer needs to be registered via `OPTIMIZERS.register_module`.
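A minimal sketch, assuming a PyTorch version that ships `torch.optim.Adafactor` (available in recent releases):

```python
import torch
from torch import nn

model = nn.Linear(4, 2)
# The native optimizer can be used directly, without registering the
# transformers implementation in mmengine's OPTIMIZERS registry.
optimizer = torch.optim.Adafactor(model.parameters(), lr=1e-2)
```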
Add Pure-Python style config for `OptimWrapperConstructor`

The current mmengine does not support Pure-Python style configs for `OptimWrapperConstructor`.
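A hypothetical example of the Pure-Python (lazy-import) style this change enables, where the constructor class is referenced directly instead of by its registry name:

```python
from torch.optim import SGD

from mmengine.optim import DefaultOptimWrapperConstructor, OptimWrapper

optim_wrapper = dict(
    type=OptimWrapper,
    optimizer=dict(type=SGD, lr=0.01, momentum=0.9),
    # Class object instead of the string 'DefaultOptimWrapperConstructor'.
    constructor=DefaultOptimWrapperConstructor,
)
```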
Update `torch.load` to align with the latest PyTorch version

`torch.load` will require the `weights_only` parameter in the future; it currently raises warnings when the argument is not passed.
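A sketch of an explicit call (the checkpoint path is hypothetical):

```python
import torch

# Passing weights_only explicitly avoids the warning; use weights_only=True
# when the checkpoint only needs tensors and can be loaded safely.
checkpoint = torch.load('checkpoint.pth', map_location='cpu', weights_only=True)
```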
Add Pure-Python style config for `model_wrapper`

The current mmengine does not support Pure-Python style configs for `model_wrapper`.
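A hypothetical Pure-Python style snippet, where the wrapper class is referenced directly instead of by its registry name:

```python
from mmengine.model import MMDistributedDataParallel

model_wrapper_cfg = dict(
    # Class object instead of the string 'MMDistributedDataParallel'.
    type=MMDistributedDataParallel,
    find_unused_parameters=True,
)
```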
Improve the warning information in Visualization

The improvement is minor; it just adds more hints.