Extend usability of calculate_offload_device_map
#768
base: main
Conversation
In hindsight, I think I'd prefer to give users helper functions which they can use to compute their own device maps. For example:

```python
from llmcompressor import (
    hessian_memory_requirements,
    quantization_memory_requirements,
    batch_memory_requirements,
)

model_skeleton = load_model_skeleton(model_stub)
reserved_memory = (
    hessian_memory_requirements(model_skeleton)
    + quantization_memory_requirements(model_skeleton)
    + batch_memory_requirements((bs, seq_len), attention_mask=False)
    # + whatever junk or padding the user thinks is relevant
)
device_map = infer_auto_device_map(
    model_skeleton,
    max_memory=get_max_memory(reserved_memory, gpu_ids=[1, 2]),
    no_split_module_classes=model_skeleton._no_split_modules,
)
```

or, for the common case:

```python
device_map = get_uniform_device_map(model_skeleton, reserved_memory, gpu_ids=[1, 2])
```

I believe that this is a preferable user experience, as opposed to trying to hide too many things behind a function API which a user has to learn.
This is somewhat difficult to review atm.
Can you summarize what each of the helper functions you're suggesting does, and how the interface is expected to change before and after?
Generally speaking, having the helper functions is nice, but we should maintain a higher-level API that most users can just use and that does the necessary memory calculations for them. For most people right now that would not include batching memory (but we could always expand to include it).
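To make the helper-function proposal above concrete, here is a minimal sketch of what a `get_max_memory` along those lines could look like. This is hypothetical: `get_max_memory` with this signature is a name from the proposal, not the current llm-compressor API, and the sketch takes per-device capacities explicitly (rather than querying CUDA) so it stays self-contained:

```python
def get_max_memory(capacities, reserved_memory, gpu_ids=None):
    """Build an accelerate-style max_memory mapping for the chosen GPUs.

    capacities: mapping of gpu_id -> total device memory in bytes
        (e.g. torch.cuda.get_device_properties(i).total_memory).
    reserved_memory: bytes held back on every device for hessians,
        quantization scratch space, batches, padding, etc.
    gpu_ids: which devices to include; defaults to all of them.
    """
    if gpu_ids is None:
        gpu_ids = sorted(capacities)
    # Clamp at zero so an oversized reservation never yields a negative budget
    return {i: max(capacities[i] - reserved_memory, 0) for i in gpu_ids}


# Example: three 80 GiB GPUs, reserve 10 GiB each, use only GPUs 1 and 2
GiB = 1024**3
caps = {0: 80 * GiB, 1: 80 * GiB, 2: 80 * GiB}
mm = get_max_memory(caps, 10 * GiB, gpu_ids=[1, 2])
```

The resulting dict can then be passed as `max_memory` to accelerate's `infer_auto_device_map`, as in the snippet above.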
Purpose
- Allow calculate_offload_device_map to be used in environments with non-homogenous and/or non-sequential GPUs

Changes
- Default to all available GPUs when num_gpus is not specified
- Add a gpu_ids argument to allow users to choose which devices to use

Testing
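The num_gpus / gpu_ids resolution described in the Changes could be sketched as a small pure function. This is an illustrative assumption about precedence (explicit gpu_ids wins over the legacy num_gpus count, and the default is all GPUs), not the PR's actual implementation:

```python
def resolve_gpu_ids(available, num_gpus=None, gpu_ids=None):
    """Pick the device ids to offload onto.

    available: device ids visible to the process,
        e.g. list(range(torch.cuda.device_count())).
    Assumed precedence: explicit gpu_ids, then the legacy num_gpus
    count, otherwise the new default of all available GPUs.
    """
    if gpu_ids is not None:
        missing = set(gpu_ids) - set(available)
        if missing:
            raise ValueError(f"unknown gpu ids: {sorted(missing)}")
        return list(gpu_ids)
    if num_gpus is not None:
        return list(available[:num_gpus])
    return list(available)  # new behavior: default to all GPUs
```

With four visible devices, `resolve_gpu_ids([0, 1, 2, 3], gpu_ids=[1, 3])` selects a non-sequential subset, which is exactly the case the Purpose above calls out.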