
Can't set output location for model download #658

Open
keithachorn-intel opened this issue Feb 13, 2025 · 0 comments
I have a script that pulls a dataset and model for the Llama2-70b workload to my server. After the user enters the HF token, the model is expected to download to a specified location, not merely to the cache. The dataset does this successfully via the 'outdirname' parameter, but the model does not. From past experience I attempted the following steps (example invocations are sketched below), but in all cases the model remains only in the cache:

  • Setting the '--to' flag
  • Setting the '--outdirname' flag
  • Setting the environment variables LLAMA2_CHECKPOINT_PATH and CM_ML_MODEL_PATH

This is the script I'm using to pull the model: https://github.com/mlcommons/cm4mlops/tree/mlperf-inference/script/get-ml-model-llama2
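For concreteness, here is a minimal sketch of the invocations I tried. It assumes the standard `cm run script` entry point; the `get,ml-model,llama2` tags are my guess from the script's directory name, and `/data/models/llama2-70b` is a placeholder target path:

```bash
# Attempt 1: the '--to' flag
cm run script --tags=get,ml-model,llama2 --to=/data/models/llama2-70b

# Attempt 2: the '--outdirname' flag (this works for the dataset script)
cm run script --tags=get,ml-model,llama2 --outdirname=/data/models/llama2-70b

# Attempt 3: environment variables (no effect observed)
LLAMA2_CHECKPOINT_PATH=/data/models/llama2-70b \
CM_ML_MODEL_PATH=/data/models/llama2-70b \
cm run script --tags=get,ml-model,llama2

# In every case the model lands only in the CM cache
```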

Is there any way to direct the final output location?
