I have a script which pulls a dataset and model for the Llama2-70b workload to my server. After the user enters the HF token, the model is expected to download to a specified location, not merely to the CM cache. The dataset does this successfully by setting the 'outdirname' parameter, but the model does not. Based on past experience I attempted the following steps (sketched as commands after the list), but in all cases the model remains only in the cache:
Setting the '--to' flag
Setting the '--outdirname' flag
Setting these environment variables: LLAMA2_CHECKPOINT_PATH and CM_ML_MODEL_PATH
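For reference, this is roughly what each attempt looked like. A minimal sketch: the script tags are inferred from the script name (get-ml-model-llama2), and the target path /data/models/llama2-70b is a hypothetical placeholder; the flags and variables are the ones listed above.

```bash
# Attempt 1: '--to' flag (model still lands only in the CM cache)
cm run script --tags=get,ml-model,llama2 --to=/data/models/llama2-70b

# Attempt 2: '--outdirname' flag (works for the dataset script, not the model)
cm run script --tags=get,ml-model,llama2 --outdirname=/data/models/llama2-70b

# Attempt 3: environment variables set before a plain run
export LLAMA2_CHECKPOINT_PATH=/data/models/llama2-70b
export CM_ML_MODEL_PATH=/data/models/llama2-70b
cm run script --tags=get,ml-model,llama2
```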
This is the script I'm trying to pull the model with: https://github.com/mlcommons/cm4mlops/tree/mlperf-inference/script/get-ml-model-llama2
Is there any way to direct the final output location?
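As a stopgap I can locate the model inside the cache and link it out afterwards, but that defeats the purpose of the flags above. A minimal sketch of that workaround, assuming `cm show cache` lists the cached artifact and using the same hypothetical target path:

```bash
# Find the cache entry holding the downloaded model and note its path
cm show cache --tags=get,ml-model,llama2

# Symlink the cached checkpoint to the desired location
# (CACHE_PATH is the path printed above; the target path is hypothetical)
ln -s "$CACHE_PATH" /data/models/llama2-70b
```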