From 5df1ec36ed6ab3d05c41edf5056a15c8d5bbb860 Mon Sep 17 00:00:00 2001
From: Roshani Narasimhan
Date: Fri, 31 Jan 2025 13:10:43 -0800
Subject: [PATCH] Update Run_Gemma.md with correct links.

---
 end_to_end/tpu/gemma/Run_Gemma.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/end_to_end/tpu/gemma/Run_Gemma.md b/end_to_end/tpu/gemma/Run_Gemma.md
index 8fbfc94c6..d2c474d5a 100644
--- a/end_to_end/tpu/gemma/Run_Gemma.md
+++ b/end_to_end/tpu/gemma/Run_Gemma.md
@@ -19,7 +19,7 @@
 Following the instructions at [kaggle](https://www.kaggle.com/models/google/gemma/frameworks/maxText) will let you download Gemma model weights. You will have to consent to license for Gemma using your kaggle account's [API credentials](https://github.com/Kaggle/kaggle-api?tab=readme-ov-file#api-credentials).
 
-After downloading the weights run [convert_gemma_chkpt.py](../../MaxText/convert_gemma_chkpt.py), which converts the checkpoint to be compatible with MaxText and uploads them to a GCS bucket. You can run decode and finetuning using instructions mentioned in the test scripts at [end_to_end/tpu/gemma](../../end_to_end/tpu/gemma).
+After downloading the weights run [convert_gemma_chkpt.py](https://github.com/AI-Hypercomputer/maxtext/blob/main/MaxText/convert_gemma_chkpt.py), which converts the checkpoint to be compatible with MaxText and uploads them to a GCS bucket. You can run decode and finetuning using instructions mentioned in the test scripts at [end_to_end/tpu/gemma](https://github.com/AI-Hypercomputer/maxtext/tree/main/end_to_end/tpu/gemma).
 
 ## MaxText supports pretraining and finetuning with high performance