
Pre-Training Llama 3.1 on AWS Trainium using Ray and PyTorch Lightning #724

Open

sindhupalakodety (Contributor) opened this issue Jan 15, 2025 · 0 comments
Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

A tutorial that shows how to launch a distributed PyTorch Lightning (PTL) neuronx-distributed pre-training job on a Ray cluster spanning multiple Trn1 nodes within an Amazon Elastic Kubernetes Service (EKS) cluster. Many customers are looking for examples that combine these technologies (Ray + PTL + Neuron) on AWS AI accelerators.
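As a rough sketch of what the tutorial's launch step might look like, the job could be submitted to the Ray head node on EKS with the Ray Jobs CLI. The service name, namespace, and entrypoint script below are hypothetical placeholders, not names from this issue:

```shell
# Port-forward the Ray dashboard/jobs port from the head service in EKS.
# "raycluster-head-svc" and the default namespace are placeholders.
kubectl port-forward svc/raycluster-head-svc 8265:8265 &

# Submit the pre-training entrypoint via the Ray Jobs CLI.
# "pretrain_llama.py" is a hypothetical script name.
ray job submit \
  --address http://localhost:8265 \
  --working-dir . \
  -- python pretrain_llama.py
```

This requires a running Ray cluster (e.g. deployed via the KubeRay operator) and is shown only to illustrate the overall workflow the tutorial would cover.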

Describe the solution you would like

The integration of Ray, PyTorch Lightning (PTL), and AWS Neuron brings together three complementary pieces: PTL's intuitive model development API, Ray Train's distributed computing capabilities for scaling across multiple nodes, and AWS Neuron's hardware optimization for Trainium. Together they significantly simplify the setup and management of distributed training environments for large-scale AI projects, particularly computationally intensive workloads such as large language model pre-training.
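A minimal sketch of how these pieces could fit together, using Ray Train's Lightning integration. This assumes `ray[train]` and `lightning` are installed; `LlamaLightningModule` is a hypothetical LightningModule, the worker counts are illustrative, and the Trainium-specific neuronx-distributed/torch-xla configuration is omitted:

```python
import lightning.pytorch as pl
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer
from ray.train.lightning import (
    RayDDPStrategy,
    RayLightningEnvironment,
    prepare_trainer,
)

def train_func(config):
    # Hypothetical LightningModule wrapping the Llama 3.1 model definition.
    model = LlamaLightningModule(**config)
    trainer = pl.Trainer(
        max_epochs=1,
        devices="auto",
        strategy=RayDDPStrategy(),            # Ray-aware distributed strategy
        plugins=[RayLightningEnvironment()],  # wires PTL into the Ray cluster
        enable_checkpointing=False,
    )
    trainer = prepare_trainer(trainer)        # validates the Ray integration
    trainer.fit(model)

# Illustrative scaling: one worker per Trn1 node. On Trainium the per-worker
# resource would be Neuron cores rather than GPUs; the key name and count
# here are assumptions, not tested values.
scaling = ScalingConfig(
    num_workers=2,
    resources_per_worker={"neuron_cores": 32},
)
ray_trainer = TorchTrainer(train_func, scaling_config=scaling)
# ray_trainer.fit()  # launches the distributed pre-training job on the cluster
```

The `fit()` call is left commented out because this sketch only runs against a live Ray cluster with Trn1 nodes; the tutorial requested here would flesh out the Neuron compiler flags, dataloaders, and checkpointing that a real job needs.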
