
Native support for PyTorch CPU tensors #6276

Open

vhewes opened this issue Jan 29, 2025 · 0 comments
Labels
? - Needs Triage · feature request

Comments


vhewes commented Jan 29, 2025

Is your feature request related to a problem? Please describe.
cuML doesn't natively support PyTorch tensors that live on CPU. A tensor on GPU can be passed into a cuML algorithm directly because it implements __cuda_array_interface__, but a CPU tensor raises TypeError: Cannot interpret 'torch.float32' as a data type. The user can easily work around this by adding a block like

if not t.is_cuda:
    t = t.numpy()  # hand cuML a NumPy array when the tensor lives on CPU

before passing a CPU tensor t into a RAPIDS algorithm, but it requires a couple of extra lines in user code.
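
For context, a minimal sketch of the workaround in user code, assuming a cuml.KMeans estimator as a representative example (to_cuml_input is an illustrative helper name, not part of any API):

import torch
import cuml

def to_cuml_input(t):
    # cuML consumes GPU tensors directly via __cuda_array_interface__;
    # CPU tensors must be converted to NumPy arrays first.
    return t if t.is_cuda else t.numpy()

t = torch.rand(1000, 8)           # CPU tensor
km = cuml.KMeans(n_clusters=4)
km.fit(to_cuml_input(t))          # works for CPU and GPU tensors alike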

Describe the solution you'd like
Let the RAPIDS interface call .numpy() on input PyTorch CPU tensors under the hood, so the user doesn't need to do so manually.
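
A hypothetical sketch of what that coercion might look like on the library side, using duck typing so torch stays an optional dependency (_coerce_torch_cpu is an illustrative name, not an existing cuML function):

def _coerce_torch_cpu(X):
    # Hypothetical helper: detect a PyTorch tensor by duck typing
    # rather than importing torch, keeping torch an optional dependency.
    if hasattr(X, "is_cuda") and hasattr(X, "numpy"):
        if not X.is_cuda:
            # A CPU tensor exposes no __cuda_array_interface__, so hand
            # the downstream conversion path a NumPy array instead.
            # detach() guards against tensors carrying autograd history.
            return X.detach().numpy()
    return X

Running such a check early in the generic input-conversion path would let GPU tensors keep flowing through the existing __cuda_array_interface__ route untouched.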

vhewes added the ? - Needs Triage and feature request labels on Jan 29, 2025