**Is your feature request related to a problem? Please describe.**
cuML doesn't natively support PyTorch tensors on CPU. A tensor on GPU can be fed into a cuML algorithm directly because it exposes `__cuda_array_interface__`, but a tensor on CPU raises `TypeError: Cannot interpret 'torch.float32' as a data type`. The user can easily work around this by adding a block like

```python
if not t.is_cuda:
    t = t.numpy()
```

before passing a CPU tensor `t` into a RAPIDS algorithm, but it requires a couple of extra lines in user code.
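For context, here is a minimal end-to-end sketch of the workaround; `KMeans` is just an arbitrary example estimator, and any cuML algorithm that accepts array input would hit the same issue:

```python
import torch
from cuml import KMeans  # arbitrary example estimator

t = torch.randn(1000, 8)  # CPU tensor

# Workaround: hand cuML a NumPy array instead of a CPU tensor
if not t.is_cuda:
    t = t.numpy()

KMeans(n_clusters=4).fit(t)  # works; passing the CPU tensor directly raises TypeError
```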
**Describe the solution you'd like**
Let the RAPIDS interface call `.numpy()` on input PyTorch CPU tensors under the hood, so the user doesn't need to do so manually.
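A minimal sketch of what such a conversion hook could look like in the input-handling path; `normalize_input` is a hypothetical name for illustration, not an existing cuML function:

```python
def normalize_input(X):
    """Hypothetical input-normalization hook sketching the requested behavior."""
    try:
        import torch
    except ImportError:
        return X  # torch not installed; nothing to convert

    if isinstance(X, torch.Tensor) and not X.is_cuda:
        # .detach() avoids the RuntimeError raised by .numpy() on tensors
        # with requires_grad=True; .numpy() then shares memory, so no copy.
        return X.detach().numpy()
    return X  # GPU tensors pass through via __cuda_array_interface__
```

One wrinkle worth noting: `.numpy()` fails on tensors with `requires_grad=True`, so the conversion would probably need to go through `.detach()` first, as above.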