Support tensor type for XPU #96656
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/96656
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 5769957.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Note that these APIs are pretty much deprecated. You should just specify the dtype and device explicitly instead; all the per-backend `{Int,Float}Tensor` type classes are kept mainly for backward compatibility.
Yes, the new API is easier to understand. But there is still customer code using `{Int,Float}Tensor`. To align with CUDA, XPU should support this scenario for convenience, right?
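For illustration, here is the legacy per-backend type class alongside the explicitly-typed call the reviewers recommend (a minimal sketch; the XPU lines assume `intel_extension_for_pytorch` is installed and an XPU device is available):

```python
import torch

x = torch.rand(2, 3)

# Legacy style: per-backend tensor type classes (deprecated, but still
# present in existing customer code).
y_legacy = x.cuda(0).type(torch.cuda.IntTensor)

# Recommended style: state device and dtype explicitly.
y_new = x.to(device="cuda:0", dtype=torch.int32)

# This PR enables the same legacy pattern for XPU, e.g.:
#   x.to("xpu:0").type(torch.xpu.IntTensor)
```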
From asking Ed, the other use case is for typing.
So that sounds ok to add.
Would it be possible to test this by any chance?
Thanks. There are no build environments to test XPU code in PyTorch CI due to the lack of an XPU runtime. We can test it in our extension, IPEX (Intel Extension for PyTorch).
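A test along these lines could live in the IPEX test suite (a hypothetical sketch, assuming the IPEX-provided `torch.xpu` namespace with `torch.xpu.is_available`):

```python
import unittest

import torch

class TestXpuTensorType(unittest.TestCase):
    def test_type_to_xpu_int_tensor(self):
        # Skip when no XPU runtime is present (e.g. in upstream PyTorch CI).
        if not (hasattr(torch, "xpu") and torch.xpu.is_available()):
            self.skipTest("XPU runtime not available")
        x = torch.rand(2, 3).to("xpu:0").type(torch.xpu.IntTensor)
        self.assertEqual(x.dtype, torch.int32)
        self.assertEqual(x.device.type, "xpu")

if __name__ == "__main__":
    unittest.main()
```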
Ok then.
We still highly recommend that you don't use these things ;)
@pytorchbot merge
Merge failed. Reason: This PR needs a `release notes:` label. If not, please add the `topic: not user facing` label. For more information, see the merge documentation. (Details for Dev Infra team: raised by workflow job.)
Thanks, I got it.
@albanD, it failed. What should I do?
@pytorchbot merge
Merge failed. Reason: This PR needs a `release notes:` label. If not, please add the `topic: not user facing` label. For more information, see the merge documentation. (Details for Dev Infra team: raised by workflow job.)
As the message mentions, you need to add a label corresponding to the release notes category.
Note that if you can't add labels yourself, you can ask the bot to do it: |
PyTorchBot Help: Merge, Revert, Rebase, Label, Dr CI
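For example, the `label` command takes the label text as an argument, roughly `@pytorchbot label "release notes: <category>"` (the category here is a placeholder, not the label that was actually applied to this PR).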
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Thanks very much.
# Motivation
Support the tensor type scenario for XPU, like CUDA.

Like CUDA:
```python
>>> import torch
>>> torch.rand(2,3).cuda(0).type(torch.cuda.IntTensor)
tensor([[0, 0, 0],
        [0, 0, 0]], device='cuda:0', dtype=torch.int32)
```
Without this PR:
```python
>>> import torch
>>> import intel_extension_for_pytorch
>>> torch.rand(2,3).xpu('xpu:0').type(torch.xpu.IntTensor)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: invalid type: 'torch.xpu.IntTensor'
```
With this PR:
```python
>>> import torch
>>> import intel_extension_for_pytorch
>>> torch.rand(2,3).xpu('xpu:0').type(torch.xpu.IntTensor)
tensor([[0, 0, 0],
        [0, 0, 0]], device='xpu:0', dtype=torch.int32)
```

# Solution
Add `allXPUTypes` in the `type` method to parse all XPU tensor types.

# Additional
UT pass.

Pull Request resolved: pytorch/pytorch#96656
Approved by: https://github.com/albanD
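Conceptually, the fix lets the `type()` string handling recognize the `torch.xpu.*Tensor` names alongside the CPU and CUDA ones, so they resolve instead of raising `ValueError`. The actual change lives in the C++ tensor-type utilities; the Python sketch below only illustrates the kind of name-to-(device, dtype) mapping involved, with `parse_tensor_type` as a hypothetical helper:

```python
import torch

# Scalar-name -> dtype table covering the common legacy type classes.
_SCALAR_TYPES = {
    "Float": torch.float32,
    "Double": torch.float64,
    "Half": torch.float16,
    "Int": torch.int32,
    "Long": torch.int64,
    "Short": torch.int16,
    "Char": torch.int8,
    "Byte": torch.uint8,
    "Bool": torch.bool,
}

def parse_tensor_type(name: str):
    """Map a string like 'torch.xpu.IntTensor' to a (device, dtype) pair."""
    parts = name.split(".")
    if len(parts) == 2:      # e.g. "torch.IntTensor" -> CPU backend
        backend, scalar = "cpu", parts[1]
    elif len(parts) == 3:    # e.g. "torch.cuda.IntTensor", "torch.xpu.IntTensor"
        backend, scalar = parts[1], parts[2]
    else:
        raise ValueError(f"invalid type: '{name}'")
    if parts[0] != "torch" or backend not in ("cpu", "cuda", "xpu"):
        raise ValueError(f"invalid type: '{name}'")
    scalar_name = scalar[: -len("Tensor")] if scalar.endswith("Tensor") else None
    if scalar_name not in _SCALAR_TYPES:
        raise ValueError(f"invalid type: '{name}'")
    return backend, _SCALAR_TYPES[scalar_name]

# Example: parse_tensor_type("torch.xpu.IntTensor") -> ("xpu", torch.int32)
```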
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10