Support tensor type for XPU #96656


Closed

wants to merge 1 commit into from

Conversation

guangyey
Collaborator

@guangyey guangyey commented Mar 13, 2023

Motivation

Support the tensor type scenario for XPU, as CUDA already does:

```python
>>> import torch
>>> torch.rand(2,3).cuda(0).type(torch.cuda.IntTensor)
tensor([[0, 0, 0],
        [0, 0, 0]], device='cuda:0', dtype=torch.int32)
```

without this PR:

```python
>>> import torch
>>> import intel_extension_for_pytorch
>>> torch.rand(2,3).xpu('xpu:0').type(torch.xpu.IntTensor)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: invalid type: 'torch.xpu.IntTensor'
```

with this PR:

```python
>>> import torch
>>> import intel_extension_for_pytorch
>>> torch.rand(2,3).xpu('xpu:0').type(torch.xpu.IntTensor)
tensor([[0, 0, 0],
        [0, 0, 0]], device='xpu:0', dtype=torch.int32)
```

Solution

Add allXPUTypes to the type method so it parses all XPU tensor types.
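The actual change lives in PyTorch's C++ handling of `Tensor.type()`; as a rough illustration of the parsing idea, here is a hypothetical pure-Python sketch (the function name `parse_tensor_type` and the lookup tables are illustrative, not the real implementation) of how a legacy type name such as `torch.xpu.IntTensor` can be split into a backend and a dtype, with `xpu` accepted alongside `cpu` and `cuda`:

```python
# Hypothetical sketch of the parsing idea behind allXPUTypes: a legacy
# type name "torch[.backend].KindTensor" is split into a backend (device)
# and a storage kind (dtype). Names below are illustrative only.

_KIND_TO_DTYPE = {
    "Float": "float32",
    "Double": "float64",
    "Int": "int32",
    "Long": "int64",
    "Byte": "uint8",
}

_BACKENDS = {"cpu", "cuda", "xpu"}  # "xpu" is what this PR adds

def parse_tensor_type(name: str) -> tuple:
    """Parse 'torch[.backend].KindTensor' into (device, dtype)."""
    parts = name.split(".")
    if parts[0] != "torch" or not parts[-1].endswith("Tensor"):
        raise ValueError(f"invalid type: '{name}'")
    # No backend segment means the plain CPU class, e.g. torch.FloatTensor.
    backend = parts[1] if len(parts) == 3 else "cpu"
    kind = parts[-1][: -len("Tensor")]
    if backend not in _BACKENDS or kind not in _KIND_TO_DTYPE:
        raise ValueError(f"invalid type: '{name}'")
    return backend, _KIND_TO_DTYPE[kind]

print(parse_tensor_type("torch.xpu.IntTensor"))  # ('xpu', 'int32')
print(parse_tensor_type("torch.FloatTensor"))    # ('cpu', 'float32')
```

Without the `"xpu"` entry in the backend set, the XPU name falls through to the `ValueError` branch, which mirrors the failure shown above.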

Additional

Unit tests pass.

cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10

@pytorch-bot

pytorch-bot bot commented Mar 13, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/96656

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 5769957:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@albanD
Collaborator

albanD commented Mar 13, 2023

Note that these APIs are pretty much deprecated. You should just do torch.rand(2, 3, device="xpu:0", dtype=torch.int).

All the {Int,Float}Tensor classes are here only "for show" and don't have any actual meaning. The new API better reflects what is actually happening.

@guangyey
Collaborator Author

Note that these APIs are pretty much deprecated. You should just do torch.rand(2, 3, device="xpu:0", dtype=torch.int).

All the {Int,Float}Tensor classes are here only "for show" and don't have any actual meaning. The new API better reflects what is actually happening.

Yes, the new API is easier to understand, but some customer code still uses the {Int,Float}Tensor classes. To align with CUDA, XPU should support this scenario for convenience, right?
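The equivalence between the legacy class-based cast and the recommended dtype-based API can be sketched on CPU (no XPU or CUDA device assumed here; this is an illustration for readers, not part of the PR):

```python
import torch

# CPU analog of the discussion: the legacy {Int,Float}Tensor class cast
# and the modern dtype-based API produce the same tensor. The legacy
# form is kept mainly for backward compatibility.
x = torch.rand(2, 3)

legacy = x.type(torch.IntTensor)   # legacy class-based cast
modern = x.to(dtype=torch.int32)   # recommended dtype-based API

assert legacy.dtype == modern.dtype == torch.int32
assert torch.equal(legacy, modern)
print(legacy.dtype)  # torch.int32
```

Since `torch.rand` draws from [0, 1), both casts truncate every element to zero, so the two results compare equal.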

Collaborator

@albanD albanD left a comment

From asking Ed, the other use case is for typing.
So that sounds ok to add.
Would it be possible to test this by any chance?

@guangyey
Collaborator Author

From asking Ed, the other use case is for typing. So that sounds ok to add. Would it be possible to test this by any chance?

Thanks. There are no build environments to test XPU code in PyTorch CI due to the lack of an XPU runtime. We can test it in our extension, IPEX (Intel Extension for PyTorch).

Collaborator

@albanD albanD left a comment

Ok then.
We still highly recommend that you don't use these things ;)

@guangyey
Collaborator Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Mar 14, 2023
@pytorchmergebot
Collaborator

Merge failed

Reason: This PR needs a label
If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Details for Dev Infra team (raised by workflow job)

@guangyey
Collaborator Author

Ok then. We still highly recommend that you don't use these things ;)

Thanks, I got it.

@guangyey
Collaborator Author

Merge failed

Reason: This PR needs a label If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

For more information, see https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Details for Dev Infra team

@albanD, it failed, what should I do?

@jingxu10 jingxu10 added the intel This tag is for PR from Intel label Mar 14, 2023
@guangyey
Collaborator Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge failed

Reason: This PR needs a label
If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Details for Dev Infra team (raised by workflow job)

@albanD
Collaborator

albanD commented Mar 14, 2023

As the message mentions, you need to add labels corresponding to the category for the release notes.
In this case, python API and improvement sound good.

@albanD albanD added release notes: python_frontend python frontend release notes category topic: improvements topic category labels Mar 14, 2023
@albanD
Collaborator

albanD commented Mar 14, 2023

Note that if you can't add labels yourself, you can ask the bot to do it:
@pytorchbot -h

@pytorch-bot

pytorch-bot bot commented Mar 14, 2023

PyTorchBot Help

usage: @pytorchbot [-h] {merge,revert,rebase,label,drci} ...

In order to invoke the bot on your PR, include a line that starts with
@pytorchbot anywhere in a comment. That line will form the command; no
multi-line commands are allowed. 

Example:
    Some extra context, blah blah, wow this PR looks awesome

    @pytorchbot merge

optional arguments:
  -h, --help            Show this help message and exit.

command:
  {merge,revert,rebase,label,drci}
    merge               Merge a PR
    revert              Revert a PR
    rebase              Rebase a PR
    label               Add label to a PR
    drci                Update Dr. CI

Merge

usage: @pytorchbot merge [-g | -f MESSAGE | -l] [-r [{viable/strict,master}]]

Merge an accepted PR, subject to the rules in .github/merge_rules.json.
By default, this will wait for all required checks (lint, pull) to succeed before merging.

optional arguments:
  -g, --green           Merge when all status checks running on the PR pass. To add status checks, use labels like `ciflow/trunk`.
  -f MESSAGE, --force MESSAGE
                        Merge without checking anything. This requires a reason for auditing purposes, for example:
                        @pytorchbot merge -f 'Minor update to fix lint. Expecting all PR tests to pass'
  -l, --land-checks     [Deprecated - your PR instead now gets the `ciflow/trunk` label on approval] Merge with land time checks. This will create a new branch with your changes rebased on viable/strict and run a majority of trunk tests _before_ landing to increase trunk reliability and decrease risk of revert. The tests added are: pull, Lint and trunk. Note that periodic is excluded.
  -r [{viable/strict,master}], --rebase [{viable/strict,master}]
                        Rebase the PR to re run checks before merging.  Accepts viable/strict or master as branch options and will default to viable/strict if not specified.

Revert

usage: @pytorchbot revert -m MESSAGE -c
                          {nosignal,ignoredsignal,landrace,weird,ghfirst}

Revert a merged PR. This requires that you are a Meta employee.

Example:
  @pytorchbot revert -m="This is breaking tests on trunk. hud.pytorch.org/" -c=nosignal

optional arguments:
  -m MESSAGE, --message MESSAGE
                        The reason you are reverting, will be put in the commit message. Must be longer than 3 words.
  -c {nosignal,ignoredsignal,landrace,weird,ghfirst}, --classification {nosignal,ignoredsignal,landrace,weird,ghfirst}
                        A machine-friendly classification of the revert reason.

Rebase

usage: @pytorchbot rebase [-s | -b BRANCH]

Rebase a PR. Rebasing defaults to the stable viable/strict branch of pytorch.
You must have write permissions to the repo to rebase a PR.

optional arguments:
  -s, --stable          [DEPRECATED] Rebase onto viable/strict
  -b BRANCH, --branch BRANCH
                        Branch you would like to rebase to

Label

usage: @pytorchbot label labels [labels ...]

Adds label to a PR

positional arguments:
  labels  Labels to add to given Pull Request

Dr CI

usage: @pytorchbot drci

Update Dr. CI. Updates the Dr. CI comment on the PR in case it's gotten out of sync with actual CI results.

@albanD
Collaborator

albanD commented Mar 14, 2023

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

@guangyey
Collaborator Author

Note that if you can't add labels yourself, you can ask the bot to do it: @pytorchbot -h

Thanks very much.

cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 23, 2023
# Motivate
To support tensor type scenario for XPU.
like CUDA:
```python
>>> import torch
>>> torch.rand(2,3).cuda(0).type(torch.cuda.IntTensor)
tensor([[0, 0, 0],
        [0, 0, 0]], device='cuda:0', dtype=torch.int32)
```
without this PR:
```python
>>> import torch
>>> import intel_extension_for_pytorch
>>> torch.rand(2,3).xpu('xpu:0').type(torch.xpu.IntTensor)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: invalid type: 'torch.xpu.IntTensor'
```
with this PR:
```python
>>> import torch
>>> import intel_extension_for_pytorch
>>> torch.rand(2,3).xpu('xpu:0').type(torch.xpu.IntTensor)
tensor([[0, 0, 0],
        [0, 0, 0]], device='xpu:0', dtype=torch.int32)
```

# Solution
Add allXPUTypes in type method to parse all xpu tensor type

# Additional
UT pass.

Pull Request resolved: pytorch/pytorch#96656
Approved by: https://github.com/albanD
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 27, 2023 (same commit message as above)
Labels
ciflow/trunk Trigger trunk jobs on your pull request intel This tag is for PR from Intel Merged open source release notes: python_frontend python frontend release notes category topic: improvements topic category
5 participants