
torch.autograd.grad RuntimeError: XLA tensors do not have storage #2576

Closed

Description

@jpmailoa

Bug

torch.autograd.grad raises RuntimeError: XLA tensors do not have storage

To Reproduce

The torch.autograd.grad function raises an unexpected error in the PyTorch nightly build. The same code used to run fine on the torch_xla docker image gcr.io/tpu-pytorch/xla:r1.6 (pulled on Sep 10, 2020). The error appeared after I installed the nightly docker image gcr.io/tpu-pytorch/xla:nightly_3.6 today (Oct 26, 2020).

import torch, torch_xla
import torch_xla.core.xla_model as xm
xla_dev = xm.xla_device()
x1 = torch.Tensor([[1.,2.],[3.,4.]]).to(xla_dev, non_blocking=True)
x2 = torch.Tensor([[1.,2.],[3.,4.]])
x1.requires_grad = True
x2.requires_grad = True
y1 = torch.split((x1**3).view(-1), 1)
y2 = torch.split((x2**3).view(-1), 1)
z2 = torch.autograd.grad(y2,x2,create_graph=True)[0]    # this one works just fine
z1 = torch.autograd.grad(y1,x1,create_graph=True)[0]    # this one will run into an error

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/autograd/__init__.py", line 216, in grad
    inputs, allow_unused)
RuntimeError: torch_xla/csrc/tensor_impl.cpp:144 : XLA tensors do not have storage

Steps to reproduce the behavior:

  1. Use the gcr.io/tpu-pytorch/xla:nightly_3.6 docker image, pulled on Oct 26, 2020
  2. Run the code snippet above

Expected behavior

torch.autograd.grad runs without error and returns the gradient, as it does on CPU.
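
For reference, since d/dx x**3 = 3*x**2, both z1 and z2 should equal 3 times the squared input:

# Expected gradient for x = [[1., 2.], [3., 4.]] (matches the CPU result z2):
expected = 3 * x2.detach() ** 2
# tensor([[ 3., 12.],
#         [27., 48.]])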

Environment

  • Reproducible on XLA backend [CPU/TPU]: TPU v2-8, pytorch_nightly
  • torch_xla version: 1.6+82d5850
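
The versions above can be confirmed from inside the container (assuming the usual __version__ attributes; the torch_xla one includes the git hash):

import torch
import torch_xla

# Print the installed versions; torch_xla reports e.g. 1.6+82d5850.
print(torch.__version__)
print(torch_xla.__version__)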

Additional context
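
A possible workaround sketch, untested on the nightly image and only my assumption: the failure occurs when the outputs passed to torch.autograd.grad are the tuple of split views, so requesting the same gradient from a single scalar output may avoid the code path that touches tensor storage. Summing is mathematically equivalent here, because autograd sums the per-output gradients when each one-element split gets an implicit grad_output of 1.

# Hypothetical workaround (assumption, not a confirmed fix):
# compute the gradient from one scalar output instead of the split views.
loss1 = (x1 ** 3).sum()
z1_alt = torch.autograd.grad(loss1, x1, create_graph=True)[0]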
