Assert that inputs are contiguous #418
Conversation
Test added. |
@jaybdub, any other tests / checks you want me to throw in? |
Just cloned / validated. Looks good to me! |
Thanks @alsrgv, this should save some others some frustration! Feel free to reach out again if you run into any issues or want any features in the future. Best, |
Thanks! Oh, I do. I'd like to add a couple of things:
We'll send PRs for these as we get to them - do you have any feedback / direction before we start coding? |
These both sound like nice features!

For the layer naming implementation, I think it may work to wrap the TensorRT network, but I haven't considered it fully:

```python
class ContextNetwork():
    ...

    def add_convolution(self, *args, **kwargs):
        # Delegate to the wrapped TensorRT network, then name the new layer
        layer = self.network.add_convolution(*args, **kwargs)
        layer.name = self.ctx.method_str + ...  # or something similar, with input/output shapes, layer count, etc.
        return layer
```

In this case, we could replace the plain network with this wrapper.

As for the layer/debug naming convention: I'm curious also whether it's possible to get the method's module scope. If so, then maybe something like the following is possible for debugging.
For example, consider the following model, where the call to normalize expands into many TRT layers:

```python
class Head():
    def forward(self, x):
        x = F.conv2d(x, ...)
        return F.normalize(x, p=1)


class Model():
    ...
    def forward(self, x):
        ...
        return self.head(x)
```

Would log:
This reads kind of long, but I think it uniquely identifies the layer in its context. For naming, perhaps the TensorRT name / args could be omitted.
Let me know if this makes sense to you. Getting the scope seems a bit more challenging, so if it's not that important for your use cases, I think it could be omitted and added later. |
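One way the module scope could plausibly be recovered is with `named_modules()` plus forward pre/post hooks. The sketch below is only an illustration of that idea, not torch2trt code; the `ScopeTracker` name and the monkeypatched `F.normalize` used to demonstrate it are assumptions for the demo.

```python
import functools
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScopeTracker:
    """Hypothetical helper (not part of torch2trt): tracks the qualified
    module scope (e.g. "head") of whatever is currently executing, using
    forward pre/post hooks registered on every submodule."""

    def __init__(self, model):
        self.stack = []
        self.handles = []
        for name, module in model.named_modules():
            if name == "":
                continue  # skip the unnamed root module
            self.handles.append(module.register_forward_pre_hook(
                functools.partial(self._enter, scope=name)))
            self.handles.append(module.register_forward_hook(self._exit))

    def _enter(self, module, inputs, scope):
        self.stack.append(scope)

    def _exit(self, module, inputs, outputs):
        self.stack.pop()  # returning None leaves the module output untouched

    @property
    def scope(self):
        return ".".join(self.stack) if self.stack else "<root>"

    def remove(self):
        for handle in self.handles:
            handle.remove()


class Head(nn.Module):
    def forward(self, x):
        return F.normalize(x, p=1)


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = Head()

    def forward(self, x):
        return self.head(x)


model = Model()
tracker = ScopeTracker(model)

# Sketch of how a converter could use the scope: wrap F.normalize and print
# the scope a TensorRT layer created at that point would be tagged with.
_orig_normalize = F.normalize

def traced_normalize(*args, **kwargs):
    print("converting normalize inside scope:", tracker.scope)  # prints "head"
    return _orig_normalize(*args, **kwargs)

F.normalize = traced_normalize
model(torch.randn(1, 8))
F.normalize = _orig_normalize
tracker.remove()
```

A converter could then prefix its TensorRT layer names with `tracker.scope`, which is what would make a log line unique to the layer's place in the model.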
* Assert that inputs are contiguous
* Turn non-contiguous tensors into contiguous
* Add unit test
* Fix tabs
@jaybdub, I've spent the last 24 hours figuring out why PyTorch and TensorRT predictions differed, and it ended up being caused by a strided tensor that was not accounted for. With this change I'd like to ensure this doesn't happen again.
Can you suggest where I should add tests for it?
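A minimal sketch of the kind of check this PR describes, using plain PyTorch; the helper name `ensure_contiguous` and the permute example are illustrative and not the exact code in the PR:

```python
import torch


def ensure_contiguous(*tensors):
    """Return contiguous copies of any strided (non-contiguous) inputs,
    leaving already-contiguous tensors untouched."""
    return tuple(t if t.is_contiguous() else t.contiguous() for t in tensors)


# A permute produces a strided view; passing such a tensor through unchanged
# is the kind of mismatch between PyTorch and TensorRT described above.
x = torch.randn(2, 3, 4, 5).permute(0, 2, 3, 1)
assert not x.is_contiguous()

(x_fixed,) = ensure_contiguous(x)
assert x_fixed.is_contiguous()
assert torch.equal(x, x_fixed)  # same values, just a contiguous memory layout
```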