Use caffe2::int8::Int8TensorCPU when input type is uint8_t #12274


Status: Closed (wanted to merge 2 commits)

Conversation

jspark1105 (Contributor)
Summary: We use caffe2::int8::Int8TensorCPU for quantized tensors with uint8_t element type.

Differential Revision: D10156452

Differential Revision: D9846488

fbshipit-source-id: 9c935b3ff106b45d5e341e3adb8478be5480bc25
…2274)

Summary:
Pull Request resolved: pytorch#12274

We use caffe2::int8::Int8TensorCPU for quantized tensors with uint8_t element type.

Differential Revision: D10156452

fbshipit-source-id: 8edc1bbc7de8c04751dbb28343c679fa5fa36bfc
zdevito pushed a commit to zdevito/ATen that referenced this pull request Oct 4, 2018
Summary:
Pull Request resolved: pytorch/pytorch#12274

We use caffe2::int8::Int8TensorCPU for quantized tensors with uint8_t element type.

Reviewed By: llyfacebook

Differential Revision: D10156452

fbshipit-source-id: 52cf2bedc9dbb433cd5d03f0b76723f7df6a7361
@ezyang ezyang added the merged label Jun 26, 2019