[BUG] RuntimeError: Numpy is not available #1403
Comments
@davidray222 Thanks for the report. I think your torch version 2.2 may be too old. Or maybe I broke something in main. Will check soon!
@davidray222 I think you have a broken NumPy package install. Try running the following Python code in your env:

```python
import torch

t = torch.tensor([1, 2, 3], dtype=torch.int32)
print(t.numpy())
```

If you get the same error, I suggest you uninstall numpy and re-install it.
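It can also help to check NumPy on its own before involving torch, since a broken install usually fails before torch enters the picture. A minimal sanity check (plain Python, no GPU required; nothing here is specific to gptqmodel):

```python
# Check that numpy imports cleanly and basic array ops work.
import numpy as np

print("numpy version:", np.__version__)

# Round-trip a small array to confirm basic functionality.
a = np.array([1, 2, 3], dtype=np.int32)
assert a.tolist() == [1, 2, 3]
print("numpy basic check OK")
```

If this fails too, the problem is the NumPy install itself rather than the torch↔NumPy bridge.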
@davidray222 Also, please use release version 2.0 if possible. There may be other bugs in the main branch.
@Qubitium I updated the software versions. My steps: I use this code, and the output is always the same error. I would like to ask if I made a mistake somewhere. Thank you!!
I am getting a similar error. The traceback (an `ImportError` chained into a `RuntimeError`) ends with:

```
RuntimeError: Failed to import transformers.models.auto.tokenization_auto because of the following error (look up to see its traceback):
```
My library versions:

```
Name: gptqmodel
Version: 2.0.0
---
Name: torch
Version: 2.6.0+cu124
---
Name: transformers
Version: 4.49.0
---
Name: accelerate
Version: 1.3.0
---
Name: triton
Version: 3.2.0
---
Name: numpy
Version: 2.2.4
```

When I downgrade numpy to 2.2.2, I get:

```
ImportError: cannot import name 'SUPPORTED_MODELS' from 'gptqmodel.models._const' (/usr/local/lib/python3.11/dist-packages/gptqmodel/models/_const.py)
```
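For reports like this, the whole version list can be gathered in one go with the standard library. This is a small helper I'm suggesting, not part of gptqmodel; the package names are just the ones listed above:

```python
# Print installed versions of the packages relevant to this issue.
from importlib.metadata import PackageNotFoundError, version

for pkg in ["gptqmodel", "torch", "transformers", "accelerate", "triton", "numpy"]:
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```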
Describe the bug
I used the code from "/GPTQModel/examples/quantization/basic_usage_autoround.py" to quantize deepseek-ai/DeepSeek-R1-Distill-Llama-8B and Qwen/QwQ-32B, but I encountered the same issue in both cases.
GPU Info
NVIDIA A6000
Software Info
CUDA Version: 12.8
My code:
Thank you!!