Description
I downloaded the Meta-Llama-3-8B model following the steps provided by Meta, which left me with a 14 GB .pth file. When I try to convert it with convert.py, the script fails with:

RuntimeError: Internal: could not parse ModelProto from H:\Downloads\llama3-main\Meta-Llama-3-8B\tokenizer.model
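I can reproduce what looks like the same failure outside convert.py. A minimal sketch, assuming convert.py's default vocab path loads tokenizer.model through the sentencepiece library (the path is the one from my report):

```python
# Minimal reproduction of the parse failure, independent of convert.py.
# Assumption: the default vocab path parses tokenizer.model as a
# SentencePiece protobuf. Llama 3 ships a tiktoken-style BPE file under
# that name instead, so the load raises the same
# "could not parse ModelProto" RuntimeError.
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load(r"H:\Downloads\llama3-main\Meta-Llama-3-8B\tokenizer.model")
```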
But when I add --vocab-type bpe, it instead fails with:

FileNotFoundError: Could not find a tokenizer matching any of ['bpe']
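My guess (not verified against the convert.py source) is that with --vocab-type bpe the script searches for a GPT-2-style BPE vocab file such as vocab.json next to the weights, and the raw Meta download ships only tokenizer.model. A quick check of what the download directory actually contains:

```python
# Hedged sanity check: list the files convert.py can see in the model
# directory. Assumption: the bpe vocab loader expects something like
# vocab.json or tokenizer.json; if neither is present, that would
# explain the FileNotFoundError above.
from pathlib import Path

model_dir = Path(r"H:\Downloads\llama3-main\Meta-Llama-3-8B")
print(sorted(p.name for p in model_dir.iterdir()))
```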