Cannot convert llama3 8b model to gguf #7021

Closed
@Bedoshady

Description

Please include information about your system, the steps to reproduce the bug, and the version of llama.cpp that you are using. If possible, please provide a minimal code example that reproduces the bug.

I downloaded the model from Meta following the steps provided, and I have a 14 GB .pth file. When I try to convert the model using convert.py, it fails with: RuntimeError: Internal: could not parse ModelProto from H:\Downloads\llama3-main\Meta-Llama-3-8B\tokenizer.model. But when I add --vocab-type bpe, it instead gives: FileNotFoundError: Could not find a tokenizer matching any of ['bpe']

If the bug concerns the server, please try to reproduce it first using the server test scenario framework.
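A minimal sketch of the two failing invocations described above, assuming the model directory path from the report (the exact convert.py arguments accepted by your llama.cpp checkout may differ by version):

```shell
# First attempt: default (sentencepiece) vocab handling.
# Reported to fail with:
#   RuntimeError: Internal: could not parse ModelProto from
#   H:\Downloads\llama3-main\Meta-Llama-3-8B\tokenizer.model
python convert.py H:\Downloads\llama3-main\Meta-Llama-3-8B

# Second attempt: forcing BPE vocab, as described in the report.
# Reported to fail with:
#   FileNotFoundError: Could not find a tokenizer matching any of ['bpe']
python convert.py H:\Downloads\llama3-main\Meta-Llama-3-8B --vocab-type bpe
```

The first error suggests convert.py is trying to read tokenizer.model as a sentencepiece ModelProto, while Llama 3 ships a tiktoken-style BPE tokenizer, so neither code path finds a tokenizer it recognizes.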
