
Feature Request: Instructions how to correctly use/convert original llama3.1 instruct .pth model #8808

Closed
@scalvin1

Description

Prerequisites

  • I am running the latest code. Mention the version if possible as well.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

Can someone please add (or point me to) instructions on how to correctly set everything up to get from the original Meta .pth downloaded weights to .gguf (and then onwards to Q8_0)?

I am running a local 8B instance with llama-server and CUDA.

Keep up the great work!

Motivation

With all the half-broken llama3.1 gguf files uploaded to hf by brownie point kids, it would make sense to drop a few words on how to convert and quantize the original/official Meta llama 3.1 weights for use with a local llama.cpp. (Somehow everyone seems to get the weights from hf, but why not source these freely available weights from the actual source?)

My attempts still leave me hazy on whether the rope scaling is handled correctly, even though I use the latest transformers (for the .pth to .safetensors conversion) and then the latest git version of llama.cpp for convert_hf_to_gguf.py.
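
For reference, the sketch below is roughly the pipeline I have been running, written as a small Python wrapper around the two conversion scripts and the quantize binary. Treat it as a sketch only: the paths and the flag names of the transformers conversion script (`--model_size`, `--llama_version`) are what current versions appear to use and may differ between releases, so please check each script's `--help`.

```python
# Sketch of the .pth -> safetensors -> GGUF -> Q8_0 pipeline.
# Assumptions: transformers and llama.cpp are cloned/built locally, and the flag
# names below match the versions in use -- verify with --help, they do change.
import subprocess

PTH_DIR  = "Meta-Llama-3.1-8B-Instruct"          # consolidated.00.pth, params.json, tokenizer.model
HF_DIR   = "Meta-Llama-3.1-8B-Instruct-hf"       # safetensors output in HF layout
GGUF_F16 = "Meta-Llama-3.1-8B-Instruct-f16.gguf"
GGUF_Q8  = "Meta-Llama-3.1-8B-Instruct-Q8_0.gguf"

# 1) Original Meta .pth weights -> Hugging Face safetensors layout (latest transformers).
subprocess.run([
    "python", "src/transformers/models/llama/convert_llama_weights_to_hf.py",
    "--input_dir", PTH_DIR,
    "--model_size", "8B",
    "--llama_version", "3.1",
    "--output_dir", HF_DIR,
], check=True)

# 2) HF safetensors -> GGUF (f16) with llama.cpp's converter.
subprocess.run([
    "python", "convert_hf_to_gguf.py", HF_DIR,
    "--outfile", GGUF_F16,
    "--outtype", "f16",
], check=True)

# 3) GGUF f16 -> Q8_0 with the llama-quantize binary built from llama.cpp.
subprocess.run(["./llama-quantize", GGUF_F16, GGUF_Q8, "Q8_0"], check=True)
```

The open question for me is whether this picks up the llama 3.1 rope scaling (the 128k context) correctly, or whether extra flags are needed somewhere along the way.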

The closest description I could find (edit: note that this is valid for llama 3, not llama 3.1 with the larger 128k-token context) is here: https://voorloopnul.com/blog/quantize-and-run-the-original-llama3-8b-with-llama-cpp/
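
To at least sanity-check the result, I have been dumping the rope-related metadata from the produced GGUF. A minimal sketch, assuming the gguf Python package from llama.cpp's gguf-py is installed (pip install gguf); the field-access details may differ between package versions:

```python
# Inspect rope/context metadata in a converted GGUF to see what the converter wrote.
import numpy as np
from gguf import GGUFReader

reader = GGUFReader("Meta-Llama-3.1-8B-Instruct-f16.gguf")

for name, field in reader.fields.items():
    if "rope" in name or "context_length" in name:
        raw = field.parts[field.data[-1]]
        # String-valued keys come back as uint8 arrays; decode those for readability.
        value = bytes(raw).decode("utf-8") if raw.dtype == np.uint8 else raw.tolist()
        print(f"{name}: {value}")
```

I would expect llama.context_length to read 131072 for llama 3.1, but I am not sure what the rope scaling keys are supposed to look like, which is exactly what I would like the documentation to confirm.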

Possible Implementation

Please add a couple of lines on llama 3.1 "from Meta .pth to GGUF" to a README, or as an answer to this issue.
