
I installed the Llama 3.1 8B model through Meta's GitHub page, but I can't get their example code to work. I'm running the following code in the same directory as the Meta-Llama-3.1-8B folder:

import transformers
import torch

pipeline = transformers.pipeline(
  "text-generation",
  model="Meta-Llama-3.1-8B",
  model_kwargs={"torch_dtype": torch.bfloat16},
  device="cuda"
)

The error is

OSError: Meta-Llama-3.1-8B does not appear to have a file named config.json

Where can I get config.json?

I've installed the latest transformers module, and I understand that I can access the remote model on HuggingFace. But I'd rather use my local model. Is this possible?

1 Answer


The issue isn't on your end. Meta's GitHub download.sh script ships the original checkpoint format (consolidated.00.pth, params.json, tokenizer.model), while transformers expects the Hugging Face format (config.json, tokenizer.json, and safetensors weights), and Meta doesn't clearly distinguish between the two distributions.

To resolve this, you can download the Hugging Face-format model files with the Hugging Face CLI (the leading ! is notebook syntax; drop it in a regular shell):

!huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --local-dir meta-llama/Meta-Llama-3-8B-Instruct

For the model in your question, use the matching repository, meta-llama/Meta-Llama-3.1-8B. These repositories are gated, so accept the license on Hugging Face first and authenticate with huggingface-cli login. The download gives you config.json, tokenizer.json, and the safetensors weight files, so the local directory can be loaded directly by transformers.
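Once the download finishes, you can point the pipeline at the local folder. A minimal sketch, assuming the files were saved to a directory named Meta-Llama-3.1-8B next to the script (the prompt and generation settings are just illustrative):

import transformers
import torch

# Load from the local directory that now contains config.json,
# tokenizer.json and the safetensors weights.
pipeline = transformers.pipeline(
    "text-generation",
    model="./Meta-Llama-3.1-8B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

print(pipeline("The capital of France is", max_new_tokens=20)[0]["generated_text"])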

Alternatively, you can download individual files manually. For instance, someone shared a link to the configuration file on Hugging Face:

llama-3-8b/config.json

Keep in mind that config.json alone isn't enough; transformers also needs the tokenizer and weight files in the same directory.
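If you only want specific files rather than the whole repository, the huggingface_hub Python API can fetch them one at a time. A minimal sketch, assuming the gated meta-llama/Meta-Llama-3.1-8B repository and a prior huggingface-cli login:

from huggingface_hub import hf_hub_download

# Download just the config file from the gated repository into a local folder;
# requires having accepted the license and authenticated beforehand.
config_path = hf_hub_download(
    repo_id="meta-llama/Meta-Llama-3.1-8B",
    filename="config.json",
    local_dir="Meta-Llama-3.1-8B",
)
print(config_path)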
