I downloaded the Llama 3.1 8B model from Meta's GitHub page, but I can't get their example code to work. I'm running the following code in the same directory as the Meta-Llama-3.1-8B folder:
import transformers
import torch

pipeline = transformers.pipeline(
    "text-generation",
    model="Meta-Llama-3.1-8B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)
The error is:
OSError: Meta-Llama-3.1-8B does not appear to have a file named config.json
Where can I get config.json?
I've installed the latest version of the transformers package, and I understand that I could load the model remotely from Hugging Face instead. But I'd rather use my local copy of the model. Is this possible?
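For reference, this is roughly what I mean by the remote option (I'm assuming meta-llama/Meta-Llama-3.1-8B is the correct Hub repo id and that it requires an approved access token; I haven't verified either):

import transformers
import torch

# Same pipeline call, but pointing at the Hub repo id instead of a local folder.
# Assumption: "meta-llama/Meta-Llama-3.1-8B" is the gated Hub repo, so this
# would also need a Hugging Face access token approved for the model.
pipeline = transformers.pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)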