66 questions
0 votes · 2 answers · 116 views
LangChain HuggingFace ChatHuggingFace raises StopIteration with any model
I’m trying to use LangChain’s Hugging Face integration to chat with the model TinyLlama/TinyLlama-1.1B-Chat-v1.0 for the very first time, but I’m getting a StopIteration error when calling .invoke().
...
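For context, a minimal sketch of the local-pipeline route for ChatHuggingFace, assuming the langchain-huggingface and transformers packages are installed (not the asker's exact code):

# Sketch: wrap a local transformers pipeline so ChatHuggingFace runs the model locally.
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 128},
)
chat = ChatHuggingFace(llm=llm)
print(chat.invoke("Hello, who are you?").content)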
1 vote · 0 answers · 263 views
Running huggingface models offline using huggingface snapshot download causes errors
I have been trying to run some models from Hugging Face locally.
The script is hosted on Google Cloud Run.
Since running the instance multiple times triggers rate limiting, I have downloaded the ...
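For context, a minimal offline-loading sketch, assuming the snapshot was already downloaded into a directory baked into the image (the path is a placeholder):

# Sketch: force offline mode and load only from the pre-downloaded snapshot.
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # must be set before the HF libraries are imported

from transformers import AutoModelForCausalLM, AutoTokenizer

local_dir = "/app/models/my-model"  # hypothetical path returned earlier by snapshot_download
tokenizer = AutoTokenizer.from_pretrained(local_dir, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(local_dir, local_files_only=True)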
1 vote · 1 answer · 33 views
ModuleNotFoundError: No module named 'bert_opinion' for files downloaded with hf_hub_download
I'm trying to import modules from bert_opinion.py and post.py after downloading them from the Hugging Face Hub using hf_hub_download, as described for my chosen model on the Hugging Face website. Here'...
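For context, a hedged sketch of one common pattern: hf_hub_download returns a cache path, and that directory has to be added to sys.path before the modules can be imported (the repo id below is a placeholder):

# Sketch: fetch the .py files, then make their cache directory importable.
import os
import sys
from huggingface_hub import hf_hub_download

repo = "some-user/some-model"  # hypothetical; use the repo named on the model card
module_path = hf_hub_download(repo_id=repo, filename="bert_opinion.py")
hf_hub_download(repo_id=repo, filename="post.py")  # lands in the same snapshot directory

sys.path.insert(0, os.path.dirname(module_path))
import bert_opinion
import post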
1 vote · 1 answer · 91 views
HfHubHTTPError when calling DoclingLoader with a pdf file
I have installed docling successfully, but when doing the following:
from langchain_docling import DoclingLoader
source_path = "shared\abc.pdf"
loader = DoclingLoader(file_path=source_path)
...
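As an aside, "shared\abc.pdf" contains the escape sequence \a; a raw string avoids that, independently of the Hub error. A minimal sketch, assuming langchain-docling is installed:

from langchain_docling import DoclingLoader

source_path = r"shared\abc.pdf"  # raw string so \a is not interpreted as an escape
loader = DoclingLoader(file_path=source_path)
docs = loader.load()  # the first run may still need Hub access to fetch Docling's models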
0 votes · 1 answer · 250 views
How to download Hugging Face model files while filtering out unwanted files
A Hugging Face model repo, like Qwen32B-GGUF, contains several quantization-related files which are large. Typically only one quantization file is needed and the rest go unused.
By huggingface-cli, it ...
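For context, a hedged sketch using snapshot_download's allow_patterns to skip the unwanted quantization files (repo id and patterns are placeholders):

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Qwen/Qwen-32B-GGUF",                         # hypothetical repo id
    allow_patterns=["*Q4_K_M*.gguf", "*.json", "*.md"],   # keep only the files you use
    local_dir="models/qwen32b",
)

huggingface-cli download supports the same idea through its --include and --exclude glob options.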
1 vote · 2 answers · 688 views
Facing an issue using a model hosted on a Hugging Face server and talking to it using an API_KEY
I am trying to create a simple LangChain text-generation app that uses an API to communicate with models on Hugging Face servers.
I created a “.env” file and stored my KEY in the variable: “...
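For context, a minimal sketch that reads the token from .env and passes it to the endpoint explicitly (variable and model names are placeholders):

import os
from dotenv import load_dotenv
from langchain_huggingface import HuggingFaceEndpoint

load_dotenv()  # exposes HUGGINGFACEHUB_API_TOKEN from the .env file

llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",  # hypothetical model choice
    huggingfacehub_api_token=os.environ["HUGGINGFACEHUB_API_TOKEN"],
    max_new_tokens=128,
)
print(llm.invoke("Translate 'hello' to French."))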
0 votes · 0 answers · 316 views
Unable to call the Hugging Face API from my local machine
I want to use a model via Hugging Face, but even with a valid token it's not working. Can someone please help?
Test Code
from huggingface_hub import InferenceClient
token = "...
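For context, a minimal InferenceClient sketch with an explicit model and token (both placeholders), which usually narrows an auth problem down:

from huggingface_hub import InferenceClient

client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta", token="hf_...")  # placeholders
print(client.text_generation("Hello, how are you?", max_new_tokens=50))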
1 vote · 1 answer · 362 views
how to use jinaai/jina-embeddings-v2-base-en with trust_remote_code=False
I am using jinaai/jina-embeddings-v2-base-en model to generate embeddings for vector search. Following is my code to generate embeddings using the jinaai model:
from transformers import AutoModel
...
0 votes · 0 answers · 175 views
Error while loading the MultiModal Models from Huggingface hub
I am trying to use a multimodal model from the Hugging Face hub. I tried the "maya-multimodal/maya" model (the code to load it is below):
from llama_index.multi_modal_llms.huggingface ...
5 votes · 1 answer · 24k views
ImportError: cannot import name 'cached_download' from 'huggingface_hub'
huggingface_hub==0.27.1
diffusers==0.28.0
I am getting this error:
Traceback (most recent call last):
  File "/data/om/Lotus/infer.py", line 11, in <module>
    from diffusers.utils ...
2 votes · 0 answers · 697 views
Cannot download Llama 3.2 3B model using Unsloth and Hugging Face
I want to locally fine-tune using my own dataset and then save the Llama 3.2-3B model locally too. I have an Anaconda setup and I'm on the base environment, where I can see clearly that unsloth and ...
0 votes · 0 answers · 835 views
SSL Certificate Verification Error with Hugging Face Transformers CLI
I'm trying to download the TheBloke/falcon-40b-instruct-GPTQ model using the Hugging Face Transformers CLI in PowerShell on Windows 10, but I consistently encounter an SSL certificate error. It ...
0 votes · 0 answers · 155 views
SSL certification error while calling huggingface inference APIs via Langgraph & langchain
I am using Hugging Face inference APIs for a basic GenAI application using Llama 3.2 & Mistral. While calling the APIs I am getting the below error:
(MaxRetryError("HTTPSConnectionPool(host='...
0 votes · 1 answer · 2k views
`repo_type` argument if needed
I am trying to run a Python script that loads a model from Hugging Face.
In the terminal it gives an error:
huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/...
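For context, this validation error usually means a local path or full URL was passed where a Hub repo id is expected. A hedged sketch of the two accepted forms:

from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="gpt2", filename="config.json")                   # 'repo_name' form
hf_hub_download(repo_id="openai-community/gpt2", filename="config.json")  # 'namespace/repo_name' form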
4 votes · 2 answers · 6k views
Cannot load a gated model from Hugging Face despite having access and logging in
I am training a Llama-3.1-8B-Instruct model for a specific task.
I have requested access to the Hugging Face repository and been granted it, as confirmed on the Hugging Face web dashboard.
I tried calling ...
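For context, a hedged sketch of authenticating in the same environment the script runs in, and passing the token through from_pretrained as well (the token is a placeholder):

from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

login(token="hf_...")  # placeholder; a token from the account that was granted access

model_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, token="hf_...")
model = AutoModelForCausalLM.from_pretrained(model_id, token="hf_...")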
0 votes · 1 answer · 324 views
How to load the tokenizer locally from Unbabel/COMET
I am trying to use COMET in a place where it cannot download its own models. It seems to load the wmt22-comet-da model as far as I can tell, but it seems not to recognize my local xlm-roberta-large ...
1 vote · 2 answers · 358 views
Error when pushing Llama3.1 7B fine-tuned model to Huggingface
I'm having an issue pushing a fine-tuned Llama 3.1 model to Huggingface, getting the error below. All of the literature that I've read suggests that the code below that I'm using to push is correct, ...
-1 votes · 1 answer · 141 views
HuggingFace model downloads are not counted
I'm trying to get HuggingFace to count the downloads of a model, but it refuses to count it.
The model in question is uploaded here:
https://huggingface.co/ibm-granite/granite-geospatial-wxc-...
1 vote · 0 answers · 165 views
How can I enable logging on my Hugging Face model that I'm using an inference endpoint (API) for?
The catch is that I'm not using the Hugging Face 'transformers' library, since I'm not running the models locally but am using an inference endpoint for the model on HF (similar to how we use the OpenAI ...
0 votes · 1 answer · 157 views
getting unexpected keys error while loading weights
import torch
from PIL import Image
import numpy as np
from effdet import get_efficientdet_config, EfficientDet
config = get_efficientdet_config('tf_efficientdet_d0')
model = EfficientDet(config, ...
0 votes · 0 answers · 1k views
Duplicating/cloning huggingface space
I'm interested in setting up an inference API, using huggingface. I'm following an article https://medium.com/@dahmanihichem01/mixtral-and-rest-api-turning-mixtral-8x7b-into-an-api-using-huggingface-...
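For context, a hedged sketch of duplicating a Space programmatically with the Hub API (the ids and token are placeholders; the actual Space comes from the linked article):

from huggingface_hub import HfApi

api = HfApi(token="hf_...")                   # placeholder write token
api.duplicate_space(
    from_id="some-author/mixtral-api-demo",   # hypothetical source Space
    to_id="my-username/mixtral-api-demo",     # hypothetical destination
)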
6 votes · 2 answers · 7k views
Cannot import name 'DatasetFilter' from 'huggingface_hub'
Cannot import name 'DatasetFilter' from 'huggingface_hub'
(/opt/conda/lib/python3.10/site-packages/huggingface_hub/__init__.py)
Issue while using huggingface_hub
1 vote · 1 answer · 155 views
Why do I get an exception when attempting automatic processing by the Hugging Face parquet-converter?
What file structure should I use on the Hugging Face Hub, if I have a /train.zip archive with PNG image files and a /metadata.csv file with annotations for them, so that the parquet-converter bot can ...
0 votes · 2 answers · 4k views
How do I increase max_new_tokens
I'm facing this error while running my code:
ValueError: Input length of input_ids is 1495, but max_length is set to 20. This can lead to unexpected behavior. You should consider increasing ...
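For context, max_length counts prompt plus output (default 20), so a 1495-token prompt overflows it; max_new_tokens budgets only the generated part. A minimal sketch with a placeholder model:

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # any causal LM works here
result = generator("A very long prompt ...", max_new_tokens=256)
print(result[0]["generated_text"])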
2 votes · 1 answer · 3k views
ImportError: cannot import name 'CommitInfo' from 'huggingface_hub'
I am encountering an ImportError when running a Python script that imports CommitInfo from the huggingface_hub package. The error message is as follows:
ImportError: cannot import name 'CommitInfo' ...
1 vote · 0 answers · 390 views
datasets package from pip causing a segfault on MacOS?
I'm using pip version 24.1.2 and Python 3.12.4. The installation seemingly goes fine. However, when importing the package, like in the line
from datasets import load_dataset
I'll see
zsh: ...
0 votes · 1 answer · 398 views
How can I make my Hugging Face fine-tuned model's config.json file reference a specific revision/commit from the original pretrained model?
I uploaded this model: https://huggingface.co/pamessina/CXRFE, which is a fine-tuned version of this model: https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-specialized
Unfortunately, CXR-BERT-...
1 vote · 0 answers · 26 views
Method to wait until user is logged in in order to continue
I'm using
from huggingface_hub import notebook_login
notebook_login()
on Google Colabs in order to login before the rest of the script runs.
However, the script continues before I've logged in, ...
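For context, a hedged sketch of one workaround: poll until a token has actually been stored before continuing (assumes a huggingface_hub version where get_token() is public):

import time
from huggingface_hub import get_token, notebook_login

notebook_login()
while get_token() is None:   # stays None until the login widget has saved a token
    time.sleep(2)
print("Logged in, continuing...")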
0 votes · 1 answer · 78 views
How to recreate the "view" features of common voice v11 in HuggingFace?
The Common Voice v11 on HuggingFace has some amazing View features! They include a dropdown button to select the language, and columns with the dataset features, such as client_id, audio, sentence, ...
1 vote · 0 answers · 350 views
How to upload the best models from lightning_logs checkpoint to huggingface during training after each epoch
I'm training a LayoutLMv3 model for document classification using pytorch-lightning.
While training and testing the model locally I'm facing no issues (able to save the checkpoint and able to load the ...
0 votes · 0 answers · 2k views
How to Load an Already Instantiated Hugging Face Model into vLLM for Inference?
I am working on a project where I need to utilize a model that has already been loaded and instantiated on the GPU using Hugging Face's Transformers library. The goal is to pass this loaded model into ...
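For context, vLLM loads weights itself from a model name or path, so the usual route is to save the already-instantiated model to disk and point vLLM at that folder. A hedged sketch (the path is a placeholder):

from vllm import LLM, SamplingParams

save_dir = "/tmp/my-loaded-model"   # hypothetical; produced by model.save_pretrained(save_dir)
llm = LLM(model=save_dir)
outputs = llm.generate(["Hello!"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)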
0 votes · 1 answer · 282 views
LangChain Hugging Face model for the PromptTemplate class
When invoking the Intel/dynamic_tinybert model in the HuggingFaceEndpoint/HuggingFaceHub class in LangChain for a translation/Q&A task, one persistent error was as below:
Bad request:
Error in ...
0 votes · 0 answers · 133 views
Docker build hangs on downloading model shards using transformers in a Flask application
I am trying to containerize a Flask application that uses the transformers library to load a BLIP model. The application works fine locally, but when I try to build the Docker image, the process hangs ...
0 votes · 1 answer · 2k views
Huggingface requests.exceptions.HTTPError: 404 Client Error: Not Found for url
I am trying to push a dataset to the Hub; the repo name and token are set correctly, but I am getting this error:
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/...
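For context, a hedged sketch that creates the dataset repo first and then pushes (ids, token, and data file are placeholders):

from huggingface_hub import create_repo
from datasets import load_dataset

create_repo("my-username/my-dataset", repo_type="dataset", token="hf_...", exist_ok=True)

ds = load_dataset("csv", data_files="data.csv")   # hypothetical local data
ds.push_to_hub("my-username/my-dataset", token="hf_...")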
2 votes · 1 answer · 816 views
Timm throwing HuggingFace Hub not installed error when HFhub is installed
I'm trying to create the convit_base.fb_in1k model via timm. When I call timm.create_model('convit_base.fb_in1k', pretrained=True), I get a RuntimeError: Hugging Face hub model specified but package ...
0 votes · 1 answer · 249 views
Inconsistent Output with HuggingFace ChatHuggingFace in Google Colab and Langchain Documentation
I am using the HuggingFace ChatHuggingFace module to generate text in a Google Colab notebook. However, I noticed that the output I receive differs from the output shown in the Langchain documentation,...
0 votes · 0 answers · 2k views
huggingface_hub.utils._errors.LocalEntryNotFoundError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/pangchongwen/anaconda3/envs/chatglm3/bin/huggingface-cli", line 8, in
sys....
1 vote · 1 answer · 918 views
How to share downloaded huggingface models among users?
I'd like several users to share downloaded models, such that when any of the users downloads a model, e.g. using
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM....
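For context, a hedged sketch of pointing every user's cache at one shared, group-writable directory via HF_HOME (the path is a placeholder; the variable can also be exported in a shell profile):

import os
os.environ["HF_HOME"] = "/srv/shared/huggingface"  # must be set before the imports below

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)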
2 votes · 1 answer · 4k views
Convert PyTorch Model to Hugging Face model
I have looked at a lot of resources but I still have issues trying to convert a PyTorch model to the Hugging Face model format. I ultimately want to be able to use the Inference API with my custom model.
I ...
4 votes · 1 answer · 7k views
AttributeError: module 'huggingface_hub.constants' has no attribute 'HF_HUB_CACHE'
I got this error after running:
import os
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoConfig,AutoModelForCausalLM
I tried installing different versions of ...
0 votes · 1 answer · 377 views
huggingface model.push_to_hub(peft_model_id) NotADirectoryError: [Errno 20] Not a directory
I am trying to push a model to the Hugging Face hub. My Hugging Face ID is aben118 and the model I want to upload is named test.
peft_model_id = "aben118/test"
model.push_to_hub(peft_model_id)
...
1 vote · 0 answers · 425 views
autotrain.trainers.common:wrapper:92 - No GPU found. A GPU is needed for quantization
❌ ERROR | 2024-02-06 11:07:29 | autotrain.trainers.common:wrapper:91 - train has failed due to an exception: Traceback (most recent call last):
File "/app/src/autotrain/trainers/common.py"...
0 votes · 1 answer · 2k views
ChromaDB and HuggingFace cannot process large files
I am trying to process 1000+ page PDFs using huggingface embeddings and chroma db. Whenever I try to upload a large file, however, I get the error below. I don't know if chromadb can handle that big ...
2 votes · 1 answer · 2k views
Colab cannot find HuggingFace dataset
When I try to run the following code to load a dataset from the Hugging Face hub into Google Colab, I get an error!
! pip install transformers datasets
from datasets import load_dataset
cv_13 = load_dataset(...
0 votes · 1 answer · 603 views
When copying a file from Hugging Face to Google Colab with wget I only get a small size
I have authenticated my account in Google Colab by providing a token
from huggingface_hub import notebook_login
notebook_login()
then I would like to copy this 6.5 GB file:
!wget https://...
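For context, wget on a blob page returns the small HTML page rather than the file; the file itself lives behind a .../resolve/... URL and, for gated repos, needs an auth header. A hedged sketch that sidesteps wget entirely (repo and filename are placeholders):

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="some-org/some-model",                 # hypothetical repo
    filename="model-00001-of-00002.safetensors",   # hypothetical 6.5 GB shard
)
print(path)  # reuses the token stored by notebook_login()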
0 votes · 1 answer · 54 views
Adding ID to Text Output in AWS Batch Transform Job with DistilBERT Model
I have a dataset in JSON format with ‘id’ and ‘text’ columns. Currently, I’m using the following pipeline configuration in AWS:
hub = {
'HF_MODEL_ID':'distilbert-base-uncased',
'HF_TASK':'...
0 votes · 3 answers · 3k views
How to perform inference with a Llava Llama model deployed to SageMaker from Huggingface?
I deployed a Llava Llama Huggingface model (https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview/discussions/3) to a SageMaker Domain + Endpoint by using the deployment card ...
0 votes · 1 answer · 2k views
Flash attention argument throwing error while finetuning falcon_7b_instruct
I am exploring flash attention in my code to fine-tune the falcon-7b-instruct model, as explained on Hugging Face.
I am getting an error:
TypeError: FalconForCausalLM.__init__() got an ...
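For context, recent transformers releases take attn_implementation in from_pretrained rather than a use_flash_attention_2-style flag. A hedged sketch (flash-attn must also be installed):

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)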
6 votes · 2 answers · 7k views
How to Merge Fine-tuned Adapter and Pretrained Model in Hugging Face Transformers and Push to Hub?
I have fine-tuned the Llama-2 model following the llama-recipes repository's tutorial. Currently, I have the pretrained model and fine-tuned adapter stored in two separate directories as follows:
...
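For context, a hedged sketch of the usual PEFT merge-then-push flow (directory and repo names are placeholders for the two folders mentioned above):

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "path/to/adapter_dir")   # hypothetical adapter folder
merged = model.merge_and_unload()                                # folds the LoRA weights into the base model

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
merged.push_to_hub("my-username/llama2-merged")                  # hypothetical repo id
tokenizer.push_to_hub("my-username/llama2-merged")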
0 votes · 0 answers · 2k views
Problem Uploading Large Files to Hugging Face: Slow Speeds and Interruptions
I'm facing issues with uploading large model files to Hugging Face.
I managed a large upload once using the web interface, after several interruptions and restarts, but in order to automate things I ...
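For context, a hedged sketch of scripting the upload through the Hub API instead of the web interface (token, folder, and repo id are placeholders); newer huggingface_hub releases also offer upload_large_folder, which is designed to resume across interruptions:

from huggingface_hub import HfApi

api = HfApi(token="hf_...")                  # placeholder write token
api.upload_folder(
    folder_path="path/to/model_dir",         # hypothetical local folder
    repo_id="my-username/my-large-model",    # hypothetical repo
    repo_type="model",
)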