
I'm trying to use LangChain's load_evaluator() with a local LLM served by Ollama, but I don't understand which model I should use for each evaluator type.

from langchain.evaluation import load_evaluator
from langchain.chat_models import ChatOllama
from langchain.llms import Ollama
from langchain.embeddings import HuggingFaceEmbeddings

# These work:
evaluator = load_evaluator("labeled_score_string", llm=ChatOllama(model="llama2"))
evaluator = load_evaluator("pairwise_string", llm=Ollama(model="llama2"))

# These do not:
evaluator = load_evaluator("pairwise_embedding_distance", llm=HuggingFaceEmbeddings())
evaluator = load_evaluator("pairwise_embedding_distance", llm=Ollama(model="llama2"))
  • Welcome to the SO community! I think your question is a bit broad to be well-received on the site. Can you add more details? What exactly is not working? Are there any error messages? Does it work with other LLMs rather than Ollama? Commented Mar 27, 2024 at 16:38

1 Answer


I think we have the same problem, and I found this:

from langchain_community.embeddings import OllamaEmbeddings  # langchain.embeddings in older versions

embedding_function = OllamaEmbeddings(model="llama3.2:3b")
evaluator = load_evaluator("pairwise_embedding_distance", embeddings=embedding_function)

Instead of passing the "llm" parameter, you should provide the "embeddings" parameter.

Source: https://python.langchain.ac.cn/docs/guides/productionization/evaluation/comparison/pairwise_embedding_distance/
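
For completeness, here is a minimal end-to-end sketch, assuming a local Ollama server with llama3.2:3b already pulled (the model name and example strings are placeholders). Pairwise evaluators are invoked through evaluate_string_pairs, and the score is a distance, so lower means more similar:

from langchain.evaluation import load_evaluator
from langchain_community.embeddings import OllamaEmbeddings

# Assumes Ollama is running locally and the model has been pulled,
# e.g. with `ollama pull llama3.2:3b`
embedding_function = OllamaEmbeddings(model="llama3.2:3b")
evaluator = load_evaluator("pairwise_embedding_distance", embeddings=embedding_function)

# Compare two candidate outputs; the default distance is cosine,
# so a score near 0 means the strings are semantically close
result = evaluator.evaluate_string_pairs(
    prediction="Paris is the capital of France.",
    prediction_b="The capital of France is Paris.",
)
print(result)  # prints something like {'score': 0.0...}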
