
I am trying to use my Llama 2 model (exposed as an API via Ollama). I want to chat with the Llama agent and query my Postgres DB (i.e., generate text-to-SQL). I was able to find LangChain code that uses OpenAI to do this, but I am unable to find anything out there that fits my situation.

Any pointers will be of great help.

Code with OpenAI:

# Create connection to postgres
import psycopg2  # psycopg2 also serves as the SQLAlchemy driver below

from langchain.agents import AgentType, create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

database = 'postgres'
username = 'postgres'
password = 'password'
server = 'localhost'
port = '5432'

# Establish the connection (note: SQLDatabase.from_uri below opens its own
# connection, so this step only verifies that Postgres is reachable)
conn = psycopg2.connect(
    dbname=database,
    user=username,
    password=password,
    host=server,
    port=port
)

db = SQLDatabase.from_uri(
    "postgresql://postgres:password@localhost:5432/postgres")
toolkit = SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0))

agent_executor = create_sql_agent(
    llm=OpenAI(temperature=0),
    toolkit=toolkit,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

agent_executor.run("Describe the transaction table")

I want to make the above code work for my llama2 model exposed via an API at localhost:11434/api/generate
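Before wiring it into LangChain, it can help to confirm the Ollama endpoint actually responds. A minimal sketch using only the standard library; the model name llama2 and the default /api/generate endpoint are assumptions based on the URL in the question:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="llama2"):
    # Ollama's /api/generate expects at least "model" and "prompt";
    # "stream": False asks for one JSON object instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="llama2"):
    # Send a prompt to the locally running Ollama server and return the text.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama serve` with llama2 pulled):
# print(generate("Say hello in one word."))
```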

1 Answer

Load your LLM as described in the Ollama integration docs: https://python.langchain.com/docs/integrations/llms/ollama

Then use it in place of OpenAI. You'll most likely have to adjust the prompts to fit Llama 2's expected format.


1 Comment

Could you please provide more detail? I am using Llama 2 and am able to create the object, but when I send a question it complains about the input text sequence: TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]. Any help is appreciated.
