In this tutorial, you will build a ReAct (Reasoning and Action) AI agent with the open-source LangGraph framework by using an IBM Granite model through the IBM® watsonx.ai® API in Python. The use case is to manage existing IT support tickets and to create new ones.
An artificial intelligence (AI) agent refers to a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its agent workflow and using available tools. Generative AI agents use the advanced natural language processing (NLP) techniques of large language models (LLMs) to comprehend and respond to user inputs step-by-step and determine when to call on external tools. A core component of AI agents is reasoning. Upon acquiring new information through tool calling, human intervention or other agents, the reasoning paradigm guides the agent’s next steps.
With each action and each tool response, the ReAct (Reasoning and Action) paradigm instructs agents to "think" and plan their next steps. This deliberate, step-by-step reasoning gives us insight into how the agent uses updated context to formulate conclusions. Because this process of reflection is continuous, it is often referred to as a think-act-observe loop and is a form of chain-of-thought prompting.
This tutorial will use the LangGraph framework, an open source AI agent framework designed to build, deploy and manage complex generative AI agent workflows. The prebuilt ReAct agent that LangGraph provides implements the think-act-observe loop for us, so we only need to supply the language model and the tools it can call.
Within LangGraph, the “state” feature serves as a memory bank that records and tracks all the valuable information processed by each iteration of the AI system. These stateful graphs allow agents to recall past information and valuable context. The cyclic structure of the ReAct graph is leveraged when the outcome of one step depends on previous steps in the loop. The nodes, or "actors," in the graph encode agent logic and are connected by edges. Conditional edges are essentially Python functions that determine the next node to execute depending on the current state.
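To make these ideas concrete, here is a minimal, self-contained sketch of a stateful LangGraph graph with a conditional edge. It is illustrative only: the node names, state fields and routing logic are invented for this example and are not taken from the tutorial's repository, which relies on the prebuilt ReAct agent described above.

```python
# A minimal sketch of a stateful LangGraph graph (illustrative only).
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class AgentState(TypedDict):
    # The "state" acts as the graph's memory and is passed between nodes.
    question: str
    answer: str
    needs_tool: bool


def reason(state: AgentState) -> AgentState:
    # Node ("actor") that encodes a trivial reasoning step.
    return {**state, "needs_tool": "ticket" in state["question"].lower()}


def act(state: AgentState) -> AgentState:
    # Node that would call a tool; hard-coded here for illustration.
    return {**state, "answer": "Tool call result goes here."}


def respond(state: AgentState) -> AgentState:
    return {**state, "answer": state.get("answer") or "No tool needed."}


def route(state: AgentState) -> str:
    # A conditional edge: a plain Python function that inspects the current
    # state and returns the name of the next node to execute.
    return "act" if state["needs_tool"] else "respond"


builder = StateGraph(AgentState)
builder.add_node("reason", reason)
builder.add_node("act", act)
builder.add_node("respond", respond)
builder.add_edge(START, "reason")
builder.add_conditional_edges("reason", route, {"act": "act", "respond": "respond"})
builder.add_edge("act", "respond")
builder.add_edge("respond", END)
graph = builder.compile()

print(graph.invoke({"question": "Create a ticket for my VPN issue", "answer": "", "needs_tool": False}))
```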
You need an IBM Cloud® account to create a watsonx.ai™ project.
To use the watsonx application programming interface (API), you will need to complete the following steps. Note that you can also access this tutorial on GitHub.
Log in to watsonx.ai using your IBM Cloud account.
Create a watsonx.ai Runtime service instance (select your appropriate region and choose the Lite plan, which is a free instance).
Generate an application programming interface (API) key.
To easily get started with deploying agents on watsonx.ai, clone this GitHub repository and access the IT support ReAct agent project. You can run the following command in your terminal to do so.
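For example (a sketch; confirm the repository URL and the agent project's folder path in the tutorial's README before running it):

```bash
git clone https://github.com/IBM/watsonx-developer-hub.git
cd watsonx-developer-hub
```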
Next, install Poetry if you do not already have it. Poetry is a tool for managing Python dependencies and packaging.
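One way to do this is with Poetry's official installer, shown below; installing through pipx also works if you prefer it.

```bash
curl -sSL https://install.python-poetry.org | python3 -
```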
Then, activate your virtual environment.
Rather than using the pip package manager to install each dependency individually, Poetry installs the project's dependencies listed in its pyproject.toml file.
Adding the working directory to PYTHONPATH is necessary for the next steps. In your terminal, execute:
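For example, from the agent project's root directory (a sketch; adjust the path if your working directory differs):

```bash
export PYTHONPATH=$(pwd):$PYTHONPATH
```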
To set up your environment, follow along with the instructions in the README.md file on GitHub. This setup requires several commands to be run in your IDE or on the command line.
In the
Our agent requires a data source to provide up-to-date information and add new data. We will store our data file in IBM Cloud® Object Storage.
To provide the ReAct agent with IT ticket management functionality, we must connect to our data source in IBM Cloud Object Storage. For this step, we can use the
In
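As a point of reference, the snippet below sketches how a connection to IBM Cloud Object Storage typically looks with the ibm-cos-sdk Python library (imported as ibm_boto3). The environment variable names, endpoint, bucket and object key are placeholders, not values from the tutorial's project.

```python
# A minimal sketch of connecting to IBM Cloud Object Storage with ibm_boto3.
import os

import ibm_boto3
from ibm_botocore.client import Config

cos_client = ibm_boto3.client(
    "s3",
    ibm_api_key_id=os.environ["COS_API_KEY"],
    ibm_service_instance_id=os.environ["COS_INSTANCE_CRN"],
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
)

# Read the CSV file that stores the support tickets (placeholder names).
response = cos_client.get_object(Bucket="my-tickets-bucket", Key="tickets.csv")
print(response["Body"].read().decode("utf-8"))
```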
Our agent will be able to both read and write data in our file. First, let's create the tool to read data using the LangChain
We have added this
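A hedged sketch of what such a read tool can look like with LangChain's @tool decorator is shown below; the tool name, docstring and bucket layout are assumptions, and cos_client refers to the client created in the previous sketch.

```python
from langchain_core.tools import tool


@tool
def get_support_tickets() -> str:
    """Return all existing IT support tickets as CSV text."""
    # cos_client is the IBM Cloud Object Storage client from the earlier sketch;
    # the bucket and object key are placeholder names.
    response = cos_client.get_object(Bucket="my-tickets-bucket", Key="tickets.csv")
    return response["Body"].read().decode("utf-8")
```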
Next, we have added the
This tool takes the description of the issue from the user and the urgency of the issue as its arguments. A new row containing this information is added to our file in COS, which creates the new ticket. If the ticket cannot be created, an exception is thrown.
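The sketch below illustrates that behavior: a tool that takes a description and an urgency, appends a row to the CSV file in Cloud Object Storage and raises an exception if the ticket cannot be created. The names and CSV layout are assumptions for illustration, and cos_client again comes from the earlier sketch.

```python
import csv
import io
from datetime import datetime, timezone

from langchain_core.tools import tool


@tool
def create_support_ticket(description: str, urgency: str) -> str:
    """Create a new IT support ticket with the given description and urgency."""
    try:
        # Download the current ticket file (cos_client from the earlier COS sketch).
        obj = cos_client.get_object(Bucket="my-tickets-bucket", Key="tickets.csv")
        rows = list(csv.reader(io.StringIO(obj["Body"].read().decode("utf-8"))))

        # Append the new ticket as a row and upload the updated file back to COS.
        ticket_id = len(rows)  # assumes row 0 is a header, so IDs start at 1
        rows.append([str(ticket_id), description, urgency, datetime.now(timezone.utc).isoformat()])
        buffer = io.StringIO()
        csv.writer(buffer).writerows(rows)
        cos_client.put_object(
            Bucket="my-tickets-bucket",
            Key="tickets.csv",
            Body=buffer.getvalue().encode("utf-8"),
        )
        return f"Created ticket {ticket_id} with urgency '{urgency}'."
    except Exception as exc:
        raise RuntimeError(f"Could not create ticket: {exc}") from exc
```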
One last tool we must add to our
To grant our agent access to these tools, we have added them to the
These tools are imported in the
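Putting the pieces together, the following sketch shows one way to bind such tools to a Granite model on watsonx.ai using the langchain-ibm ChatWatsonx chat model and LangGraph's prebuilt ReAct agent. The environment variable names and model ID are assumptions rather than values from the repository, and the tool functions are the ones from the earlier sketches.

```python
import os

from langchain_ibm import ChatWatsonx
from langgraph.prebuilt import create_react_agent

# A Granite chat model served through watsonx.ai (credentials via env vars).
llm = ChatWatsonx(
    model_id="ibm/granite-3-8b-instruct",
    url=os.environ["WATSONX_URL"],
    apikey=os.environ["WATSONX_APIKEY"],
    project_id=os.environ["WATSONX_PROJECT_ID"],
)

# Prebuilt ReAct agent with the ticket tools from the previous sketches.
agent = create_react_agent(llm, tools=[get_support_tickets, create_support_ticket])

result = agent.invoke({"messages": [("user", "Show me all open tickets.")]})
print(result["messages"][-1].content)
```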
Before deploying your agent, remember to complete all the necessary information in the
There are three ways to chat with your agent.
The first option is to run the script for local AI service execution.
The final option is to access the agent in the Deployments space on watsonx.ai. To do this, select "Deployments" on the left-side menu. Then, select your deployment space, select the "Assets" tab, select your
To run the deployment script, initialize the
The
Next, run the deployment script.
Then, run the script for querying the deployment.
For the purposes of this tutorial, let's choose option 2 and query our deployed agent on watsonx.ai in the form of an agentic chatbot. Let's provide the agent with some prompts that require the use of its tools. Upon following the steps listed in Option 3, you should see a chat interface on watsonx.ai where you can type your prompt.
First, let's test whether the
As you can see in the agent's final answer, the AI system successfully used problem-solving to create a new ticket with the
Great! The agent successfully added the ticket to the file.
In this tutorial, you created an agent with the ReAct framework that uses decision making to solve complex tasks such as retrieving and creating support tickets. There are several AI models out there that allow for agentic tool calling such as Google's Gemini, IBM's Granite and OpenAI's GPT-4. In our project, we used an IBM Granite AI model through the watsonx.ai API. The model behaved as expected both locally and when deployed on watsonx.ai. As a next step, check out the LlamaIndex and crewAI multiagent templates available in the watsonx-developer-hub GitHub repository for building AI agents.