Friday, March 14, 2025

IBM Agent Lab: Low-Code AI Agent Deployment on Watsonx.ai

Use Agent Lab to begin creating AI agents on Watsonx.ai.

IBM presents Agent Lab, now in beta: a low-code tool for creating and deploying AI agents on Watsonx.ai. Using its user-friendly visual interface, developers can create and connect tools, customise agent behaviour, and debug interactions without writing code.

AI agents: what are they?

An artificial intelligence (AI) agent is a software system that plans its workflow and uses the resources at its disposal to perform tasks on behalf of a user or another system. From software design and IT automation to code-generation tools and conversational assistants, these agents can be used in a variety of applications to accomplish complicated tasks in a range of organisational scenarios. Powered by large language models (LLMs), they employ sophisticated natural language processing techniques to understand and respond to user inputs step by step, and to decide when to call external tools.
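The reason-then-act loop described above can be sketched in a few lines of plain Python. Everything here is a toy stand-in, not Watsonx.ai code: the `llm` function fakes a model's decisions and the `lookup` tool is hard-coded, but the loop mirrors how an agent alternates between reasoning steps and tool calls.

```python
# Toy sketch of an agent loop: a (stubbed) LLM reasons step by step
# and decides when to call an external tool before answering.

def llm(prompt: str) -> str:
    """Stand-in for a real model call: returns an action or final answer."""
    if "capital of France" in prompt and "Observation" not in prompt:
        return "ACTION: lookup(capital of France)"
    return "FINAL: Paris"

TOOLS = {"lookup": lambda q: "Paris" if "France" in q else "unknown"}

def run_agent(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(prompt)
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        # Parse "ACTION: tool(arg)" and execute the chosen tool.
        name, arg = step.removeprefix("ACTION: ").rstrip(")").split("(", 1)
        observation = TOOLS[name](arg)
        prompt += f"\nObservation: {observation}"
    return "no answer"

print(run_agent("What is the capital of France?"))  # → Paris
```

A real agent replaces the stubbed model with an LLM call and the tool table with document search, code execution, or custom integrations.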

The following are Agent Lab‘s primary capabilities:

  • Easy and quick setup: Customise your agent’s behaviour through an easy-to-use visual interface. Without writing code, select from open-source frameworks, specify goals, then test and improve your agent.
  • Connect pre-made and custom tools: Use built-in tools for routine activities such as document search and code execution, or develop your own tools to connect AI agents to other systems.
  • One-click deployment: Deploy agents as production-ready API endpoints that can be used in Watsonx Orchestrate or other third-party apps.

Create and implement your very own AI agent

Register for a free trial on Watsonx.ai, then follow these steps to create and launch your first agent:

  1. Open your Watsonx.ai environment and navigate to Agent Lab.
  2. Select a framework based on the needs of your use case.

Only the LangGraph framework is included in the beta release.

  3. Choose an architecture to specify agent behaviour.

Only the ReAct architecture is included in the beta release.

  4. Give the agent instructions in plain, everyday language.
  5. Add the necessary tools from the pre-built library, or create a custom tool.
  6. Examine how the agent behaves in the development environment.
  7. Put the agent into production.

You must create an API key and deployment space before you can deploy an agent for the first time.

  8. Go to the endpoint that your deployed agent has generated.

Watsonx.ai agents can be incorporated into Watsonx Orchestrate or other third-party apps.

Using your IDE, create and deploy agents to Watsonx.ai


Using new agent templates, create and launch agents on Watsonx.ai from your IDE

Creating AI agents is now simpler! With the new agent templates, you can create and launch agents on Watsonx.ai straight from your command line or integrated development environment (IDE). The first available template guides developers through building a ReAct research agent powered by LangGraph, making it easier to get started. The template offers a strong foundation to speed development and can serve as a basis for research agents, automation assistants, or customer-care bots.

In this guide, we’ll walk through the entire process of installing and deploying the new agent template. By the end, you will have a cloud-based research agent that you can access via its endpoint or the Watsonx.ai dashboard.

Create and implement your very own AI agent

Register for a free Watsonx.ai trial and then follow the steps below to create and launch your first agent using the agent templates.

Two tools are available to the agent in this template: one retrieves the contents of articles published on ArXiv, and the other performs a web search through DuckDuckGo. By default the agent uses the Mistral Large model, but you can connect it to any model available in your Watsonx.ai environment that supports tool calling.

For instance, you can use this agent to look up publications on ArXiv about DeepSeek. Additionally, request a summary of any papers it discovered.
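The division of labour between the two tools can be illustrated with a toy sketch. This is not the template's code: both tool bodies are stubs, and a keyword check stands in for the tool-calling model's decision, but it shows how one query gets routed to arXiv retrieval and another to web search.

```python
# Toy illustration of the template's two-tool setup. The real template
# calls DuckDuckGo and ArXiv; here both tools are stubs, and a keyword
# check stands in for the model's tool-calling decision.

def web_search(query: str) -> str:
    return f"[web results for: {query}]"

def fetch_arxiv(query: str) -> str:
    return f"[arXiv articles about: {query}]"

def choose_tool(query: str):
    q = query.lower()
    return fetch_arxiv if "arxiv" in q or "paper" in q else web_search

query = "Find arXiv papers about DeepSeek"
print(choose_tool(query)(query))
```

In the real agent, the model sees each tool's name and description and decides for itself which one (if any) a query requires.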

In order to deploy the agent template to Watsonx.ai and access it from the cloud, let’s first set it up locally.

Python must be installed on your computer, ideally with the help of a Python environment manager such as uv or pipx.

Use your IDE to set up the template

  1. Clone the repository containing the agent templates using your IDE or command line:
git clone --no-tags --depth 1 --single-branch --filter=tree:0 --sparse https://github.com/IBM/watsonx-developer-hub.git

By cloning the repository with the “--sparse” flag, we restrict the tracked files to a subset of just the necessary files.

  2. Once the files have been copied to your computer, install the necessary dependencies.

Using “pipx”:

pipx install --python 3.11 poetry

Using “uv”:

uv venv --python 3.11

The templates use Poetry for dependency management, but you can use another Python-based dependency manager if you prefer. Make sure you use a Python version between 3.10 and 3.13.

  3. Configure the agent’s deployment space and the environment variables required to connect to Watsonx.ai models.

Open the “config.toml” file and add the following settings:

  • “watsonx_apikey”: your IBM Cloud API key
  • The API URL for your region, for example https://us-south.ml.cloud.ibm.com
  • “space_id”: the ID of a deployment space with an active Watsonx.ai runtime

Go to the Developer Access page in the Watsonx.ai dashboard to find your values.

4. You can now run the agent locally from the template:

poetry run python examples/execute_ai_service_locally.py

This will launch a chat program in your terminal. You can ask the model your own question or select one of the pre-formulated ones. If you want the agent to summarise a research paper, keep in mind that the paper must be available on arXiv.

You can alter the agent by modifying the “agent.py” or “tools.py” files; for instance, update the model parameters or give the agent new tools. You can write your own Python function for the agent to run, or use any of the tools supported by the LangGraph framework or community.
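As a concrete example of the kind of plain Python function you might add to “tools.py”, here is a hypothetical tool that pulls arXiv identifiers out of free text (the IDs in the example are made up). Registering it with the agent is framework-specific, so only the function itself is shown.

```python
import re

def extract_arxiv_ids(text: str) -> list[str]:
    """Find arXiv identifiers (e.g. 2401.12345) mentioned in a text.

    A hypothetical custom tool: the docstring doubles as the description
    a tool-calling model would use to decide when to invoke it.
    """
    return re.findall(r"\b\d{4}\.\d{4,5}\b", text)

print(extract_arxiv_ids("See 2401.02954 and 2405.04434 for details."))
# → ['2401.02954', '2405.04434']
```

A clear name, typed signature, and descriptive docstring are what let the model choose the tool correctly at run time.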

Next, the agent must be deployed to the cloud so that its API endpoint can be accessed remotely.

Deploy the agent to the cloud

Now that the agent has been tested locally, it can be deployed to the cloud. This requires setting up a deployment space in Watsonx.ai and connecting it to an active Watsonx.ai runtime. As stated in step 3 of the preceding section, the ID of this space must be added to the “config.toml” file.

Use this command to deploy the agent:

poetry run python scripts/deploy.py

The agent may take a few moments to deploy; once it does, your terminal will display the “deployment_id”.

You need this ID to access the agent via its API endpoint. You can also find it in the Watsonx.ai dashboard under “Deployments” by opening the space you deployed the agent to.
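Once you have the deployment_id, a remote client addresses the agent by building a URL from it. The sketch below only constructs the request; the URL pattern and payload shape are assumptions for illustration, so copy the exact endpoint and request format shown on your deployment's page in the dashboard.

```python
import json

def build_request(base_url: str, deployment_id: str, question: str):
    """Assemble a hypothetical request to a deployed agent's endpoint.

    The "/ml/v4/deployments/{id}/ai_service" path and the messages
    payload are illustrative assumptions, not confirmed API details.
    """
    url = f"{base_url}/ml/v4/deployments/{deployment_id}/ai_service"
    payload = {"messages": [{"role": "user", "content": question}]}
    return url, json.dumps(payload)

url, body = build_request(
    "https://us-south.ml.cloud.ibm.com",
    "YOUR_DEPLOYMENT_ID",   # placeholder: the ID printed by deploy.py
    "Show me arXiv papers about DeepSeek",
)
print(url)
```

A real client would POST this body to the URL with a bearer token obtained from your IBM Cloud API key.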

Take a look at the agent from the dashboard

Once an agent has been deployed on Watsonx.ai, you can use the built-in chat interface to communicate with it directly, or use the API endpoint to connect to it remotely. To use the agent through the chat interface, follow these steps:

  1. In the “Deployments” section of the Watsonx.ai dashboard, locate the space where your agent was just deployed.
  2. Click on the space (in this case, “agent test”) to open the list of deployments. A new deployment should appear in this list after running the script from the previous step.
    To interact with the deployment through the “Preview” page, make sure it has the tag “wx-agent”.
  3. Selecting the most recent deployment displays the agent’s private and public API endpoints, along with a “Preview” tab, which you should open to test the agent in the chat interface.
  4. In the chat interface you can ask the same questions as when running the agent locally, for instance: “show me a list of arxiv papers about deepseek.” This should return a list of articles about DeepSeek published on ArXiv. The tools the agent used to obtain this data are also visible to you.

Naturally, you may ask follow-up questions in the chat to find out additional details about a particular document or to ask it to locate other ones.

  5. You can also access the generated endpoint of your deployed agent directly from a remote source, and integrate the agent with Watsonx Orchestrate or third-party apps.
Drakshi
Drakshi has been writing articles on Artificial Intelligence for Govindhtech since June 2023. She holds a postgraduate degree in business administration and is an enthusiast of Artificial Intelligence.