Thursday, April 17, 2025

Introduction to Multi-Agent Systems Enhancements in Vertex AI

Introduction to multi-agent systems

Vertex AI provides new ways to build and manage multi-agent systems.

Multi-agent systems, in which multiple AI agents cooperate even when they are built on different frameworks or by different vendors, will soon be a necessity for every business. Agents are intelligent systems that can reason, plan, and remember, and that can act on your behalf. Under your guidance, they can plan multiple steps ahead and complete tasks across multiple systems.

Multi-agent systems depend on models with strong reasoning capabilities, such as Gemini 2.5, as well as on connections to your company data and integration with your workflows. Vertex AI, the comprehensive platform for orchestrating the three pillars of production AI (models, data, and agents), brings these components together. It combines an open approach with deep platform capabilities to ensure agents operate reliably, something that would otherwise require disjointed and brittle solutions.

Google Cloud is announcing several enhancements to Vertex AI today so you can:

Build agents with an open approach and enterprise-grade controls

  • The Agent Development Kit (ADK), built on the same framework that powers agents in Google Agentspace and Google Customer Engagement Suite (CES), is an open-source framework for building agents. Agent Garden offers a collection of extensible sample agents and working examples.
  • Vertex AI Agent Engine is a fully managed runtime that lets you deploy your custom agents to production securely and at global scale, with built-in testing, release, and reliability features.

Connect agents across your enterprise ecosystem

  • The Agent2Agent protocol gives your agents a single, open language for working together, no matter what framework or vendor they are built on. Google Cloud is leading this open project and collaborating with more than fifty industry partners (and counting) to advance a shared vision of multi-agent systems.
  • Use open standards such as the Model Context Protocol (MCP) to give agents access to your data, or connect directly to Google Cloud-managed APIs and connectors. You can ground your AI responses in Google Search, Google Maps data, or your preferred data sources.

Introducing the Agent Development Kit and Agent Garden: Building agents with an open approach

Google’s new open-source framework, the Agent Development Kit (ADK), simplifies building agents and sophisticated multi-agent systems while retaining fine-grained control over agent behaviour. With ADK, you can build an AI agent in under 100 lines of intuitive code; a minimal sketch follows.
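The example below is a rough sketch of what a single ADK agent definition can look like in Python. The module path, the Agent constructor arguments, and the get_weather tool are assumptions based on ADK's published quickstart pattern, so exact names may differ between versions.

```python
# Minimal ADK agent sketch (illustrative; module path, constructor arguments,
# and the example tool are assumptions and may differ by ADK version).
from google.adk.agents import Agent


def get_weather(city: str) -> dict:
    """Hypothetical tool: returns a canned weather report for a city."""
    return {"status": "success", "report": f"It is sunny in {city}."}


root_agent = Agent(
    name="weather_assistant",
    model="gemini-2.5-pro-exp-03-25",  # any Model Garden model could be used here
    description="Answers simple weather questions.",
    instruction="Use the get_weather tool when the user asks about the weather.",
    tools=[get_weather],
)
```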

ADK is available now in Python (with additional languages coming later this year), and with it you can:

  • Shape how your agents behave and make decisions with precise orchestration controls and deterministic guardrails.
  • Hold human-like conversations with your agents using ADK’s bidirectional audio and video streaming. With just a few lines of code, you can move beyond text and create rich, interactive dialogue.
  • Jump-start development with Agent Garden, a collection of ready-to-use samples and tools available directly in ADK. Learn from working examples and accelerate development with pre-built agent components and patterns.
  • Choose the model that best fits your needs. ADK works with Gemini and any model available through Model Garden, which offers more than 200 models from providers such as Anthropic, Meta, Mistral AI, AI21 Labs, CAMB.AI, Qodo, and others.
  • Choose your deployment target, whether that’s local debugging or a containerised production environment such as Cloud Run, Kubernetes, or Vertex AI. ADK also supports the Model Context Protocol (MCP), enabling secure connections between your data and agents.
  • Deploy to production with ADK’s direct integration to Vertex AI. This reliable, transparent path from development to enterprise-grade deployment removes the usual overhead of moving agents into production.

ADK is optimised for Gemini and Vertex AI, but it also works with your favourite tools. For example, the enhanced reasoning of Gemini 2.5 Pro Experimental lets AI agents built with ADK break down complex problems, while its tool-use capabilities let them interact with your preferred platforms. You can then use ADK’s native integration with Vertex AI to deploy such an agent to a fully managed runtime and run it at enterprise scale.
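As a rough illustration of that path from ADK to the managed runtime, the sketch below assumes the Vertex AI SDK's agent_engines module and AdkApp wrapper; the module names, parameters, and staging requirements are assumptions that vary by SDK version, and the project and bucket values are placeholders.

```python
# Sketch: deploying an ADK agent to Vertex AI Agent Engine
# (assumed SDK surface: vertexai.agent_engines and reasoning_engines.AdkApp).
import vertexai
from vertexai import agent_engines
from vertexai.preview import reasoning_engines
from google.adk.agents import Agent

vertexai.init(
    project="my-project",                      # placeholder project ID
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",   # placeholder staging bucket
)

root_agent = Agent(
    name="weather_assistant",
    model="gemini-2.5-pro-exp-03-25",
    instruction="Answer simple weather questions.",
)

# Wrap the ADK agent and hand it to the managed runtime.
app = reasoning_engines.AdkApp(agent=root_agent)
remote_app = agent_engines.create(
    agent_engine=app,
    requirements=["google-cloud-aiplatform[adk,agent_engines]"],
)
print(remote_app.resource_name)  # fully managed, enterprise-scale deployment
```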

Introducing Agent Engine: Deploying AI agents with enterprise-grade controls

Agent Engine, Vertex AI’s fully managed runtime, makes it simple to put AI agents into production. You no longer need to rebuild your agent system when moving from prototype to production: Agent Engine handles security, evaluation, monitoring, scaling, infrastructure management, and agent context. It also integrates with ADK (or your preferred framework) for a seamless develop-to-deploy experience. Together, they let you:

  • Deploy agents built with any framework, including ADK, LangGraph, Crew.ai, and others, and with any model you like, such as Gemini, Claude from Anthropic, or models from Mistral AI. This flexibility is paired with enterprise-grade governance and compliance controls.
  • Maintain context across sessions: Agent Engine supports both short-term and long-term memory, so agents don’t start from scratch each time. Your agents can recall previous conversations and preferences while you manage sessions (see the sketch after this list).
  • Measure and improve agent quality with Vertex AI’s comprehensive evaluation tools. Improve agent performance based on real-world usage with the Example Store, or by fine-tuning models.
  • Drive broader adoption by registering your Agent Engine-hosted agents with Google Agentspace, the enterprise platform that gives employees access to Gemini, Google-quality search, and powerful agents while maintaining centralised administration and security.
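To make the session support concrete, here is a hedged sketch of creating a session on a deployed agent and streaming a query against it. It assumes create_session and stream_query methods on the deployed agent object exposed by the Vertex AI SDK; method names and return shapes may differ by version, and the resource name is a placeholder.

```python
# Sketch: keeping context with sessions on an Agent Engine-hosted agent
# (assumes create_session/stream_query on the deployed agent; names may vary).
from vertexai import agent_engines

remote_app = agent_engines.get(
    "projects/my-project/locations/us-central1/reasoningEngines/1234567890"  # placeholder
)

# A session carries short-term conversational context; Agent Engine can also
# attach long-term memory so preferences persist across sessions.
session = remote_app.create_session(user_id="user-42")

for event in remote_app.stream_query(
    user_id="user-42",
    session_id=session["id"],
    message="What did we decide about the launch date last time?",
):
    print(event)
```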

In the coming months, Google Cloud will further enhance Agent Engine with advanced tooling and testing capabilities. Your agents will be able to use computers and execute code, and you will be able to rigorously test agents against a variety of user personas and realistic tools in a dedicated simulation environment to ensure reliability in production.

Introducing the Agent2Agent protocol: Connecting agents across your enterprise ecosystem

One of the biggest obstacles to enterprise AI adoption is getting agents built on different frameworks and by different vendors to work together. To address this, Google Cloud collaborated with industry leaders who share its vision for multi-agent systems to develop an open Agent2Agent (A2A) protocol.

The Agent2Agent protocol lets agents from different ecosystems communicate with one another, regardless of the framework (ADK, LangGraph, Crew.ai, or others) or vendor they are built on. Through A2A, agents publish their capabilities, negotiate how they will interact with users (text, forms, or bidirectional audio/video), and collaborate securely.
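To give a feel for how an agent advertises itself under A2A, here is an illustrative agent card written as a Python dict. The field names are an assumption based on the public A2A draft, not a normative schema, and the endpoint and skill are hypothetical.

```python
# Illustrative A2A "agent card" (field names are an assumption based on the
# public A2A draft; agents typically publish this as JSON at a well-known URL).
agent_card = {
    "name": "Invoice Lookup Agent",
    "description": "Finds and summarises invoices for a customer account.",
    "url": "https://agents.example.com/invoice-lookup",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {
        "streaming": True,            # supports streamed responses
        "pushNotifications": False,
    },
    "skills": [
        {
            "id": "find_invoice",
            "name": "Find invoice",
            "description": "Look up an invoice by customer and date range.",
        }
    ],
}
```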

Beyond collaborating with one another, your agents need access to your enterprise truth: the ecosystem of data sources, APIs, and business capabilities you have already built. Rather than starting from scratch, you can connect agents to that existing enterprise truth in whichever way suits you:

Because ADK supports the Model Context Protocol (MCP), your agents can use the growing ecosystem of MCP-compatible products to connect to the many data sources and capabilities you already rely on.
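As a sketch of what that can look like in practice, the snippet below assumes ADK's MCPToolset helper and a stdio-launched MCP server; the import path, parameters, and the example filesystem server are assumptions that may differ across ADK and MCP versions.

```python
# Sketch: attaching an MCP server's tools to an ADK agent
# (assumed helper: MCPToolset with stdio connection params; details may vary).
from google.adk.agents import Agent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

filesystem_tools = MCPToolset(
    connection_params=StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/data/reports"],  # example MCP server
    )
)

docs_agent = Agent(
    name="report_reader",
    model="gemini-2.5-pro-exp-03-25",
    instruction="Answer questions using the report files exposed by the MCP server.",
    tools=[filesystem_tools],
)
```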

You can also connect your agents directly to your enterprise capabilities and systems using ADK. That includes data stored in systems such as AlloyDB, BigQuery, and NetApp, as well as more than 100 pre-built connectors and workflows built with Application Integration. For example, you can build AI agents on your existing NetApp data without duplicating it.

ADK also makes it easy to call tools from a variety of sources, including MCP, LangChain, CrewAI, Application Integration, and any OpenAPI endpoint, or to invoke existing agents built on other frameworks such as LangGraph.
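For instance, reusing an existing LangChain tool inside an ADK agent might look roughly like the sketch below; the LangchainTool wrapper and the Wikipedia tool are assumptions about the respective libraries' surfaces and versions.

```python
# Sketch: wrapping a LangChain community tool for use in an ADK agent
# (assumes ADK's LangchainTool wrapper; import paths may differ by version).
from google.adk.agents import Agent
from google.adk.tools.langchain_tool import LangchainTool
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper

wikipedia = LangchainTool(tool=WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper()))

research_agent = Agent(
    name="research_assistant",
    model="gemini-2.5-pro-exp-03-25",
    instruction="Use Wikipedia when background facts are needed.",
    tools=[wikipedia],
)
```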

Apigee API Management oversees more than 800,000 APIs that run your business, both inside and outside Google Cloud. With the right authorisation, your agents can use ADK to tap into these existing API investments, wherever they live.

Once connected, you can ground your AI responses with data from sources such as Google Search, or with specialised information from providers such as Zoominfo, S&P Global, HGInsights, Cotality, and Dun & Bradstreet. Today, Google Cloud is also enabling you to ground agents with Google Maps for use cases that depend on geospatial context. With 100 million data points updated daily to keep information current and accurate, Grounding with Google Maps lets your agents respond with geographical data tied to millions of places across the United States.
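As one concrete example, grounding an ADK agent's answers with Google Search might look like the sketch below; it assumes the built-in google_search tool that ADK exposes for Gemini models, and exact names may vary by version.

```python
# Sketch: grounding agent responses with Google Search
# (assumes ADK's built-in google_search tool, designed for Gemini models).
from google.adk.agents import Agent
from google.adk.tools import google_search

grounded_agent = Agent(
    name="grounded_answering_agent",
    model="gemini-2.5-pro-exp-03-25",
    instruction="Answer questions and ground factual claims in fresh search results.",
    tools=[google_search],
)
```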

Creating trustworthy agents: Building AI agents with enterprise-grade security

Beyond functionality, enterprise AI agents in production face security risks such as inappropriate content generation, unauthorised data access, and prompt injection attacks. Gemini and Vertex AI address these risks at several levels. You can:

  • Control agent output with Gemini’s built-in safety features, including configurable content filters and system instructions that set boundaries around prohibited topics and align with your brand voice.
  • Manage agent permissions with identity controls, deciding whether agents act with dedicated service accounts or on behalf of specific users, to prevent privilege escalation and unauthorised access.
  • Safeguard sensitive data with Google Cloud’s VPC Service Controls, which confine agent activity within secure perimeters, prevent data exfiltration, and limit the potential blast radius.
  • Build guardrails around your agents to control interactions at every stage, from screening inputs before they reach models to checking parameters before tool execution. You can set up defensive boundaries that enforce restrictions, such as limiting database queries to specific tables, or add safety validators backed by lightweight models (a sketch follows this list).
  • Monitor agent behaviour automatically with comprehensive tracing tools that reveal every step an agent takes, including its execution paths, tool selection, and reasoning.
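A lightweight version of such a guardrail could be expressed as a tool callback, sketched below under the assumption that ADK exposes a before_tool_callback hook on its agents; the query tool and the table allow-list are hypothetical.

```python
# Sketch: a defensive boundary that checks tool parameters before execution
# (assumes ADK's before_tool_callback hook; tool and allow-list are hypothetical).
from google.adk.agents import Agent

ALLOWED_TABLES = {"orders", "customers"}  # hypothetical allow-list


def run_sql(table: str, where: str) -> dict:
    """Hypothetical tool that queries a database table."""
    return {"rows": []}


def block_disallowed_tables(tool, args, tool_context):
    """Return an error result instead of running the tool if the table is not allowed."""
    if tool.name == "run_sql" and args.get("table") not in ALLOWED_TABLES:
        return {"error": f"Queries against table '{args.get('table')}' are not permitted."}
    return None  # None means: proceed with the real tool call


guarded_agent = Agent(
    name="reporting_agent",
    model="gemini-2.5-pro-exp-03-25",
    instruction="Answer reporting questions by querying the approved tables only.",
    tools=[run_sql],
    before_tool_callback=block_disallowed_tables,
)
```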

Start building multi-agent systems

The real value of Vertex AI lies not just in the individual capabilities above, but in how they work together as a cohesive whole. What previously required stitching together disparate solutions from multiple providers now comes together on a single platform, eliminating painful trade-offs between model choice, integration with enterprise apps and data, and production readiness.
