Presented by Google Cloud
AI agents are approaching the kind of breakthrough moment that APIs had in the early 2010s. At that time, REST and JSON unlocked system-to-system integration at scale by simplifying what had been a tangle of SOAP, WSDL, and tightly coupled web services. That change didn’t just make developers more productive; it enabled entire business ecosystems built around modular software.
A similar shift is underway in artificial intelligence. As agents become more capable and specialized, enterprises are discovering that coordination is the next big challenge. Two open protocols — Agent2Agent (A2A) and Model Context Protocol (MCP) — are emerging to meet that need. They simplify how agents share tasks, exchange information, and access enterprise context, even when they were built using different models or tools.
These protocols are more than technical conveniences. They are foundational to scaling intelligent software across real-world workflows.
AI systems are moving beyond general-purpose copilots. In practice, most enterprises are designing agents to specialize: managing inventory, handling returns, optimizing routes, or processing approvals. Value comes not only from their intelligence, but from how these agents work together.
A2A provides the mechanism for agents to interact across systems. It allows agents to advertise their capabilities, discover others, and send structured requests. Built on JSON-RPC and OpenAPI-style authentication, A2A supports stateless communication between agents, making it simpler and more secure to run multi-agent workflows at scale.
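The shape of these interactions can be sketched as plain JSON. The fields below are illustrative rather than authoritative: the Agent Card layout and the `message/send` JSON-RPC method loosely follow the published A2A draft, but the endpoint URL, skill names, and exact schema here are assumptions — consult the current specification for the precise format.

```python
import json

# A minimal, hypothetical A2A "Agent Card": the JSON document an agent
# publishes so that other agents can discover its capabilities.
agent_card = {
    "name": "inventory-agent",
    "description": "Answers stock-level questions for the retail catalog",
    "url": "https://agents.example.com/inventory",  # illustrative endpoint
    "capabilities": {"streaming": False},
    "skills": [
        {"id": "check_stock", "description": "Report availability for a SKU"}
    ],
}

# A2A messages travel as JSON-RPC 2.0 requests. This sketches one request
# from a customer-service agent to the inventory agent above.
request = {
    "jsonrpc": "2.0",
    "id": "req-001",
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Is SKU 12345 in stock?"}],
        }
    },
}

print(json.dumps(request, indent=2))
```

Because both the card and the request are ordinary JSON, any runtime that can serve HTTP and speak JSON-RPC can participate, regardless of which model or framework powers the agent behind it.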
MCP complements A2A by giving agents standardized access to the tools, data, and context they need. It defines a common framework for connecting to diverse enterprise systems: once a service provider stands up an MCP server, its functionality becomes available to any agent that speaks the protocol, enabling more intelligent and coordinated action across the ecosystem.
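MCP also speaks JSON-RPC 2.0: a client first discovers what a server offers, then invokes a tool. The method names below (`tools/list`, `tools/call`) come from the MCP specification, while the tool name `query_orders` and its arguments are purely illustrative of how an enterprise data source might be exposed.

```python
import json

# Step 1: ask an MCP server which tools it exposes.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Step 2: call one of those tools. The tool name and argument schema are
# hypothetical; a real server defines its own.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_orders",
        "arguments": {"customer_id": "C-42", "status": "returned"},
    },
}

for msg in (list_tools, call_tool):
    print(json.dumps(msg))
```

The key point is that the agent never hand-codes the backend's native API; it discovers and calls tools through one uniform interface, so the same agent can work against any MCP server.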
Rather than requiring organizations to glue systems together manually, these protocols make it possible to adopt a shared foundation for AI collaboration that works across the ecosystem.
Why it’s gaining traction quickly
Google Cloud initiated A2A as an open standard and published its draft in the open, encouraging contributions from across the industry. More than 50 partners have participated in its evolution, including Salesforce, Deloitte, and UiPath. Microsoft now supports A2A in Azure AI Foundry and Copilot Studio; SAP has integrated A2A into its Joule assistant.
Other examples are emerging across the ecosystem. Zoom is using A2A to facilitate cross-agent interactions in its open platform. Box and Auth0 are demonstrating how enterprise authentication can be handled across agents using standardized identity flows.
This kind of participation is helping the protocol mature quickly, both in specification and in tooling. The Python A2A SDK is stable and production-ready. Google Cloud has also released the Java Agent Development Kit to broaden support for enterprise development teams. Renault Group is among the early adopters already deploying these tools.
Multi-agent workflows unlock new enterprise use cases
The transition from standalone agents to coordinated systems is already underway.
Imagine a scenario where a customer service agent receives a request. It uses A2A to check with an inventory agent about product availability. It then consults a logistics agent to recommend a shipping timeline. If needed, it loops in a finance agent to issue a refund. Each of these agents may be built using different models, toolkits, or platforms — but they can interoperate through A2A and MCP.
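The coordination pattern above can be reduced to a toy sketch. Each "agent" here is a plain Python callable standing in for a remote A2A endpoint; the agent names, data, and return shapes are invented for illustration and are not part of any specification.

```python
def inventory_agent(sku: str) -> dict:
    """Stand-in for a remote inventory agent reached over A2A."""
    stock = {"SKU-1": 3, "SKU-2": 0}  # hypothetical catalog
    return {"sku": sku, "in_stock": stock.get(sku, 0) > 0}

def logistics_agent(sku: str) -> dict:
    """Stand-in for a logistics agent recommending a shipping timeline."""
    return {"sku": sku, "ship_days": 2}  # fixed estimate for the sketch

def finance_agent(order_id: str) -> dict:
    """Stand-in for a finance agent that issues refunds."""
    return {"order_id": order_id, "refund": "issued"}

def customer_service_agent(sku: str, order_id: str) -> dict:
    """Coordinate the specialist agents the way an A2A workflow would."""
    availability = inventory_agent(sku)
    if availability["in_stock"]:
        shipping = logistics_agent(sku)
        return {"action": "ship", "eta_days": shipping["ship_days"]}
    # Out of stock: loop in the finance agent to refund the order.
    refund = finance_agent(order_id)
    return {"action": "refund", "status": refund["refund"]}

print(customer_service_agent("SKU-2", "ORD-77"))  # out of stock, so refund
```

In a real deployment each function body would be an A2A request to a separately built and hosted agent; the orchestration logic stays the same because the protocol, not the implementation, defines the contract between them.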
In more advanced settings, this pattern enables use cases like live operations management. For example, an AI agent monitoring video streams at a theme park could coordinate with operations agents to adjust staff allocation based on real-time crowd conditions. Video, sensor, and ticketing data can be made available through tools like BigLake metastore and accessed by agents through MCP. Decisions are made and executed across agents, with minimal need for human orchestration.
Architecturally, this is a new abstraction layer
MCP and A2A represent more than messaging protocols. They are part of a broader shift toward clean, open abstractions in enterprise software.
These agent protocols decouple intelligence from integration. With MCP, developers don’t need to hand-code API access for every data source. With A2A, they don’t need to maintain brittle logic for how agents interact.
The result is a more maintainable, secure, and portable approach to building intelligent multi-agent systems — one that scales across business units and platforms.
Google Cloud’s investment in open agent standards
Google Cloud’s contributions to the ecosystem are both foundational and practical. We are working with Anthropic on MCP, have released A2A as an open specification, and have backed both with production-grade tooling. These protocols are deeply integrated into our AI platforms, including Vertex AI, where multi-agent workflows can be developed and managed directly. It is great to see other cloud providers embracing the MCP and A2A standards.
By releasing the Agent Development Kit for both Python and Java, and by making these components modular and extensible, Google Cloud is enabling teams to adopt these standards without reinventing infrastructure. The Agent Development Kit now also features built-in tools for accessing data in BigQuery, making it easy to build your own agents backed by your enterprise data.
We are committed to enabling access to BigQuery, AlloyDB, and other Google Cloud data services via the MCP and A2A protocols. You can get started today with MCP Toolbox for Databases, which exposes your database queries as MCP tools. We are continuously adding more tools via MCP so that developers can build even more sophisticated agents using the native capabilities of BigQuery.
Why this is worth tracking closely
For organizations investing in AI agents today, interoperability is going to matter more with each passing quarter. Systems built around isolated agents will struggle to scale; systems built on shared protocols will be more agile, collaborative, and future-proof.
This transition echoes the rise of APIs in the last decade. REST and JSON didn’t just improve efficiency; they became the foundation of modern cloud applications. MCP and A2A are poised to do the same for AI agents.
Adopting these protocols doesn’t require a full system rebuild. The point is to create flexibility: to allow agents developed internally or by vendors to collaborate and operate with context, using standards that are already gaining support across the industry.
For companies evaluating their AI stack, it’s worth asking whether their agents will be able to talk to each other — and what happens when they can’t.
Tomas Talius is VP of Engineering at Google BigQuery.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact