Agent Factory: Connecting agents, apps, and data with new open standards like MCP and A2A
Agents are truly powerful because they can connect with one another, access enterprise data, and integrate with the systems where work actually takes place.
This blog post marks the fifth entry in our six-part series titled Agent Factory, which offers best practices, design patterns, and tools to assist you in adopting and developing agentic AI.
An agent that can’t communicate with other agents, tools, or applications can only get so far on its own. The real strength of agents lies in their ability to interconnect, tapping into enterprise data and integrating seamlessly with operational systems. That integration is what turns an agent from a smart prototype into a genuine business asset.
With Azure AI Foundry, we’re seeing these shifts everywhere: customer service agents working alongside retrieval agents to resolve complex issues, research agents collaborating across datasets to speed up discovery, and business agents automating workflows that used to require multiple human teams. The question in agent development has shifted from “Can we build one?” to “How do we make them collaborate effectively and safely at scale?”
Industry Trends Highlight Integration as the Key
Over my years at Microsoft, I’ve seen how open protocols shape ecosystems. From OData, which standardized access to data APIs, to OpenTelemetry, which gave developers a common standard for observability, open standards have consistently unlocked both innovation and scale across industries. Today, customers using Azure AI Foundry are asking for flexibility without being tied to a single vendor. We’re seeing the same trend with AI agents: proprietary, closed systems create risk when agents, tools, or data can’t interoperate, stifling innovation and raising switching costs.
- Emergence of Standard Protocols: Open standards like the Model Context Protocol (MCP) and Agent2Agent (A2A) are establishing a common language for agents to share tools, context, and outcomes across vendors. That interoperability is essential for businesses that want to pick best-of-breed solutions and still have their agents, tools, and data work together smoothly (a minimal MCP sketch follows this list).
- A2A Collaboration on Top of MCP: Specialist agents increasingly work together, with one managing scheduling, another querying databases, and another summarizing findings, much as human teams divide work toward a shared goal. Learn more about how the two protocols relate in our Agent2Agent and MCP blog.
- Creating Connected Ecosystems: Whether it’s Microsoft 365, Salesforce, or ServiceNow, businesses expect their agents to function across all applications, not just within a single platform. Integration libraries and connectors are becoming just as vital as the models themselves. Open standards make it easier for new tools and platforms to be integrated seamlessly, reducing the risk of ending up with isolated solutions.
- Interoperability Across Frameworks: Developers want the freedom to build with LangGraph, AutoGen, Semantic Kernel, or CrewAI and still have their agents talk to one another. Framework diversity is here to stay.
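To make the “common language” point concrete, here is a minimal sketch of what exposing a shared tool over MCP can look like, using the open-source MCP Python SDK. The server name and the scheduling tool are hypothetical placeholders; a production server would add authentication and call a real calendar system.

```python
# Minimal MCP server sketch: define a tool once, and any MCP-compatible agent
# or host, regardless of vendor or framework, can discover and call it.
# Assumes the open-source MCP Python SDK (`pip install mcp`); the server name
# and tool below are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("scheduling-tools")

@mcp.tool()
def find_meeting_slot(attendees: list[str], duration_minutes: int) -> str:
    """Suggest a meeting slot for the given attendees (stubbed for illustration)."""
    # A real implementation would query a calendar service here.
    return f"Next open {duration_minutes}-minute slot for {', '.join(attendees)}: tomorrow 10:00"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

Because the tool is described by the protocol rather than by any one framework, the same server can, in principle, back agents built on any MCP-aware framework without changes.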
Requirements for Integration at Scale
Through our experiences with enterprises and open-source communities, we’ve identified key requirements for connecting agents, apps, and data:
- Intentional Cross-Agent Collaboration: Multi-agent workflows require open protocols that let different runtimes and frameworks work in sync. Protocols like A2A and MCP are evolving to support deeper collaboration and integration among agents: A2A handles agent-to-agent collaboration, while MCP provides a foundational layer for sharing context and tools and for coordinating across frameworks.
- Consistent Context Sharing through Open Standards: Agents require a secure and consistent method to exchange context, tools, and results. MCP facilitates this by promoting the reuse of tools across different agents, frameworks, and vendors.
- Easy Access to Enterprise Systems: The real benefits materialize when agents can take action, such as updating a CRM record, posting in Teams, or kicking off an ERP workflow (see the client-side sketch after this list). Integration fabrics with pre-built connectors make this straightforward, letting enterprises connect both new and legacy systems without significant cost.
- Unified Observability: As workflows encompass multiple agents and applications, tracking and resolving issues becomes crucial. Teams need visibility into the reasoning behind actions taken across agents to ensure safety, compliance, and reliability. Open telemetry and evaluation standards provide enterprises with the necessary transparency to operate effectively at scale.
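To illustrate the kind of enterprise action described above, here is a rough client-side sketch of how an MCP-aware agent host could discover and call a CRM tool exposed by an MCP server. It uses the open-source MCP Python SDK; the crm_server.py script and the update_account tool are hypothetical stand-ins for a real connector.

```python
# Sketch of an MCP client discovering and invoking an enterprise tool.
# Assumes the open-source MCP Python SDK; the server script and tool name
# are hypothetical placeholders for a real CRM connector.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch a local MCP server that wraps the CRM system (hypothetical script).
    server = StdioServerParameters(command="python", args=["crm_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what the server exposes
            print("Available tools:", [tool.name for tool in tools.tools])
            # Ask the server to perform the action on the agent's behalf.
            result = await session.call_tool(
                "update_account",
                {"account_id": "ACME-42", "status": "renewed"},
            )
            print(result.content)

asyncio.run(main())
```

The same discover-then-call pattern applies whether the server fronts a CRM, Teams, or an ERP system; the agent only needs to speak MCP.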
Enabling Integration at Scale with Azure AI Foundry
Azure AI Foundry is built for this connected future. It lets agents interoperate, meet enterprise requirements, and integrate seamlessly with operational systems.
- Model Context Protocol (MCP): Foundry agents can access MCP-compatible tools directly, so developers can reuse existing connectors and tap into a growing marketplace of interoperable tools. Semantic Kernel also supports MCP for pro-code developers.
- A2A Support: Through Semantic Kernel, Foundry incorporates A2A, enabling agents to work together across different ecosystems and runtimes. Multi-agent workflows, such as a research agent collaborating with a compliance agent before drafting a report, function seamlessly.
- Enterprise Integration Fabric: Foundry ships with thousands of connectors for SaaS and enterprise systems, so agents can operate where the real business happens without developers building integrations from scratch. And with Logic Apps now supporting MCP, existing workflows and connectors can be used directly within Foundry agents.
- Holistic Observability and Governance: Tracing, evaluation, and compliance checks are built in across workflows. Developers can debug multi-agent reasoning efficiently, and enterprises can maintain identity, policy, and compliance throughout (a tracing sketch follows this list).
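To ground the tracing point in the last bullet, here is a rough sketch of what end-to-end visibility across an agent hand-off can look like with the OpenTelemetry Python SDK, in the spirit of the open telemetry standards mentioned earlier. The span names, attributes, and console exporter are illustrative placeholders, not Foundry’s built-in telemetry schema.

```python
# Sketch of cross-agent tracing with nested spans, using the OpenTelemetry
# Python SDK (`pip install opentelemetry-sdk`). Names are illustrative only.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Print spans to the console; a real deployment would use an OTLP exporter.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("agent-workflow")

# Nesting the compliance check inside the research step keeps the whole
# hand-off visible in a single trace.
with tracer.start_as_current_span("research_agent.gather_findings") as research:
    research.set_attribute("agent.name", "research-agent")
    # ... the research agent collects sources and drafts findings ...
    with tracer.start_as_current_span("compliance_agent.review_findings") as review:
        review.set_attribute("agent.name", "compliance-agent")
        # ... the compliance agent checks the findings before the report goes out ...
```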
Why This Matters Now
Businesses are looking for connected systems rather than isolated point solutions. The next frontier in AI isn’t just building smarter agents; it’s building connected agent ecosystems that operate across applications, frameworks, and vendors. Interoperability and open standards are vital to that future, giving customers flexibility, choice, and confidence that they can invest in AI without being locked into a single vendor.
Azure AI Foundry makes this achievable:
- Open protocols (MCP and A2A) for collaborative, interoperable agents.
- Enterprise connectors facilitating system integration.
- Governance and guardrails ensuring trust at scale.
With these pillars in place, organizations can transition from isolated prototypes to fully connected AI ecosystems that span the enterprise.
Looking Ahead
In the sixth and final part of the Agent Factory series, we’ll dig into one of the most important aspects of agent development: trust. Building capable agents solves only half the challenge; enterprises also need confidence that those agents meet the highest standards for security, identity, and governance.