Agent Factory: From prototype to production—developer tools and rapid agent development
Creating an agent is no longer the hard part; what matters is how quickly and efficiently you can turn a concept into a fully operational system that is ready for enterprise deployment.
This post is the fourth installment in the six-part series titled Agent Factory, which shares best practices, design patterns, and tools to support you in developing and implementing agentic AI.
The Importance of Developer Experience for Scaling
AI agents have rapidly moved from experiments to production systems. Across sectors, developers are prototyping in their Integrated Development Environments (IDEs) one week and shipping production agents to thousands of users the next. The crucial question is no longer whether you can build an agent, but how quickly and effectively you can turn that idea into a business-ready system.
Current industry trends highlight this evolution:
- In-repo AI development: Models, prompts, and evaluations are now integral parts of GitHub repositories, providing developers a consolidated space to construct, test, and refine AI capabilities.
- Enhanced coding agents: GitHub Copilot now includes a coding agent that can take on tasks such as fixing bugs or writing tests and automatically open a pull request when it finishes, functioning as a helpful asynchronous team member.
- Maturing open frameworks: Communities surrounding LangGraph, LlamaIndex, CrewAI, AutoGen, and Semantic Kernel are expanding rapidly, with “agent templates” emerging as common elements in GitHub repositories.
- Emerging open protocols: Standards like the Model Context Protocol (MCP) and Agent-to-Agent (A2A) are fostering interoperability across different platforms.
Developers prefer to maintain their current workflows—using GitHub, VS Code, and familiar frameworks—while harnessing enterprise-level runtimes and integrations. The winning platforms will be those that support developers in their existing environments, focusing on openness, speed, and reliability.
Essential Features of a Modern Agent Platform
Through our interactions with clients and the open-source community, we’ve identified key features that developers require. A modern agent platform should do more than just provide models or orchestration; it needs to empower teams throughout the entire lifecycle:
- Local-first prototyping: Developers want to maintain their workflow. This means they should be able to design, trace, and evaluate AI agents directly within their IDEs, just as easily as writing and debugging code. If developing an agent demands navigating away from familiar tools, both the speed of iteration and overall adoption can suffer.
- Smooth transition to production: A common source of frustration is when an agent that works perfectly in a local environment becomes fragile or requires significant rewrites during deployment. The ideal platform offers a single, consistent API interface from experimentation through to deployment, ensuring that what functions in development remains functional in production, with built-in support for scaling, security, and governance.
- Open by design: No two organisations have the same tech stack. Some developers might work with LangGraph for orchestration, others with LlamaIndex for data retrieval, or even CrewAI for coordination. Meanwhile, some may prefer Microsoft’s own options like Semantic Kernel or AutoGen. A modern platform should accommodate this diversity while avoiding vendor lock-in, yet still provide robust pathways for those aiming for enterprise-grade solutions.
- Interop by design: Agents should integrate seamlessly with tools, databases, and other agents from various ecosystems. Proprietary protocols often lead to isolated systems and fragmentation. By adopting open standards like the Model Context Protocol (MCP) and Agent-to-Agent (A2A), collaboration across platforms is facilitated, creating a market for interoperable tools and reusable agent capabilities.
- A comprehensive integration framework: The true value of an agent materialises when it can perform meaningful actions—like updating records in Dynamics 365, initiating workflows in ServiceNow, querying SQL databases, or sending messages on Teams. Developers shouldn’t need to recreate connectors for each integration. A solid agent platform should provide a wide range of ready-made connectors and streamlined methods for integration into enterprise systems.
- Built-in safety measures: Enterprises can’t risk having agents that are unclear, unreliable, or non-compliant. Observability, evaluations, and governance need to be embedded within the development process—not treated as an afterthought. The ability to track agent decision-making, conduct continuous evaluations, and enforce identity, security, and compliance policies is as vital as the models themselves.
How Azure AI Foundry Facilitates This Experience
Azure AI Foundry is designed to meet developers where they are, while giving enterprises the trust, security, and scalability they require. It connects IDEs, frameworks, protocols, and business channels, ensuring a smooth progression from prototype to production.
Develop Within Familiar Tools: VS Code, GitHub, and Foundry
Developers want to create, debug, and refine AI agents in their everyday tools without switching contexts. Foundry integrates deeply with both VS Code and GitHub to support this.
- VS Code Extension for Foundry: This extension enables developers to create, run, and debug agents locally while directly connecting to Foundry resources. It scaffolds projects, offers integrated tracing and evaluation, and allows one-click deployment to Foundry Agent Service—all within the IDE you already know.
- Model Inference API: With a consolidated inference endpoint, developers can assess performance across models and swap them out without needing to rewrite code. This adaptability speeds up experimentation and future-proofs applications against a rapidly evolving model landscape.
- GitHub Copilot and the Coding Agent: Copilot has evolved beyond mere autocomplete to become a fully autonomous coding agent that can resolve issues, set up a secure runner, and generate pull requests, demonstrating how agent development is becoming integrated into the developer workflow. When combined with Azure AI Foundry, developers can expedite agent creation, using Copilot to generate code while incorporating the necessary models, agent runtime, and observability tools from Foundry.
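The value of a consolidated inference endpoint is that swapping models becomes a one-line change. The sketch below is illustrative rather than the Foundry SDK itself: `call_model` is a stub standing in for a real client bound to a unified endpoint, and the model names are placeholders.

```python
# Illustrative sketch: one request shape, many models (all names are
# placeholders). A real client would send this payload to a single
# inference endpoint; call_model is a stub so the pattern is testable.

def build_request(model: str, prompt: str) -> dict:
    """The payload shape stays identical; only the model name changes."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def call_model(request: dict) -> str:
    # Stand-in for a real completion call against the unified endpoint.
    return f"[{request['model']}] echo: {request['messages'][0]['content']}"

def compare_models(models: list[str], prompt: str) -> dict[str, str]:
    """Run the same prompt against every candidate model, no rewrites."""
    return {m: call_model(build_request(m, prompt)) for m in models}
```

Because every model sits behind the same call shape, A/B-testing candidates is as simple as `compare_models(["model-a", "model-b"], prompt)`.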
Embrace Your Frameworks
Agent development is not one-size-fits-all, and developers tend to reach for the frameworks they know best. Foundry embraces this variety:
- First-party frameworks: Foundry supports both Semantic Kernel and AutoGen, with plans for an eventual merge into a modern unified framework. This future-focused framework is designed for modularity, enterprise-level dependability, and seamless deployment to the Foundry Agent Service.
- Third-party frameworks: Foundry Agent Service integrates naturally with CrewAI, LangGraph, and LlamaIndex, facilitating developers in orchestrating multi-turn, multi-agent interactions across different platforms. This ensures that you can utilize your preferred open-source ecosystem while still leveraging Foundry’s enterprise-grade runtime.
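Whichever framework you choose, the underlying orchestration pattern is similar: specialised agents as callable steps, with shared state routed between them. The framework-neutral sketch below illustrates that pattern; all names are hypothetical, and in LangGraph these would be nodes in a graph, while in Semantic Kernel they would be invokable plugins.

```python
# Framework-neutral sketch of multi-agent orchestration: each "agent" is a
# callable step, and a coordinator passes a shared state dict through them.
# All agent names and state keys here are illustrative.
from typing import Callable

Agent = Callable[[dict], dict]

def researcher(state: dict) -> dict:
    # A real agent would call retrieval tools; this one records a finding.
    state["findings"] = f"notes on {state['task']}"
    return state

def writer(state: dict) -> dict:
    # Turns the researcher's findings into a draft.
    state["draft"] = f"Report based on: {state['findings']}"
    return state

def run_pipeline(agents: list[Agent], task: str) -> dict:
    """Pass shared state through each agent in turn."""
    state: dict = {"task": task}
    for agent in agents:
        state = agent(state)
    return state
```

Running `run_pipeline([researcher, writer], "Q3 sales")` hands the task to each specialist in sequence, which is the same handoff shape these frameworks formalise.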
Supporting Interoperability Through Open Protocols
Agents don’t function in isolation; they must interface with tools, systems, and even other agents. Foundry inherently supports open protocols:
- MCP: Foundry Agent Service allows agents to access any MCP-compatible tools directly, providing developers with a straightforward method to link external systems and reuse tools across platforms.
- A2A: Semantic Kernel incorporates A2A to enable agents to coordinate across various runtimes and ecosystems. A2A allows multi-agent workflows to encompass different vendors and frameworks, unlocking scenarios where specialised agents collaborate to resolve complex issues.
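At the wire level, MCP tool invocation is a JSON-RPC 2.0 exchange: the client sends a `tools/call` request naming the tool and its arguments. The sketch below builds that envelope; the tool name and arguments are hypothetical, and a real client would also handle initialization and `tools/list` discovery.

```python
# Sketch of the JSON-RPC 2.0 request an MCP client sends to invoke a tool.
# The method and params follow MCP's "tools/call" shape; the tool name and
# arguments below are hypothetical.
import json
from itertools import count

_ids = count(1)  # JSON-RPC requests need unique ids

def mcp_tool_call(tool: str, arguments: dict) -> str:
    """Serialise a tools/call request for an MCP-compatible server."""
    request = {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(request)
```

Because the envelope is the same for every MCP server, a tool written once can be reused by any MCP-aware agent, which is exactly the interoperability the protocol is designed to enable.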
Deploy Where Your Users Work
Building an agent is just the beginning; its real value comes when it is available in the channels your users already work in. Foundry simplifies publishing agents to both Microsoft and custom channels:
- Microsoft 365 and Copilot: By utilising the Microsoft 365 Agents SDK, developers can publish Foundry agents directly to Teams, Microsoft 365 Copilot, BizChat, and other productivity tools.
- Custom apps and APIs: Agents can be presented as REST APIs, embedded within web applications, or integrated into workflows using Logic Apps and Azure Functions, with thousands of prebuilt connectors available for SaaS and enterprise systems.
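Exposing an agent as a REST API can be as thin as a handler that forwards a message to the hosted runtime and returns the reply. The standard-library sketch below shows the shape; `run_agent` is a stub for the call to a deployed agent, and the route and payload fields are illustrative.

```python
# Minimal sketch of fronting an agent with an HTTP endpoint, stdlib only.
# run_agent is a stub: a real handler would forward the message to the
# deployed agent runtime. Payload fields and port are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_agent(message: str) -> dict:
    # Stand-in for a call to the hosted agent.
    return {"reply": f"agent processed: {message}"}

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(run_agent(payload.get("message", ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), AgentHandler).serve_forever()
```

In production this thin wrapper would typically sit behind an API gateway or be replaced by a Logic Apps or Azure Functions integration, but the request/response contract stays the same.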
Monitor and Strengthen
Reliability and security can’t just be added in later—they need to be integral to the development process. As discussed in our previous post, observability is crucial for delivering effective and trustworthy AI. Foundry incorporates these functions directly into the developer workflow:
- Tracing and evaluation tools are available for debugging, comparing, and validating agent performance both before and after deployment.
- CI/CD integration with GitHub Actions and Azure DevOps ensures continuous evaluation and governance checks are conducted with every code commit.
- Enterprise safeguards cover everything from networking and identity security to compliance and governance, enabling prototypes to scale confidently into production.
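Continuous evaluation in CI usually reduces to a gate: score each eval case, compute a pass rate, and fail the build when it drops below a threshold. The sketch below is an assumption about how such a gate might look; the result format, 0.7 per-case cutoff, and 0.9 ship threshold are all illustrative, and in practice the scores would come from a tracing/evaluation run.

```python
# Sketch of a CI evaluation gate: fail the pipeline when the agent's eval
# pass rate drops below a threshold. The result format and both thresholds
# are illustrative assumptions, not a real Foundry evaluation schema.

def pass_rate(results: list[dict]) -> float:
    """Fraction of eval cases scoring at least 0.7 (cutoff is assumed)."""
    if not results:
        return 0.0
    passed = sum(1 for r in results if r["score"] >= 0.7)
    return passed / len(results)

def gate(results: list[dict], minimum: float = 0.9) -> bool:
    """Return True when the evaluation run is good enough to ship."""
    return pass_rate(results) >= minimum
```

Wired into a GitHub Actions or Azure DevOps job, a nonzero exit when `gate(...)` is false blocks the merge, which is what makes evaluation a check on every commit rather than an afterthought.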
Why This Matters Now
The experience offered to developers is becoming a major factor in productivity. Enterprises need to ensure their teams can build and deploy AI agents efficiently, confidently, and at scale. Azure AI Foundry provides an open, modular, and enterprise-ready pathway—integrating seamlessly with GitHub and VS Code, accommodating both open source and proprietary frameworks, and ensuring agents can be deployed where users and data already exist.
With Foundry, the journey from prototype to production becomes smoother, faster, and more secure—enabling organisations to innovate at the pace of AI.
What’s Ahead
In Part 5 of the Agent Factory series, we will explore how agents connect and collaborate at scale. We’ll clarify the integration landscape—from agent-to-agent collaboration with A2A to tool interoperability with MCP—and how open standards help agents operate effectively across applications, frameworks, and ecosystems. Expect practical tips and reference patterns for building well-connected agent systems.
Did you catch the previous posts in the series?