Microsoft Azure AI Foundry Models and Microsoft Security Copilot achieve ISO/IEC 42001:2023 certification

Microsoft has achieved ISO/IEC 42001:2023 certification, a globally recognised standard for Artificial Intelligence Management Systems (AIMS), for both Azure AI Foundry Models and Microsoft Security Copilot. The certification underscores Microsoft’s commitment to developing and managing AI systems responsibly, securely, and transparently, and it helps customers innovate with confidence as responsible AI draws increasing attention from businesses and regulators alike.

Setting New Standards for Responsible AI with ISO/IEC 42001

ISO/IEC 42001 was established by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). It outlines a comprehensive framework for managing AI systems and covers a variety of essential aspects, from risk management and bias reduction to transparency and accountability. This globally acknowledged standard provides an effective structure for organisations to set up, implement, maintain, and enhance their AI management systems, focusing on minimising risks while seizing opportunities in the AI lifecycle.

By earning this certification, Microsoft demonstrates that Azure AI Foundry Models, including Azure OpenAI models, and Microsoft Security Copilot are developed and operated in line with responsible-innovation principles. Validation by an independent third party also gives customers added assurance, showing that Microsoft Azure follows robust governance and compliance practices aligned with Microsoft’s Responsible AI Standard.

Supporting Customers Across Industries

Whether you’re incorporating AI into regulated sectors, implementing generative AI in products, or investigating new AI applications, this certification supports customers in several ways:

  • Speed up your compliance efforts by using certified AI services that align with emerging regulations.
  • Build confidence among your users, partners, and regulators with transparent and auditable governance acknowledged through the AIMS certification.
  • Understand how Microsoft manages AI risks and promotes responsible AI development, giving you greater confidence in the solutions you build on our services.

Fostering Trust and Responsible AI within the Azure Platform

The framework of Microsoft’s Responsible AI (RAI) programme is built around four primary pillars—Govern, Map, Measure, and Manage. These pillars outline how we design and oversee AI applications and agents. They are integral to both Azure AI Foundry Models and Microsoft Security Copilot, ensuring our services are innovative, safe, and accountable.

We remain committed to our Responsible AI principles and continue to advance them through initiatives such as:

  1. Our AI Customer Commitments aimed at guiding customers in their responsible AI journey.
  2. Our first-ever Responsible AI Transparency Report which documents our evolving practices, shares our insights, and sets our goals while building trust with the public.
  3. Our Transparency Notes for Azure AI Foundry Models and Microsoft Security Copilot to help customers understand how our AI technology functions, including its capabilities and limitations.
  4. Our dedicated Responsible AI resources website, offering tools and templates to help clients establish their own responsible AI frameworks.

Guiding Your Responsible AI Journey with Trust

We understand that responsible AI involves more than technology; it also requires effective operational processes, risk management, and accountability. Microsoft supports customers through these challenges by offering both a reliable platform and the expertise needed to foster trust and compliance. Our ongoing commitments include:

  • Continuously enhancing our AI management system.
  • Grasping the expectations and needs of our clients.
  • Building on the Microsoft RAI programme and AI risk management strategies.
  • Identifying and acting on opportunities to uphold and strengthen trust in our AI products and services.
  • Collaborating with a growing network of responsible AI practitioners, regulators, and researchers to advance our responsible AI methodology.

The ISO/IEC 42001:2023 certification adds to Microsoft’s comprehensive portfolio of compliance certifications, reflecting our commitment to operational excellence and transparency and helping customers build responsibly on a cloud foundation of trust. Whether a healthcare institution aiming for fairness, a financial firm managing AI-related risks, or a government organisation strengthening ethical AI practices, Microsoft’s certifications support the broad adoption of AI while keeping pace with evolving global standards for security, privacy, and responsible AI governance.

With a foundation in security and data privacy, alongside investments in operational resilience and responsible AI, Microsoft aims to foster and maintain trust at all levels. Azure is crafted for trust, supporting innovation on a secure, resilient, and transparent platform that empowers customers to scale AI responsibly while managing changing compliance requirements and retaining control over their data and operations.

Discover More with Microsoft

As AI regulations and expectations evolve, Microsoft is dedicated to providing a trustworthy framework for AI innovation, characterised by resilience, security, and transparency. Achieving ISO/IEC 42001:2023 certification is a significant milestone in this effort, and Microsoft will continue to invest in meeting and surpassing global standards while driving responsible innovation that empowers customers to build securely and ethically at scale.

Find out how we prioritise trust in cloud innovation through our commitment to security, privacy, and compliance at the Microsoft Trust Center. See this certification and related reports, along with other compliance documentation, on the Microsoft Service Trust Portal.


The ISO/IEC 42001:2023 certification for Azure AI Foundry and Microsoft Security Copilot was granted by Mastermind, an ISO-accredited certification authority recognised by the International Accreditation Service (IAS).