Microsoft recently released a detailed report outlining its responsible AI practices.
The 40-page Responsible AI Transparency Report commends Microsoft's efforts in building generative AI responsibly.
Microsoft has launched 30 tools for developing responsible AI and over 100 features for AI customers to safely deploy solutions. The report notes a 17% increase in Microsoft’s responsible AI community, now exceeding 400 members.
All Microsoft employees are required to undergo responsible AI training, with 99% having completed related modules.
According to the report, Microsoft acknowledges its role in shaping AI technology and emphasizes its efforts in releasing generative AI technology with appropriate safeguards.
Microsoft has implemented various safety-focused initiatives, including training customers to deploy regulatory-compliant AI applications and offering legal fee coverage for companies facing intellectual property lawsuits related to its products.
Microsoft published a series of AI principles earlier, underscoring its commitment to fostering competition. However, this move came amid growing antitrust investigations.
The report highlights expanded tools for Azure customers to evaluate AI systems for issues such as hate speech and security circumventions.
Microsoft has expanded its red-teaming efforts, stressing the importance of security testing for AI models.
The report references PyRIT, Microsoft's internal security testing tool, which has garnered attention and usage among developers since its release on GitHub.
Microsoft collaborates with rivals Google and Anthropic through the Frontier Model Forum, focusing on safe AI development.
Microsoft pledges to continue investing in responsible AI efforts and to create tools for customers to safely develop AI applications.
Brad Smith, Microsoft’s president, and Natasha Crampton, the chief responsible AI officer, emphasize the company’s commitment to sharing responsible AI practices. Veera Siivonen, Saidot’s chief commercial officer, highlights the importance of transparency and proactive governance in the AI market.