Microsoft has revised its Azure OpenAI Service policy to prohibit law enforcement agencies from using the service for facial recognition.
The Azure OpenAI Service gives Microsoft's cloud customers access to models such as GPT-4 Turbo and DALL-E.
Under the updated terms, U.S. police departments are explicitly barred from employing OpenAI models for facial recognition tasks.
While facial recognition systems rely primarily on visual data, OpenAI models like GPT-4 could still play a supporting role, for example by improving user interfaces or generating natural language responses around such systems.
The update comes shortly after Axon Enterprise unveiled an AI-powered tool for summarizing audio from police body cameras.
The Azure OpenAI Service's Code of Conduct also bars law enforcement agencies worldwide from using its models for real-time facial recognition.
That restriction covers officers using body-worn or dashboard cameras, including potential deployments by French police at the upcoming Paris Olympics.
Microsoft's policy also prohibits the use of OpenAI models for manipulating or deceiving individuals, creating romantic chatbots, and implementing social scoring systems.
The updated policy underscores Microsoft's commitment to responsible AI, restricting how OpenAI models may be applied in law enforcement and other privacy-sensitive contexts.