Microsoft bans U.S. police from using enterprise AI tool for facial recognition

https://techcrunch.com/2024/05/02/microsoft-bans-u-s-police-departments-from-using-enterprise-ai-tool/

Microsoft has reaffirmed its ban on U.S. police departments using generative AI for facial recognition through Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper around OpenAI tech.

Language added Wednesday to the terms of service for Azure OpenAI Service more clearly prohibits integrations with Azure OpenAI Service from being used “by or for” police departments for facial recognition in the U.S., including integrations with OpenAI’s current — and possibly future — image-analyzing models.

A separate new bullet point covers “any law enforcement globally,” and explicitly bars the use of “real-time facial recognition technology” on mobile cameras, like body cameras and dashcams, to attempt to identify a person in “uncontrolled, in-the-wild” environments.

The changes in policy come a week after Axon, a maker of tech and weapons products for military and law enforcement, announced a new product that leverages OpenAI’s GPT-4 generative text model to summarize audio from body cameras. Critics were quick to point out the potential pitfalls, like hallucinations (even the best generative AI models today invent facts) and racial biases introduced from the training data (which is especially concerning given that people of color are far more likely to be stopped by police than their white peers).

It’s unclear whether Axon was using GPT-4 via Azure OpenAI Service, and, if so, whether the updated policy was in response to Axon’s product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We’ve reached out to Axon, Microsoft and OpenAI and will update this post if we hear back.

The new terms leave wiggle room for Microsoft.

The complete ban on Azure OpenAI Service usage pertains only to U.S., not international, police. And it doesn’t cover facial recognition performed with stationary cameras in controlled environments, like a back office (although the terms prohibit any use of facial recognition by U.S. police).

That tracks with Microsoft’s and close partner OpenAI’s recent approach to AI-related law enforcement and defense contracts.

In January, reporting by Bloomberg revealed that OpenAI is working with the Pentagon on a number of projects including cybersecurity capabilities — a departure from the startup’s earlier ban on providing its AI to militaries. Elsewhere, Microsoft has pitched using OpenAI’s image generation tool, DALL-E, to help the Department of Defense (DoD) build software to execute military operations, per The Intercept.

Azure OpenAI Service became available in Microsoft’s Azure Government product in February, adding additional compliance and management features geared toward government agencies, including law enforcement. In a blog post, Candice Ling, SVP of Microsoft’s government-focused division Microsoft Federal, pledged that Azure OpenAI Service would be “submitted for additional authorization” to the DoD for workloads supporting DoD missions.

Update: After publication, Microsoft said its original change to the terms of service contained an error, and in fact the ban applies only to facial recognition in the U.S. It is not a blanket ban on police departments using the service. 

Kyle Wiggers is a senior reporter at TechCrunch with a special interest in artificial intelligence. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Brooklyn with his partner, a piano educator, and dabbles in piano himself occasionally — if mostly unsuccessfully.
