Microsoft implements restrictions on AI as privacy concerns rise
Microsoft is taking steps to restrict AI systems’ access to facial recognition tools. On Tuesday, the company released a 27-page report titled “Responsible AI Standard” and announced that Video Indexer, Azure Face API, and Computer Vision programs will now have limited access to facial recognition. In addition to complying with its new standard, the company will also take steps to improve its text-to-speech AI program, Azure’s Custom Neural Voice.
This restriction of facial recognition AI comes after studies determined that it disproportionately misidentifies women and people with darker skin tones, with a Harvard study showing a 34% higher error rate for darker-skinned women compared to lighter-skinned men. And with a Georgetown Law study estimating that half of all Americans are part of a law enforcement facial recognition network, it’s no surprise that many are concerned about the technology’s erratic accuracy. Since facial recognition AI is often used in criminal identification and surveillance, this misidentification can cause serious problems for both law enforcement and US citizens.
However, Microsoft is not alone in its efforts to limit this AI technology. It joins companies like Facebook, Google, and Amazon, which have restricted, limited, or stopped their own facial recognition and emotion-reading programs. That said, Microsoft isn’t shutting down or restricting all of its AI systems; it continues to use in-house programs for accessibility purposes and more. And while it may limit its AI technology for privacy and security reasons, the company allows customers to seek approval to use its facial recognition for services like face scans for website login.