Microsoft offers politicians protection against deepfakes

Amid growing concern that AI makes it easier to spread misinformation, Microsoft is offering its services, including a digital watermark that identifies AI-generated content, to help crack down on deepfakes and strengthen cybersecurity ahead of elections around the world.

In a blog post co-authored by Microsoft president Brad Smith and Teresa Hutson, Microsoft’s corporate vice president for Technology for Fundamental Rights, the company said it will offer several services to protect election integrity, including a new tool that harnesses the Content Credentials watermarking system developed by the Coalition for Content Provenance and Authenticity (C2PA). The goal of the service is to help candidates protect the use of their content and likeness and prevent deceptive information from being shared.

Called Content Credentials as a Service, the tool lets users such as electoral campaigns attach provenance information to an image or video’s metadata, including when, how, and by whom the content was created, and whether AI was involved in making it. This information becomes a permanent part of the image or video. C2PA, a group of companies founded in 2019 to develop technical standards for certifying content provenance, launched Content Credentials this year. Adobe, a member of C2PA, released a Content Credentials symbol that can be attached to photos and videos in October.
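To make the idea concrete, the sketch below shows the kind of provenance record such a system embeds in a file’s metadata. It is a minimal illustration only: the field names (creator, created_at, tool, ai_generated) and the plain-JSON serialization are assumptions for readability, not the actual C2PA manifest schema or Microsoft’s API.

```python
# Illustrative sketch of a Content Credentials-style provenance record.
# Field names and structure here are hypothetical, not the real C2PA schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ProvenanceRecord:
    creator: str        # who produced the content
    created_at: str     # when it was produced (ISO 8601 timestamp)
    tool: str           # how it was produced (capture device or software)
    ai_generated: bool  # whether AI was involved in creating it


def build_manifest(record: ProvenanceRecord) -> str:
    """Serialize the record so it could be attached to an image or video's
    metadata and later inspected by anyone checking the file's origin."""
    return json.dumps(asdict(record), indent=2)


if __name__ == "__main__":
    record = ProvenanceRecord(
        creator="Example Campaign Press Office",
        created_at=datetime.now(timezone.utc).isoformat(),
        tool="Example Camera App 1.0",
        ai_generated=False,
    )
    print(build_manifest(record))
```

In the real system, the manifest is cryptographically signed and bound to the asset, so stripping or altering the provenance data can be detected; this sketch omits the signing step entirely.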

Content Credentials as a Service will launch in the spring of next year and will first be made available to political campaigns. The tool was built by Microsoft’s Azure team. The Verge reached out to Microsoft for more information on the new service.


“Given the technology-based nature of the threats involved, it’s important for governments, technology companies, the business community, and civil society to adopt new initiatives, including by building on each other’s work,” Smith and Hutson said.

Microsoft said it has formed a team that will provide advice and support to campaigns on strengthening cybersecurity protections and working with AI. The company will also set up what it calls an Election Communications Hub, where governments around the world can get access to Microsoft’s security teams in the run-up to elections.

Smith and Hutson said Microsoft will endorse the Protect Elections from Deceptive AI Act introduced by Sens. Amy Klobuchar (D-MN), Chris Coons (D-DE), Josh Hawley (R-MO), and Susan Collins (R-ME). The bill seeks to ban the use of AI to make “materially deceptive content falsely depicting federal candidates.”

“We will use our voice as a company to support legislative and legal changes that will add to the protection of campaigns and electoral processes from deepfakes and other harmful uses of new technologies,” Smith and Hutson wrote.

Microsoft also plans to work with groups like the National Association of State Election Directors, Reporters Without Borders, and the Spanish news agency EFE to surface reputable sources of election information on Bing. The company said this extends its previous partnership with NewsGuard and ClaimReview. It also plans to regularly release reports on foreign influence operations in key elections, and it has already published the first report analyzing threats from foreign malign influence.

Some political campaigns have already been criticized for circulating manipulated photos and videos, though not all of them were created with AI. Bloomberg reported that Ron DeSantis’ campaign released fake images of his rival Donald Trump posing with Anthony Fauci in June, and that the Republican National Committee promoted a faked video depicting an apocalyptic United States that it blamed on the Biden administration. Both were relatively benign acts but were cited as examples of how the technology creates openings to spread misinformation.


Misinformation and deepfakes are a problem in any modern election, but the ease of using generative AI tools to create deceptive content has fueled concern that they will be used to mislead voters. The US Federal Election Commission (FEC) is discussing whether to ban or limit AI in political campaigns. Rep. Yvette Clarke (D-NY) has also filed a bill in the House that would compel candidates to disclose their use of AI.

However, there is concern that watermarks like Content Credentials will not be enough to stop disinformation outright. Watermarking is also a central feature of the Biden administration’s executive order on AI.

Microsoft is not the only Big Tech company hoping to curb AI misuse in elections. Meta now requires political advertisers to disclose AI-generated content after it banned them from using its generative AI ad tools.
