Microsoft has released a tool designed to automatically identify deepfake videos.
The tool, dubbed Microsoft Video Authenticator, is rolling out ahead of the US presidential election and is meant to help stop the spread of digitally altered videos.
Video Authenticator will be able to “analyze a still photo or video” and give it a “confidence score” that provides the percentage chance that it has been artificially manipulated, Microsoft said in a blog post.
When analyzing a video, the tool “can provide this percentage in real-time on each frame as the video plays,” letting viewers see exactly which parts of a clip have been manipulated.
Microsoft said it works by “detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
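The per-frame scoring described above can be sketched in a few lines. This is a minimal illustration of the idea only: the classifier below is a hypothetical stand-in, not Microsoft's model, and the fixed score it returns is invented for the example.

```python
def frame_manipulation_score(frame):
    """Hypothetical stand-in classifier: returns the percentage chance
    (0-100) that a frame has been artificially manipulated."""
    # A real detector would look for blending boundaries and subtle
    # fading or greyscale artifacts; this placeholder returns a
    # fixed score so the pipeline shape is visible.
    return 12.5

def score_video(frames):
    """Yield (frame_index, score) pairs as frames arrive, mirroring
    the real-time, per-frame output Microsoft describes."""
    for i, frame in enumerate(frames):
        yield i, frame_manipulation_score(frame)

# Example: score a three-frame clip.
scores = list(score_video(["frame0", "frame1", "frame2"]))
```

Streaming the score frame by frame, rather than returning one number for the whole clip, is what lets a viewer pinpoint which stretch of the video was altered.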
Doctored videos involving politicians have gone viral in recent weeks. Last month, a video of House Speaker Nancy Pelosi designed to make her appear drunk was viewed more than 2 million times on Facebook before the social network was able to flag it.
Microsoft said the tech will be made available to “organizations involved in the democratic process, including news outlets and political campaigns.”
The company is also rolling out technology to help readers verify that content, such as news stories, hasn’t been tampered with. That tool will allow content producers to add digital identifiers to their work, which a browser extension installed by readers can then check.
The identifiers will let people know “with a high degree of accuracy that the content is authentic and that it hasn’t been changed.”
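The core of the identifier scheme is hash-and-verify: the producer derives a fingerprint from the content, and the reader's tool recomputes it to confirm nothing changed. The sketch below shows that idea using a plain SHA-256 hash; it is an assumption-laden simplification, since Microsoft's actual system involves certificates and signing infrastructure not shown here.

```python
import hashlib

def make_identifier(content: bytes) -> str:
    """Producer side: derive a digital identifier for the content.
    (Here, just a SHA-256 hash; the real system adds certificates.)"""
    return hashlib.sha256(content).hexdigest()

def verify(content: bytes, identifier: str) -> bool:
    """Reader side (e.g. a browser extension): recompute the hash
    and compare it against the published identifier."""
    return hashlib.sha256(content).hexdigest() == identifier

article = b"Original news story text."
tag = make_identifier(article)

assert verify(article, tag)             # untouched content checks out
assert not verify(article + b"!", tag)  # any alteration breaks the match
```

Because the hash changes completely if even one byte of the content does, a matching identifier tells the reader with high confidence that the story is the one the producer published.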