- Twitter has introduced a preliminary version of a policy on deepfakes
- The platform proposed labeling posts found to contain manipulated media
- It would also link to media outlets that have debunked the media
Twitter introduced its first ever policy on deepfakes – media manipulated by AI to alter its meaning or message. File photo
Twitter unveiled a plan for addressing deepfake videos and other manipulated media that skeptics fear could skew elections.
Twitter’s new proposal, laid out in a blog post on Monday, would place a notice next to tweets found to be sharing ‘synthetic or manipulated media.’ The labels are designed to warn people before they like or share those posts.
Additionally, it is considering adding to the flagged post a link to a news story explaining why various sources believe the media is synthetic or manipulated.
The company also said it might remove tweets with such media if they were misleading and could threaten physical safety or lead to other serious harm.
It proposed defining synthetic and manipulated media as any photo, audio or video that has been ‘significantly altered or fabricated in a way that intends to mislead people or changes its original meaning.’
This would cover both AI-generated deepfakes and more crudely, manually doctored ‘shallowfakes.’
Last year, Twitter banned deepfakes that digitally manipulate an individual’s face onto another person’s nude body – a widely condemned style of deepfake that has been used to superimpose celebrities onto porn videos.
Deepfakes are so named because they are made using deep learning, a form of artificial intelligence, to create fake videos of a target individual.
They are made by feeding a computer an algorithm, or set of instructions, as well as lots of images and audio of the target person.
The computer program then learns how to mimic the person’s facial expressions, mannerisms, voice and inflections.
With enough video and audio of someone, a fake video of that person can be combined with fake audio to make them appear to say anything.
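The face-swapping technique described above typically relies on a shared encoder paired with one decoder per person. The sketch below is purely illustrative: the dimensions, variable names, and linear maps are invented for demonstration, and real deepfake systems use deep convolutional networks trained on thousands of video frames rather than random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (invented for illustration; real models operate on full images).
FACE_DIM = 64    # a flattened "face image"
LATENT_DIM = 8   # shared compressed representation of expression/pose

# One shared encoder learns features common to both faces;
# each person gets their own decoder that reconstructs their face.
encoder = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.1
decoder_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1  # person A
decoder_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1  # person B

def encode(face):
    return encoder @ face

def decode(latent, decoder):
    return decoder @ latent

# The swap: encode a frame of person A, but decode with person B's
# decoder, yielding (after training) "B's face with A's expression."
frame_of_a = rng.standard_normal(FACE_DIM)
latent = encode(frame_of_a)
swapped = decode(latent, decoder_b)

print(swapped.shape)  # a face-sized output: (64,)
```

During training, each decoder is optimized to reconstruct its own person's faces from the shared latent code; the swap only works convincingly because the encoder is forced to capture person-independent features such as expression and head pose.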
In the run-up to the U.S. presidential election in November 2020, social platforms have been under pressure to tackle the threat of manipulated media, including deepfakes.
While there has not been a well-crafted deepfake video with major political consequences in the United States, the potential for manipulated video to cause turmoil was demonstrated in May by a clip of House Speaker Nancy Pelosi, manually slowed down to make her speech seem slurred.
After the Pelosi video, Facebook CEO Mark Zuckerberg was portrayed in a spoof video on Instagram in which he appears to say ‘whoever controls the data, controls the future.’ Facebook, which owns Instagram, did not take down the video.
In July, U.S. House of Representatives Intelligence Committee Chairman Adam Schiff wrote to the CEOs of Facebook, Twitter and Alphabet Inc’s Google asking for the companies’ plans to handle the threat of deepfake images and videos ahead of the 2020 presidential election.