Experts On Twitter’s New Deepfake Policy: Alert, But Don’t Remove, Manipulated Content

Yesterday, Twitter shared a draft of its new deepfake policy and opened it up for public input before it goes live. The draft says that when Twitter identifies synthetic or manipulated media that intentionally tries to mislead or confuse users, it will notify users, warn them before they share or like Tweets containing such content, and add a link to inform and educate.

Commenting on the announcement is the following security expert:

Paul Bischoff, Privacy Advocate
November 12, 2019 2:31 pm

Twitter’s policy proposal is a step in the right direction, and it will be interesting to see how the company responds to feedback from the survey. However, a couple of big questions loom in my mind. The first is: how will deepfakes be detected? Will Twitter rely solely on people flagging deepfakes, will it use some sort of algorithm to detect altered voice and video, or will it use a combination of both? None of these would be a perfect solution, so some people will undoubtedly come across deepfakes that are only flagged or removed later. That brings me to my next question: what is an acceptable number of views a deepfake can get before it’s removed or properly labeled? Twitter will have to decide on thresholds to limit the reach of deepfakes (e.g. how convincing is it? How malicious is it? How many people have flagged it?), and the chances of everyone agreeing on where to draw the lines are slim. Even though Twitter is seeking user input, I don’t think there’s a solution that will satisfy everyone.
