Access and Feeds

Machine Deception: Seeing May No Longer be Believing — Deepfakes

By Dick Weisinger

Deepfakes are video and audio clips that have been altered to make it appear that real people said or did things that they never said or did. Machine learning and video and audio editing software have become so sophisticated that distinguishing a deepfake from the real thing has become extremely difficult.

The implications of deepfakes for the spread of disinformation are huge. Used unethically, deepfakes can easily be posted and spread quickly across social media to misinform, confuse, and spread propaganda.

Matthew Dixon, assistant professor at the Illinois Institute of Technology’s Stuart School of Business, said that “once a person has been digitally cloned it is possible to then generate fake video footage of them saying anything, including speaking words of malicious propaganda on social media. The average social-media follower would be unable to discern that the video was fake.”

Unfortunately, the technology for creating deepfakes is advancing at a much quicker pace than the technology for identifying them. A blog post on AIMultiple identifies at least 20 companies devoted to building products that detect deepfakes.

William Galston, senior fellow at the Brookings Institution, wrote that “if AI is reaching the point where it will be virtually impossible to detect audio and video representations of people saying things they never said (and even doing things they never did), seeing will no longer be believing, and we will have to decide for ourselves—without reliable evidence—whom or what to believe.”

