October 15, 2024

Deepfakes Drive Concern Over Government Security and Election Interference

By: John Tuttle

We’ve all seen deepfakes at this point. Whether it’s the feverish re-envisioning of The Lord of the Rings as Wes Anderson might have directed it or Joe Biden and Donald Trump’s viral cover of “Video Killed the Radio Star,” we have witnessed the entertainment value this technology has to offer. Similar effects, polished to a higher level of detail, brought a youthful Luke Skywalker to the screen in series like The Mandalorian and The Book of Boba Fett.

At the same time, this powerful tool can be used to destroy reputations and wreck lives. Deepfake porn has grown in popularity over the past few years, producing nonconsensual graphic images (some of a vengeful nature) and smear campaigns that humiliated many women and cost some of them jobs and educational opportunities. Some women have also said that knowing someone deliberately used their likeness to create porn makes them look at the people around them differently, wondering who would do such a thing.

Deepfakes alter our perception of reality by making us question (or fail to question) the authenticity of an image and, in the case of the porn victims, by forcing us to ask who is responsible for the abuse and what their motives are.

Other deepfakers’ motives deviate from revenge and sexual objectification, focusing instead on altering the public image of figures who hold high office. The product could be a doctored photo or video they hope resonates with their audience, received in humor, in anger, or, worst of all, in good faith. A deepfaked video is most dangerous precisely when it is convincing.

In September, California governor Gavin Newsom gave his approval to three new AI-focused laws. Another bill he signed makes it a criminal act to create sexually explicit deepfake content depicting a real person, and still other California bills to cross Newsom’s desk are geared toward eliminating election interference by deepfake content.

Meanwhile, the public wants to know where the presidential nominees stand on AI policy. Trump favors limited government restriction of the industry, while Harris expresses concern over the damage AI can do when poorly programmed or maliciously used.

During his presidency, Trump signed executive orders directing the government to research, develop, and use AI tech. J.D. Vance, Trump’s running mate, says he shares concerns over the harmful impact artificial intelligence can have, although he thinks that fear is breeding “overregulation” of these tools and their developers.

Harris, for one, has good reason to see the far-reaching applications of AI-doctored images as harmful. A recent article from the Washington Post delves into the detection tools used to seek out deepfake pictures and videos. One infamous example of doctored footage was an altered video clip of Harris that changed the Vice President’s speech pattern to give the illusion of her hemming and hawing her way through her sentences.

While White House and European Union officials continue to press the tech industry for further development of detection methods, most AI detection tools are imperfect, like their human makers. These tools, per a 2023 analysis cited by the Post, range from 25% to 82% accuracy. Detectors are also less accurate when judging images of dark-skinned people. And even though detectors run a series of checking algorithms, these checkpoints can sometimes be bypassed, since targeted software edits can mislead each check in turn.
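To make that last point concrete, the sketch below models a detector as a chain of simple pass/fail checks. Every name, score, and threshold here is hypothetical, invented purely for illustration; it does not reflect the design of any real detection tool. The point is only that when authenticity rests on clearing a fixed set of checkpoints, a forger who knows the checkpoints can polish a fake until every score clears its cutoff.

```python
# Hypothetical illustration: a deepfake "detector" as a chain of checks.
# All check names, scores, and thresholds are invented for illustration.

def noise_pattern_check(scores: dict) -> bool:
    # Passes if the image's (hypothetical) sensor-noise score looks natural.
    return scores["noise"] >= 0.5

def facial_geometry_check(scores: dict) -> bool:
    # Passes if facial landmark geometry looks internally consistent.
    return scores["geometry"] >= 0.5

def compression_artifact_check(scores: dict) -> bool:
    # Passes if re-encoding artifacts fall within a plausible range.
    return scores["compression"] >= 0.5

CHECKS = [noise_pattern_check, facial_geometry_check, compression_artifact_check]

def looks_authentic(scores: dict) -> bool:
    # An image is accepted only if every checkpoint passes -- which means
    # an editor who learns the checkpoints can tweak a fake until each
    # score clears its cutoff.
    return all(check(scores) for check in CHECKS)

# A crude fake trips the geometry check and is caught...
crude_fake = {"noise": 0.9, "geometry": 0.2, "compression": 0.8}
print(looks_authentic(crude_fake))     # False

# ...but targeted edits that nudge each score past its cutoff slip through.
polished_fake = {"noise": 0.6, "geometry": 0.55, "compression": 0.7}
print(looks_authentic(polished_fake))  # True
```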

This uncertainty in detecting AI alterations only escalates the public’s growing fears, the very thing J.D. Vance wishes to avoid. Fear itself isn’t desirable, but our country, and our world, are trying to figure out whether those fears are warranted.

To use a poetic, scriptural image, deepfakers are “painters of lies” (Sirach 51:5). They distort reality into an alluring canvas that reinforces our preconceived perspectives or confirms our deepest fears. Since anyone can get verified these days, how hard would it be for someone to set up a verified X account posing as a government leader and upload deepfaked content declaring war on another country? Deepfake content is a possible threat to individuals’ wellbeing, election integrity, and government security.

Hany Farid, a professor at UC Berkeley, says deepfakes are “making it so that we don’t trust or believe anything or anybody.” That’s especially true of online content. But that attitude is how the American people have judged the daily news cycle for a while now. As it is, much of the public has grown dubious about the legitimacy of any visual content they find on social media. Perhaps this is for the better. Perhaps it will force each of us to look deeper, do some research, and get to the bottom of things, to the truth.