The Election Meets AI Deception

Roughly one year ago, my timeline was bombarded with images of former President Donald Trump being arrested, tackled by the police, and donning an orange jumpsuit for his mugshot. At the time, the public was anticipating the announcement of his arrest and the release of his real mugshot, so a few Twitter users decided to speed up the process and give the people what they wanted via artificial intelligence (AI).

When I first encountered the mugshot image while casually scrolling, my split-second reaction was to take it at face value. I saw no immediate signs of AI-generation flaws (distorted hands, extra limbs, etc.), and I figured his likeness was captured well enough for the image to be real. Still, instinct told me to do a double take and confirm with a reputable source. After about 15 seconds of searching, sure enough, the images were confirmed to be fake. Even if only for a moment, it was one of the first times amid the influx of AI-generated images that I was duped into immediately trusting an image's validity.

For that whole week, these Trump AI images were practically unavoidable on social media, and as I opened the comments sections, users of varying demographics seemed to be buying into them, whether out of discontent or satisfaction. Perhaps it was a mixture of anticipation, the dramatic spectacle of it all, and the high-profile subject that made these images go viral for their believability. This incident is not necessarily unique; there are manipulated images decades old whose original forms remain hidden from the public. That said, the rise of AI complicates things to an unprecedented degree, and even that may be an understatement.

Widely accessible to the public, technically simple to use, and capable of something adjacent to photorealism, AI-generated images brew the perfect storm for disinformation. Situate this phenomenon in the context of the 2024 US presidential election, a political event fundamentally tied to notions of obscured truths and propaganda, and the consequences are potentially catastrophic.

Though the creator of the aforementioned images, Eliot Higgins, described them as tinged with humor and satire, and originally posted them in a thread disclosing their AI (Midjourney) origins, the episode still exemplifies how quickly context can be dismissed once it is no longer directly tied to the post itself. This becomes an even bigger issue when you consider how many images have been made with malicious intent. The BBC reported that many images featuring President Joe Biden and Donald Trump are being created and shared by conservatives and members of the Republican Party, which is significant because the content largely targets marginalized populations, like the Black community, whose vote has historically been predominantly Democratic.

As with much election-related misinformation, the most viral of these targeted images were posted on Facebook in March 2024. One depicts Trump at the center of a large group of Black individuals, his arm around their shoulders, all smiling with the smoothed skin and dulled features that are trademarks of AI generation. Another displays a similar message, except here he is seated among multiple Black men on a porch. This image is particularly striking because the setting strategically alludes to outreach the former president might have done in the physical community. If taken as real photographs, these images function, at the very least, to instill a false sense of support and to construct narratives that fail to represent reality. They also reiterate how the Black community and Black bodies are continually positioned as pawns employed to signify virtue in others, yet another example of Blackness being used to adorn and propel a white counterpart.

On social media specifically, where we are met with vast amounts of unverified information passed along by word of mouth, it becomes increasingly vital to question user intent when shared information is supplemented by AI visual and audio aids. In the coming months we must be vigilant about all images, audio, and even video related to the election, those that feature politicians and those that do not. Practices like consulting fact-checking websites such as FactCheck.org or APNews.com, employing AI-detection software, reverse-image searching to find a picture's source, cross-checking reports across outlets, and even slowing down your consumption of news headlines and their featured images can all be very helpful.

Images communicate to the subconscious in ways that are not always readily apparent, so it is imperative that we take steps toward explicitly marking AI images for what they truly are. We have seen this slowly manifest through X's Community Notes feature, where users can submit context for misleading posts. Publications, however, also have a responsibility not to republish such images unless they are clearly labeled as fake. Small changes like these can mitigate the images' spread, making the difference between an informed and a misinformed voter.

Gabrielle Jones

Gabrielle Jones is a junior studying Media, Culture, and Communication. She is passionate about exploring the ways media can be used as a catalyst for social change and as an outlet for creativity. Always wrapped up in new music, movies, or books, she enjoys discovering and discussing compelling stories. Some of her interests include going to concerts and seeing films at local theaters around the city.
