The 2024 presidential election is quickly approaching, and AI’s growing influence has made it unlike any other. Given widespread concerns about the technology, many people wonder how AI will impact the race.
The Pew Research Center found that “57% of U.S. adults — including nearly identical shares of Republicans and Democrats — say they are extremely or very concerned that people or organizations seeking to influence the election will use AI to create and distribute fake or misleading information.”
So far, people have seen AI used in fake celebrity endorsements, misleading images, public opinion polling and more. Companies are making efforts to prevent the spread of misinformation, but it is more important than ever to be cautious about the content you see online.
One example of AI being used to spread misinformation involves Taylor Swift’s presidential endorsement. On Sept. 11, Swift posted on Instagram announcing her support for Vice President Kamala Harris.
“I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation,” Swift said in the Instagram post.
Originally, the image was generated in December 2023 to gain support for President Joe Biden. AI was then used to create an intentionally misleading derivative, which Donald Trump eventually shared on Truth Social. Millions of people, Trump among them, appeared to believe the endorsement was real. He said he did not generate the images and called AI “very dangerous.” After dispelling the rumors, Swift shared a link for people to register to vote, which over 400,000 people clicked.
As for companies working to combat misinformation, X states in its policy on synthetic and manipulated media: “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”
Even though X has a policy on misleading media, it appears to be one of the top social media outlets where such content is produced and shared, a problem exacerbated by its owner, Elon Musk.
Earlier in September, Musk posted a fake, AI-generated image of Harris dressed in a red communist uniform, with a caption reading, “Kamala vows to be a communist dictator on day one. Can you believe she wears this outfit!?” The image received nearly 60 million views.
Though many younger people could see through the obvious falsehood, many older people are not accustomed to AI and are vulnerable to believing the misinformation. AI will only continue to improve, and in the future it may become harder even for generations that grew up with it to discern what is true and what is false online.
Though AI has been used for misinformation, it has also proven to be a useful tool for polling public opinion. Polling matters because it shows candidates which issues voters care about most and shapes the topics they address, especially given how quickly information and the news cycle move. Overall, this can help voters make a more informed choice based on what is important to them.
AI chatbot tools like “Engage” help researchers interview thousands of potential voters and quickly analyze their answers to provide insight into their opinions. According to the Missouri Independent, female voters in particular have reported feeling more comfortable sharing their true opinions with these AI chatbots.
Additionally, some companies are gathering data from public sources like social media and voting records using “sentiment analysis” AI, which weighs both tone and context to extract meaning from raw text. According to Bruce Schneier, a security technologist at the Harvard Kennedy School, “All of the work in polling is turning the answers that humans give into usable data.”
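For readers curious about what sentiment analysis actually looks like, here is a minimal sketch using NLTK’s freely available VADER analyzer. This is an illustration, not the method pollsters use; commercial systems are proprietary and far more sophisticated, and the sample posts below are hypothetical.

```python
# Illustrative only: toy sentiment analysis with NLTK's VADER analyzer.
# Real polling firms use proprietary, much more sophisticated models.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the scoring lexicon

analyzer = SentimentIntensityAnalyzer()

# Hypothetical voter comments, standing in for scraped social media posts.
posts = [
    "I love the candidate's plan for student loans!",
    "Another empty promise. I'm so tired of this.",
]

for post in posts:
    scores = analyzer.polarity_scores(post)
    # 'compound' ranges from -1 (most negative) to +1 (most positive)
    print(f"{scores['compound']:+.2f}  {post}")
```

Aggregated over thousands of posts, scores like these are what let researchers turn raw public commentary into the “usable data” Schneier describes.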
Both of these methods can be more cost-effective and much faster than traditional polling, but they are not always the most accurate. Using AI for polling can vastly increase the number of respondents, but as with any polling method, the data will inevitably have flaws due to people misleading pollsters, groups being left out, and comparatively low response rates.
OpenAI, the maker of ChatGPT, has also spoken up about misuse of its tools surrounding the upcoming 2024 election, endorsing the “Protect Elections from Deceptive AI Act,” a bipartisan bill introduced in the U.S. Senate.
According to the U.S. Congress, “The bill generally prohibits individuals, political committees, and other entities from knowingly distributing materially deceptive AI-generated audio or visual media of a federal candidate, or in carrying out a federal election activity, with the intent to (1) influence an election, or (2) solicit funds.”
Though it is unlikely the bill will be enacted in time for this election, lawmakers and companies like OpenAI are working toward preventing abuse, providing transparency around AI and improving access to accurate information. If the act passes, it would pressure other companies to comply and help prevent AI-generated misinformation from spreading in future elections.
Most of these bills and policies are still in the early stages and will take time to become law. Until then, policies regarding AI-driven misinformation remain vague and loosely enforced. That means it is up to voters themselves to fact-check their sources, scrutinize the information behind them, and avoid being easily swayed by what they see and hear.
Do research, stay informed, understand each candidate’s plans and make the best personal decision when voting in the 2024 presidential election.