In recent years, deepfakes have become a common subject in political discussions, often seen as the next major threat to democratic societies because of their potential to worsen the information disorder. Their highly realistic nature has raised concerns that they could serve as a powerful tool for new and more effective disinformation, both in terms of credibility and ease of creation. However, reality has proven more complicated than many experts initially anticipated. Synthetic media have indeed entered political communication, but their uses go far beyond simple disinformation. This does not mean that deepfakes are not a threat. On the contrary, focusing solely on disinformation risks obscuring the deeper, systemic issues caused by deepfakes, as well as the opportunities this technology may offer. Deepfakes have the potential to reshape how information is perceived overall, acting as an epistemic threat by eroding trust in information and making it harder to gain new knowledge.
Before exploring the broader implications of political deepfakes, the persistence of deepfakes as a political threat raises an initial question: what exactly is a deepfake? The origin of the term is easy to trace: it was first used in relation to synthetic media in 2017 by a Reddit user of that name. The definition of what counts as a deepfake, however, has broadened over time, leading to several competing interpretations.
Even at the level of dictionary definitions, subtle but significant differences become apparent, illustrating how interpretations co-exist across cultural contexts. The Merriam-Webster dictionary (American English) defines a deepfake as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said,” while the Oxford English Dictionary (British English) provides a slightly different framing, defining a deepfake as “Any of various media, esp. a video, that has been digitally manipulated to replace one person’s likeness with that of another, often used maliciously to show someone doing something that he or she did not do.”
While both definitions recognize the range of formats and uses that deepfakes can assume, the latter adds a moral dimension by explicitly linking the technology to malicious intent. This difference in interpretation is also reflected in the way organizations and governments have addressed deepfakes through communications and legislation.
In the United States, for example, the TAKE IT DOWN Act, published in 2025, defines deepfakes as “digital forgery,” specifically as “an intimate visual depiction of an identifiable individual created or altered using AI or other technological means,” with an identifiable individual defined as someone “who appears in whole or in part in an intimate visual depiction” and “whose face, likeness, or other distinguishing characteristic […] is displayed in connection with such intimate visual depiction.” A similar approach is reflected in the DEEPFAKE Act, introduced in 2023, which refers to deepfakes as a “technological false personation record.” On the other hand, the PRC’s latest legislation addressing deepfake content online broadly refers to “AI-generated synthetic content,” including in its definition “text, images, audio, video, virtual scenes, or other information that is generated or synthesized using AI technology.”
These definitions, in a sense, represent two extremes: the former adopts a narrow view of deepfakes, focusing on impersonation of real individuals and malicious intent, while the latter covers almost all forms of AI-generated media, regardless of the subject involved or the purpose behind their creation and sharing. Importantly, different definitions lead to different implementations of the law. In particular, the broadness of the legal definition determines the amount of content that must be moderated.
This definitional divergence is also evident across Europe. The European Union, for example, defines deepfakes in the AI Act as “AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful,” while the UK’s approach resembles a more nuanced version of the US model, focusing on impersonation and malicious intent, yet recognizing that intent does not necessarily have to be malevolent. In this regard, academic literature has increasingly acknowledged the dual-use nature of deepfake technology in the political sphere, encompassing not only disinformation but also satire and forms of political expression protected as free speech. Accordingly, in this piece, we adopt the broader interpretation of deepfakes, acknowledging that this is an informed, but ultimately partisan, choice.
Instead of being limited to a narrow set of malicious practices, deepfakes have increasingly been used in various political contexts, sometimes in ways that challenge the idea that their main purpose is disinformation. In recent years, especially after the popularization of and easier access to generative AI technologies, several examples of deepfakes used for political purposes have emerged, both within and outside electoral processes. The 2024 election cycle offers several notable examples. In India’s national election, deepfakes were used to create disinformation by fabricating scandals involving Bollywood stars. Similarly, in the 2024 US presidential election, Donald Trump used AI-generated images to defame his Democratic counterpart. A well-known example of deepfake disinformation during wartime is the early deepfake video of Ukrainian President Volodymyr Zelensky claiming Ukraine’s surrender, which circulated as wartime propaganda.
At the same time, we can also observe more positive applications of the technology. During the 2024 Indian elections, for example, politicians used deepfake technology to expand the reach of their message by creating videos of themselves speaking in different languages to connect with diverse audiences. Deepfakes were also used to enhance candidates’ public images and communicate more effectively with their voting bases during the last US elections, for instance by impersonating public figures (and their fandoms) voicing support for candidate D.J. Trump. Lastly, in conflict zones, AI-generated synthetic media can be used to spread narratives, and related content, that would otherwise be censored by social media platforms’ policies, as reported in the case of Gaza-related activism, where graphic depictions of events in the war zone are often blocked by platforms as too explicit.
Compared to expectations less than a decade ago, the increased accessibility and versatility of deepfake creation tools have dramatically expanded the potential uses of this technology. While this growth has not prevented harmful applications such as disinformation, it has also enabled uses focused on increasing accessibility and bypassing perceived censorship on sensitive political issues. Yet even if we recognize that deepfakes are not inherently disinformative or malicious, is the widespread proliferation of this type of content ultimately good for the information environment?
Deepfakes are often presented as a new, distinct, and potentially more perilous evolution of existing disinformation practices. This argument rests on the fact that deepfakes enable the circulation of highly realistic content, depicting people, places, or events, shared with the explicit intent of misleading audiences. This concern mainly stems from the assumption that the disinformation potential of deepfakes relies on what has been defined as the realism heuristic, a cognitive shortcut that can be summarized as “seeing is believing.” From this perspective, deepfake technology boosts disinformation by creating seemingly visual proof of events that never happened. In online settings, especially on social media platforms, users rarely process information carefully, instead relying on heuristic cues like visual realism. As a result, deepfake-based disinformation is seen as especially disruptive.
While this argument is theoretically convincing, recent empirical studies have started to complicate the narrative. These findings suggest that although deepfakes in different formats (e.g., image, video, or audio) can indeed be disinformative, they are not necessarily more effective at misleading audiences than other, non-deepfake forms of disinformation (e.g., written news or out-of-context images), showing that visual realism alone may not be enough to make disinformation more credible.
A second hypothesis regarding the risks posed by deepfakes shifts focus from viewing them as isolated pieces of disinformation to seeing them as a technology capable of causing systemic effects. Because of their potential to distort the realism heuristic, deepfakes may weaken the foundation on which citizens judge the credibility of information overall. For this reason, Fallis describes deepfakes as an epistemic threat: in a world where anything can be easily forged—including visual evidence, which has traditionally been seen as one of the most trustworthy forms of proof because it closely mirrors direct personal observation—people’s ability to gain new knowledge about the world becomes significantly limited. If visual proof itself becomes questionable, the very basis for trusting information is compromised.
When applied to social media platforms, this epistemic vulnerability appears as a “spillover effect” of deepfakes on authentic news. In other words, just the awareness that deepfakes exist can decrease trust not only in manipulated content but also in genuine information. Early research seems to support this idea: both experimental and field studies have found that exposure to deepfake content leads to lower levels of trust and perceived news credibility, even when participants are evaluating non-synthetic news.
Finally, deepfakes might also increase political polarization or even lead to radicalization: if “seeing is no longer believing” because visual evidence can be forged, deepfakes become a handy excuse for selectively accepting information that matches people’s existing beliefs. Research already shows that online users tend to prefer content that confirms their previous attitudes; in this context, deepfake technology can help create tailored content that reinforces these beliefs, while also allowing users to dismiss opposing information as fake or AI-generated.
The greatest threat from deepfakes is not just their ability to produce more convincing disinformation, but their potential to erode our trust in information altogether. By damaging trust, deepfakes can increase skepticism, polarization, and selective belief. At the same time, the spread of deepfakes reveals existing vulnerabilities in our societies. Media credibility has steadily declined over recent decades, along with trust in traditional institutions. The proliferation of deepfakes should not make us forget that even before generative AI, verifying information was not always straightforward; credibility often depended on the perceived legitimacy of the source. In this context, recognizing these underlying issues, the challenges posed by deepfakes could also act as a catalyst for restoring legitimacy and trust in information, rather than merely accelerating their decline.