Science Fiction, the Prophecies of Artificial Intelligence and the Transformation of Liberal Democracy

Five years ago, I published a short story as part of a science and speculative fiction anthology titled “Mundos Alternos,” or Alternative Worlds, produced by Chile’s Science Fiction and Fantasy Writers Association (ALCIFF). The piece, originally written in Spanish and later translated into English and Hungarian, narrates the case of an experimental artificial intelligence (AI) elected president of a fictional country against flesh-and-blood candidates. It tells the story from the perspective of a sickly, retired policy-maker and lobbyist whose job has been taken by technological advancement and who has been tasked by his government with training and advising the newly appointed “leader,” since it does not seem to comprehend complex human and political dynamics.

Although the story aimed to explore the ethical challenges of politics and decision-making guided by cold, context-lacking data analyses, I have been approached more than once about how that scenario may no longer be futuristic or unimaginable, given the rapid improvements in AI technology. While there is a broad and unsettled debate on the actual impact of these new tools on societies, it is frequently noted that the introduction of AI into daily life has transformed human interactions, prospects, and work dynamics. From how individuals tackle tasks to how organizations address problems and even how institutions approach their decision-making processes, the impact of AI is intensely discussed and examined, often without complete information and in a hectic, ever-changing environment. As soon as a new tool is introduced, there is palpable pressure to leverage it and gain the competitive advantages that come with technological progress.

Given this immense pressure, international competition, and transformation, it is only natural that these technological advances trigger concerns over their effects on society and, on the matter of values, on liberal democracies. In Europe, the trend has instead been to regulate, targeting the issue of data management and processing and putting information ownership at the center of the discussion. At the same time, other countries tend to be looser in their interpretation of how new technologies may be applied in their societies. In the case of the 2024 American elections, the impact of AI has been assessed as anything from decisive to underwhelming and insignificant, according to TIME magazine.

How far are the fears of science fiction writers from reality? What can AI really do to transform democracies around the world? In recent years, as an international relations analyst and a science fiction writer, I have encountered countless promising, threatening, and misleading examples of the use of artificial intelligence, machine learning, and data in political processes. In this article, I attempt to connect some commonly shared “science fiction prophecies” and my work over the years with simple, real examples of AI changing societal dynamics. The EU and the values it promotes are at the center of the discussion of private data and information. Considering this, and new legal frameworks such as the Digital Services Act, I argue that leading the “AI in governance” debate is not only necessary for a much-needed democratic transformation, with easier citizen access and more efficient governance, but also an opportunity for the EU to establish the baseline for how AI should be implemented across democracies.

Although this piece's conclusions are highly hypothetical, they shed light on current debates and possible scenarios that democracy advocates, politicians, and reform-seekers need to consider when analyzing and assessing new technologies. Hopefully, this will inspire readers to identify and tackle upcoming challenges in their own environments.

Artificial Intelligence in Governance and Lawmaking

Artificial intelligence is a broad and often abstract concept that can easily lose its meaning and usefulness in a debate when used indiscriminately or as a rhetorical scapegoat. NASA and Encyclopedia Britannica propose that it can be defined as the capacity of a computer to perform tasks associated with human reasoning and creativity. For some, however, this definition falls short, as AI-related tools can now outperform human capabilities in many narrow areas, and people seem frightened by what they see AI achieve. From generating images, text, and even video to predicting procedural outcomes and performing complex diagnoses, such as early alerts for cancer patients, these tools have entered both the private and public spheres, improving dramatically in the last few years and profoundly impacting societies.

Still, there is no such thing as an artificial general intelligence that can exactly reproduce humanity at its core, and many argue that building one would actually be counterproductive, since AI works best when given hyper-specific goals. In that sense, the search for efficient policy and legal outputs has been proposed as a relevant potential use for AI in governance. Interestingly, despite the cautionary image imprinted on the EU population, Europeans favor incorporating these tools in search of efficiency, and even replacing some public policy-related jobs with AI, according to the yearly European Tech Insights survey by the Center for the Governance of Change. The same report showed back in 2019 that up to a quarter of EU citizens were even comfortable with letting AI make important decisions. A future with digitalized politicians is therefore not unthinkable. In recent years, AI politicians have run as candidates in the United Kingdom (AI Steve) and Denmark (the Synthetic Party), and other projects have emerged in New Zealand, with SAM AI now being officially featured on government websites.

These experiments seem to respond to a growing demand for technocratic decision-making, but also to a push to transform liberal democracies, which are often accused of being unable to deliver at the speed required in hectic times. AI could predict the impact of policies, generate optimal decision paths for economic development, according to organizations such as APEC, or even ensure that legal processes are conducted without the flaws of human behavior, according to institutions such as Harvard. However, there are also important risks associated with introducing AI into policymaking and the executive administration of power, especially if this is done without due analysis and comprehension of the technology's impact on society. Immediate risks include the extension of biases to policy outcomes: AI systems have been shown to amplify existing societal discrimination, and the incorporation of these tools into democratic societies is feared in part for that reason. The reduction of human participation and agency in democracies, as well as the possible erosion of values grounded in ethical debate, is also a concern. AI could assess problems from a limited and even misguided utilitarian perspective, generating even bigger issues without proper due diligence and without responsibility chains or accountability, since we cannot put a software tool in prison when it makes a mistake that affects a citizen's life. This is already being assessed in the introduction of AI agents in cases ranging from self-driving cars to autonomous weapons, without much philosophical or political agreement, and is broadly referred to as the problem of AI accountability.

In democracies, the use of these tools is not uncommon. The 2020 pandemic demonstrated both the ability of governments to harness their power to coordinate actions and the inefficiencies of some democracies in addressing such an emergency. Half a decade later, countries are developing more sophisticated methods to utilize new tools for managing, processing, and acting upon large databases in governance. Facial recognition has also become widespread in democratic countries and is no longer limited to security applications. In Brazil, the São Paulo government implemented these tools in its subway system to monitor commuters, an initiative that was also associated with advertising campaigns and was ultimately challenged by civil society. The United Kingdom has also been at the forefront of applying surveillance and facial recognition, with civil society groups such as Liberty Human Rights calling it “the most intrusive mass surveillance regime of any democratic country”. The EU claims to be significantly more cautious about the use of mass surveillance, and this appears to be supported by its regulatory approach. However, the need for increased security and urgent concerns related to health, violence, and effective border control seem to conflict with those ideals. Some of the most advanced facial recognition companies, such as France's IDEMIA, are based in the EU, and others, such as Mühlbauer, even cooperate with Chinese-linked companies like AISINO in their efforts to expand their market share.

There are other angles to this issue of data use. When it comes to policy delivery and policy-making, these tools offer immense potential. In addition to traditional surveys, which have long been used to gauge political perceptions, new AI-based analyses can predict, monitor, and model more sophisticated scenarios, on the basis of which politicians can adjust policies or adapt to their public and its expectations. However, especially in regions like Europe, which has implemented policies that heavily protect citizens’ data, there is an ongoing and unavoidable debate on the ethics of using individual and aggregated data in political processes, and on how to ensure that democratic principles are upheld as these tools are adopted.

Inauthentic Behavior, Elections, and the Death of the Internet

Users may not have direct access to these vast databases, but widely available tools, particularly in the realm of generative AI, allow almost anyone to shape the broader public debate. In recent years, discussions surrounding information manipulation, the impact of AI on elections, and the potential to mislead the public with AI tools have consistently featured in public debates about technology's effects on liberal democracies. This is primarily due to the reduction in costs, coupled with a remarkable improvement in output quality. In addition to generating large volumes of convincing text, these tools can replicate the voices, images, videos, and even behaviors of individuals participating in the public sphere. Consequently, disinformation, deepfakes, and other misleading forms of content have become accessible and menacing tools that threaten public trust, in a world where we must scrutinize everything we encounter online.

The potential of this in elections was a pressing concern in 2024, given the number of electoral processes and the hectic international events that struck the world that year. These tools can be used to fake a candidate’s face, flood platforms with iterated content, and easily bypass language and cultural barriers. Luckily, contrary to some fatalistic projections, the impact seems to have been limited, as mentioned earlier in this article.

In any case, the increasing production of synthetic content raises urgent questions about the role of humans in the consumption of information online. Much like a work of science fiction, the current discussion of the dead internet theory exposes a debate about the everyday uses of generative AI on the web and about how much of what we see and interact with online, this blog post included, could have been created by AI. The conspiracy goes as far as claiming that bots have actually generated most digital interactions for at least the last eight years. The reality, although much less drastic, still presents a serious challenge for democratic discourse. From automated YouTube channels peddling clickbait pseudoscience, through generated images of historical events, to LinkedIn posts produced by ChatGPT from bullet points or recycled content, and even academic papers increasingly resorting to these tools, much of the debate we read is at least assisted by artificial intelligence. It is therefore not unthinkable that democratic processes, institutional-level conversations, and even decision-making are influenced by AI, and that it is almost impossible to know whether the information we consume is purely human.

Conclusion: Unknown Challenges and Slow Adaptability

Discussions surrounding AI’s potential impact on democratic processes, particularly in governance, data management, and elections, reveal both the transformative power and the inherent risks of these technologies. Science fiction stories that once seemed like distant fantasies increasingly reflect our present reality as AI moves from speculation to practical implementation. From an EU point of view, these tools pose obvious challenges, but also offer answers, to a democratic system under pressure and in urgent need of delivering clear, effective solutions to skeptical and disillusioned citizens. The introduction of AI into governance therefore requires intelligent, realistic, and proactive action that safeguards intrinsic democratic values. A reactive approach is no longer enough, either to prevent the negative effects of AI or to seize the potential advantages of governance experimentation.

The integration of AI into governance, whether as tools to enhance efficiency or as candidates themselves, underscores the growing demand for more technocratic, data-driven decision-making. AI’s capacity to process vast amounts of data could empower politicians to make faster, more informed policy choices. However, since AI systems are only as effective as the data they are trained on, there remains a significant risk of reinforcing societal biases and inequalities, which could be aggravated if AI decision-making replaces human accountability.

Beyond governance, the growing use of AI for surveillance and large-scale data processing presents a critical dilemma for citizens' freedom and privacy. While some authoritarian regimes have exploited AI for mass surveillance and social control, democratic societies are also wrestling with facial recognition, personal data analytics, and predictive algorithms. The ethical implications of these technologies are substantial, challenging the balance between security and individual rights. 

AI’s role in elections and public discourse has introduced a new level of risk—specifically, the manipulation of information. The emergence of deepfakes, disinformation, and synthetic content generated by AI tools poses a direct threat to the integrity of democratic processes. As AI-generated content becomes increasingly sophisticated and harder to detect, it is crucial for policymakers, technologists, and citizens to devise strategies to combat disinformation and uphold the credibility of democratic institutions. 

In many ways, science fiction foresaw some of these challenges; today's reality echoes predictions that authors made years ago. Ultimately, however, it is the democratic community that must create solutions and identify the most effective uses of these technological innovations to address these challenges.

Sascha Hannig is a Chilean international analyst with experience as a financial reporter. She currently consults for several international organizations, is the Executive Secretary at the recently founded Centro de Análisis para la Democracia, Chile (CAD Chile), and supports the Institute for Global Governance Research (GGR) at Hitotsubashi University as a Research Assistant. Her main fields of study are China's global influence and the implications of science and technology for society. Hannig is a Ph.D. student at the same institution, as a JICA Scholar for the SDG Global Leaders program. In addition to her academic interests, she is a published fiction novelist with six novels to her name. Her most recent work, "Technology Transfer to Latin American Countries Drifting Away from the United States and China?", is scheduled to be published by Routledge on March 14, 2025.

You can find out more and follow Sascha on Instagram and LinkedIn, and check out her novels on GoodReads.

The opinions expressed in this blog are solely those of the author and do not reflect the views of EU-VALUES Network.
