
Artificial intelligence: Risks for science and communication
Defining priorities and focusing on the human being are essential to guide better research with generative artificial intelligence

The world is set to face a new era of uncertainty and interconnected risks in the coming years, according to the World Economic Forum’s Global Risks Report 2024 (GRR24).
This 19th edition of the report, which consulted 1,400 business leaders across 113 countries, highlights a significant shift compared to previous surveys.
In the document, disinformation emerges as a major threat to humanity, followed by climate change and the unregulated use of generative artificial intelligence (GenAI).
For the first time, “false information and disinformation” ranks highest among the top 10 global risks over the next two years, driven by the rise of new GenAI tools.
These advancements have made it easier to create “deepfake” content—fake videos, images, and audio with a high degree of realism—posing significant risks to communication and science in general.
As South Korean philosopher Byung-Chul Han observes, “at a certain point, however, globalized production is no longer productive, but destructive; information is no longer informative, but deformative; communication is no longer communicative, but merely cumulative.”
In what I’ve been calling the ‘age of the prompt,’ we must maintain some form of control over the hallucinatory distractions created by generative artificial intelligence.
Without this, we risk losing our grip on fact and reality.
Beyond the results of the presidential elections held around the globe in 2024, it is likely that “perceptions of reality will become even more polarized, influencing public discourse on issues ranging from public health to the environment and social justice,” as stated in another excerpt from the World Economic Forum’s Report.
Global Concern
Ranked as the leading global concern, disinformation is not confined to the day-to-day activities of politics.
The tactics used to distort the long history of civilization by telling only one side of the story, whether by denying vaccines or asserting that the Earth is flat, are as dangerous as the narratives that now threaten democracies worldwide.
In this context, the corporations dominating the information technology (Big Tech) sector—companies founded in the 1970s, such as Microsoft (1975) and Apple (1976), followed by Amazon (1994), Google (1998), and Meta (2004, originally Facebook)—hold significant power.
The asymmetry in power is evident in the coordinated discourse between various social actors and media outlets that echo the corporations’ positions, as historian Brendan Mackie from the University of California, Berkeley, explains.
According to him, Big Tech employs the same colonization strategies as the West India companies, the commercial organizations established in the 17th century to exploit the African and American continents.
Because we voluntarily immerse ourselves in digital networks, these companies keep us continuously engaged, moment to moment, through likes and consumption.
However, in a fragmented global landscape, technologies are unlikely to contain their most dangerous capabilities, as the World Economic Forum’s Report highlights.
Urgent Regulation
With the rise of generative AI, a wide range of state and non-state actors gains access to a superhuman realm: the knowledge to conceptualize and develop new tools without the necessary ethical involvement of humans.
This is why processes involving guidelines, press councils, self-regulation, or regulation (through government laws) are essential.
We must keep in mind that we are discussing platforms like ChatGPT and DALL-E (from OpenAI), Bard (from Alphabet-Google), and Midjourney, among others that have yet to emerge. Their owners hold substantial power, and the rules governing their operation or use remain unclear.
This fragility is largely due to the changing dynamics between people, the information age, and the internet, which have fostered more superficial forms of interaction.
In this environment, the virtual world has become more significant than the real one because it is more comfortable, as explained by American author Michiko Kakutani in The Death of Truth.
This represents a perilous technological leap, reminiscent of Black Mirror, the science fiction television series that explores the unintended consequences of new technologies. It will further blur our ability to distinguish between what is real and what has been dreamed up by GenAI.
Pollyana Ferrari is a lecturer in Communication and Education at the Pontifical Catholic University of São Paulo (PUC-SP), where she also teaches in the Graduate Studies Program in Intelligence Technologies and Digital Design (TIDD). She holds a PhD in Social Communication from the School of Communications and Arts at the University of São Paulo (ECA-USP). She is a journalist and author of 11 books on Digital Communication.
*
This article may be republished online under the CC-BY-NC-ND Creative Commons license.
The text must not be edited and the author(s) and source (Science Arena) must be credited.