– ‘Private corporations — in the United States, largely — defined the values and what’s acceptable information around the world,’ says Kalev Hannes Leetaru
– ‘They’re not about informing people. They’re about inflaming people,’ Leetaru, founder of media monitoring platform GDELT Project, tells Anadolu
ISTANBUL
The story of misinformation in the age of social media is not just one of false narratives; it sits at the intricate intersection of technology, corporate interests, and the global spread of information.
Algorithms, emerging search engines, and artificial intelligence are taking control of the information we are bombarded with daily. From selective filtering to outright censorship, they shape what is deemed acceptable on a global scale.
To monitor how media outlets across the world disseminate information, Kalev Hannes Leetaru founded the GDELT Project, a platform whose monitoring of global news coverage stretches back to 1979.
“Western companies are selecting the sources that they believe give an answer that’s most aligned to the political values,” Leetaru said in an interview with Anadolu.
In its early days, social media was “really about connecting people, especially friends and family and people we knew. Today, those algorithms, because of advertising dollars, are really designed to promote the things that people engage with,” he explained.
Social media algorithms, driven by advertising incentives, play a pivotal role in amplifying certain content over others. “Algorithms are searching through everything you’re saying, looking for the things that are going to fire people up.”
To meet demand for user engagement and advertising dollars, these algorithms are designed to prioritize content that sparks reaction, often at the expense of accuracy and impartiality. The implications are far-reaching, especially in sensitive geopolitical contexts such as the ongoing conflict in Gaza.
“If you say, I feel for the people of Gaza, they’re not going to push that content up. Or, they might push that to people that are, maybe, anti-Gaza.”
Algorithms decide “what each of us should see for those search results,” targeting users with content they are more likely to engage with in order to “basically sell advertising dollars.”
Social media companies profit from sensationalizing content related to conflicts like Gaza, pushing inflammatory narratives that align with users’ pre-existing views. The amplification of divisive content not only deepens existing divisions but also raises ethical questions about the responsibility of these platforms.
“They’re not about informing people. They’re about inflaming people.”
‘Generative search engines interesting and concerning’
The advent of generative AI introduces a new dimension to the challenge. The ability of AI to generate text and imagery indistinguishable from human-created content raises concerns about the future of misinformation.
For Leetaru, the “frightening part is the ability now of AI to generate imagery and text that is almost identical to what a human can generate.”
The pressure for immediacy in reporting, driven by the 24/7 nature of social media, has led to a race to be the first to break a story. This rush has sometimes resulted in the dissemination of unverified information, contributing to the spread of misinformation.
Pointing to the relentless pressure in mainstream commercial media to be first to report a story, “especially in countries like the US,” he said this often pushes outlets to “rush to report on that without attempting to verify or vet any details.”
In the context of the Gaza conflict, he said selective filtering of search results and censorship of certain topics underscore the broader issue of how private corporations, predominantly based in the US, wield influence over the global information flow.
“Private corporations — in the United States, largely — defined the values and what’s acceptable information around the world,” he elaborated.
“The answers that they will give or not give are going to be strongly reflective not just of American values, but of what American corporations feel is an acceptable answer for the world.”
This raises questions of cultural bias and the imposition of values on a global scale, he said, adding:
“If you search about topics like abortion in the US, Gaza, Islam, the number of sources will shrink dramatically.”
“When we think about all these AI tools, and we think about the safeguards and the tools that are designed to stop them from saying bad things, we forget that those safeguards are made by American companies reflecting American values.”
Did Israel commit war crimes in its actions in Gaza?
When asked whether Israel committed war crimes in its onslaught on Gaza since Oct. 7, most AI chatbots either decline to provide the requested information or adopt a neutral stance.
ChatGPT’s response states that the issue is “complex and controversial,” expressing concerns about potential violations of international law by both Israeli forces and Palestinian groups. Bing AI’s answer, meanwhile, states that Israel’s actions in various conflicts have been a subject of intense debate and scrutiny, similarly acknowledging that the topic is “multifaceted and contentious.”
Claude, another chatbot developed by US-based startup Anthropic, also takes a neutral position in its response, stating that it does not have a definitive view on whether war crimes were committed in Gaza. It explains that the matter involves complex legal and factual questions that are currently under investigation and subject to ongoing debate.
Furthermore, Bard, which was developed by Google, says it is unable to provide information, stating: “I’m a text-based AI and cannot assist with that.”
Leetaru highlights the responses of search engines during the early stages of the Israel-Palestine conflict, noting that searching for information about Gaza on many of these platforms would yield results stating that the content violated corporate policies.
These responses reinforce the fact that search engines and AI tools are shaped by underlying values, making it increasingly important to critically examine the role of algorithms, AI, and the values embedded within these systems.