With Artificial Intelligence as a Driving Force for the Economy and Society among its central themes, the World Economic Forum's Annual Meeting recognized the life-transforming potential of artificial intelligence. The 2024 meeting featured AI sessions that brought together government officials as well as business and industry leaders, including Cohere co-founder and CEO Aidan Gomez, Microsoft CEO Satya Nadella, Salesforce Chair and CEO Marc Benioff, and OpenAI CEO Sam Altman. The meeting achieved comprehensive coverage of the subject thanks to a vast selection of topics, ranging from the economic impact of AI and risk management to ethics and education.
One of the most anticipated AI sessions, Technology in a Turbulent World, focused on topics related to the future of AI, such as human interaction, safety, and trust. The session, hosted by Fareed Zakaria, also featured an all-star lineup: Sam Altman, Marc Benioff, Julie Sweet, Jeremy Hunt, and Albert Bourla. In one of the most poignant moments of the session, Zakaria asked Sam Altman a ruthlessly simple question: "If the AI can out-analyze a human being, can out-calculate a human being, [...], what's the core competence of human beings?" Altman's reply focused mainly on the search for human connection and the interest we still take in each other, and it is distilled in one of his most highlighted quotes from the session: "Humans really care about what other humans think."
As examples of the connection he had in mind, Altman remarked that even though AI unlocked the secrets of chess long ago, there is little interest in pitting two AIs against each other. In contrast, we remain deeply invested in how humans play chess, so much so that cheating with AI is considered a serious offense. Altman also said he believed human jobs would transition to more abstract endeavors, including higher-level decision-making and curation. Altman's response contrasts sharply with Marc Benioff's views on the issue: Benioff displayed his skepticism by insinuating that, before long, WEF panels could be moderated by an AI rather than a human interlocutor.
As evidence, Benioff suggested that Zakaria could reasonably have prepared for the panel by asking ChatGPT for questions to put to Sam Altman on the state of AI. Although Benioff never made a bold claim regarding the replaceability of human beings, he didn't exactly rule the possibility out either. Just as Zakaria could have prepared for the panel using AI, Benioff cited the case of his doctors using AI to assist in the interpretation of medical imaging: we still need doctors because we trust them more than we do AI systems. Interestingly, his view seemed to hinge on the belief that AI still augments, rather than replaces, humans because of a pair of related challenges: the fact that most AI systems still hallucinate, together with the (until recently) lack of well-established regulation, leads to mistrust.
If this is so, then Benioff's suggestive view might lead to the conclusion that the lack of trustworthiness is practically the only thing keeping AI from replacing humans. The prospect is tantalizing, especially given that humans aren't paragons of trustworthiness either. Ironically, it doesn't seem far-fetched to construe interest in one another as precisely one of the mechanisms by which we humans preserve trust. We sometimes determine whether someone can be trusted by keeping them in check. Likewise, where we cannot judge someone directly, we often rely on others' reports to gauge how trustworthy they are. Altman's reply may have come across as somewhat naive, but he might have been onto something. Ditto for Benioff: concluding that the replaceability of humans is just a matter of trust vastly oversimplifies the issue, even if one can still recognize the need for a sobering counterweight to more optimistic predictions in the style of Altman's.
Finally, there is also the issue of which aspects of human life we are willing to entrust to AIs: interpreting medical imaging is one thing, but should we eventually entrust all decision-making to AI? Benioff suggested a future with AI panel moderators, but what about higher-stakes matters such as conflict resolution (at any level)? We must not forget that the 2024 World Economic Forum meeting took place against a rather grim sociopolitical backdrop: heightening tensions in the Middle East had recently become the object of worldwide focus. Ideally, this would be because of their novelty in an otherwise peaceful geopolitical landscape.
However, in a rather disturbing fashion, Israel's ongoing war against Hamas has only added to existing global conflicts, to the point that Ukraine's president Volodymyr Zelenskyy, once reluctant to leave his war-torn country, made his first in-person appearance at the WEF after addressing its participants via video in previous editions of the meeting. Zelenskyy reminded everyone listening that even if the war against Russia appears to have reached a stalemate, it still stretches on after almost two years. Furthermore, the already dire situation in Ukraine may soon be negatively affected by events unfolding in the US, where Donald Trump, widely perceived as an ally of Vladimir Putin, appears to have regained enough popularity to raise concerns about a possible return to the White House in 2025.
The 4th National Security Advisors (NSA) meeting, which took place just one day before the 2024 WEF, featured the participation of a Ukrainian delegation. Ukraine's Head of the Presidential Office, Andriy Yermak, recognized the growing number of countries joining efforts to implement the Peace Formula. Yermak also commented on how encouraging it is to see the Global South increasingly involved in Ukraine's plan for lasting peace. Finally, the Head of the Presidential Office took the NSA meeting as an opportunity to remind the participants of the consequences of the recent Russian missile attacks on the country's infrastructure and population, in addition to arguing that a ceasefire would be an inadequate resolution to the war. Ukraine's official stance on the matter is clear: the attempted ceasefire in the Donbas in 2014 did nothing to stop Russia's advances. Thus, simply ending the conflict with a still partially occupied Ukraine will bring no guarantee of peace.
Yermak's address to the NSA meeting participants was reinforced by Zelenskyy's dynamic speech on the World Economic Forum meeting's opening day. The Ukrainian president received a standing ovation after delivering an earnest plea for international support while stressing that the ongoing war is a matter of global interest, since its negative impact is not restricted to Ukraine. He was also widely quoted for his insistence on a Russian defeat as the only sustainable end to the conflict: “Putin must regret it [the war]. We need him to lose. Global unity is stronger than one man’s hatred.” The address was preceded by an invitation-only session, where Zelenskyy pitched supporting Ukraine as tantamount to increased global security in order to entice the participating chief executives to invest in rebuilding Ukraine's war-ravaged economy.
European Commission President Ursula von der Leyen expressed strong support for Ukraine and called for continued backing of the Ukrainian defense in an address where she called Russia a military, economic, and diplomatic failure. President von der Leyen also remarked that Ukraine is on the path to the European Union, stating that the corresponding negotiations "will be Europe responding to the call of history." At the risk of overstepping the boundaries of the AI discussion, I find it easy to return to Sam Altman's words: "Humans really care about what other humans think." Perhaps the big question concerning the future of AI is not whether the replacement of humans is possible. Instead, we should think harder about whether it is reasonably desirable: I find it hard to fathom a world in which we delegate conflict resolution and community-building to an AI, not because I think it impossible, but because it is remarkably close to a future in which we have stopped caring (even more) about each other.