In Brief
Thought leaders convened with the goal of developing a road map for nations to follow as we enter a future where humans are no longer the only sentient species on the planet. So, how can we govern AI?
This weekend, Futurism got exclusive access to a closed-door round table on the global governance of AI. The event was organized by the AI Initiative of The Future Society at Harvard Kennedy School and H.E. Omar bin Sultan Al Olama, the UAE’s Minister of State for Artificial Intelligence. With over 50 of the world’s foremost thinkers, leaders, and practitioners of AI in attendance, the conversation was—to use a cliché—a hotbed for debate.
These thought leaders convened with the goal of developing a road map for nations to follow as we transition to a future where humans are no longer the only sentient species on the planet.
During one session, which was focused on how to develop rules to govern AI, a panelist opened the discussion by implying that values are universal. As such, his thought ran, there really shouldn’t be problems when trying to develop a set of basic ethics to govern AI. “Ethics is one. Right? There are not ten,” the panelist said. “I mean, no one thinks that killing is a good thing.”
Broad and deep disagreement was instantaneous.
A Difficult Discussion
A fellow panelist noted that, while the universality of ethics may exist in theory, it exists only in theory. Reality is far more complex. “Once we start talking about privacy rights, everyone has a very different view,” he noted. And he highlighted how nations value even those things we consider the most basic and fundamental, like human life, differently. “Once we start considering rights for women and minorities, nations don’t agree,” he said.
There was a general consensus regarding this point, and another panelist offered a potential solution, suggesting that one way forward may be developing regional ethics. “If we are adopting the same policies in the West, and then the nations in the East are adopting the same policies, then those nations should come together to reduce redundancies,” she said. “From there, we can find our commonalities.”
Others spoke out, noting that, as long as various players continue to have competing goals — preserving jobs, preserving the economy, optimizing government efficiency, saving the environment, satisfying investors — there is little hope for any consensus. “What do we want to say we actually value?” asked one exasperated man. “Until we make that decision, all of these talks are just B.S.,” he concluded.
The conversation turned to who should lead the regulatory efforts. They couldn’t even agree on this.
“Who is going to lead an international cooperation? Because we have a lot of international organizations,” one man noted as the conversation turned.
“Do we really want to say this is about ‘the world’?” another shot back, asserting that the group had no right to talk about “the world” given that a significant portion of the planet wasn’t represented. “I’m not sure how many people are from the Global South. We are blessed with one person from Japan, but we’re mostly all Western,” he said.
From there, the conversation spun out. “One global hub isn’t possible at this point,” a panelist said. “What we should be pushing for is just more international cooperation.”
The panelist who had posed the question responded, “So you think there is no need to form one cohesive whole for everything that is going on?”
“I think it would be beneficial in some ways,” the respondent conceded, “but it’s just too early.”
Another attendee, who had observed the conversation’s many turns, succinctly summed up the consensus, stating that we have a long way to go before we can begin speaking in definitive terms about international cooperative efforts. “I’m not sure if we are ready for the global level,” he said. “There’s so much research still being done. We need to solve many things before we come to this traditional standardization.”
The frustration was palpable in both words and countenances. “I think it’s too late for a lot of things, like the governance of people’s data in the States [the United States],” one panelist pointed out. The conversation wound down after this lamentable fact was noted.
Yet, the desire to say something decisive, something that seemed to inspire more hope, was strong. One man spoke up and quietly ventured that some solution may not be that far beyond our reach. “I mean, you can regulate [AI] though. We’ve chosen not to modify human genomes, for example,” he said.
But of course, that’s not entirely true everywhere: China does not have strong regulations surrounding gene editing, and trials there are already underway.
Questions and Interest
If the absence of solutions here surprises you, you likely aren’t too familiar with artificial intelligence or how young this industry truly is. There’s still a lot yet to be determined. In fact, at this point, basically all we have are questions and problems, which is precisely why this round table took place — to begin discussions about clear and tangible goals.
And these conversations, intense as they are, serve as proof that, while we’re short on solutions at the moment, there’s no shortage of interest.
Towards the end of the conversation, one panelist noted this point, a slight hint of hope in his voice. “The number of both technical papers and start-up companies has exploded in recent years,” he offered. “It’s amazing. But we’re still pretty small. We see the same faces at all these conferences. We still have a chance to make solutions.”
Though frustrations abound, and the specifics may still be a bit murky, one thing is clear: if you’re setting out to build the future of AI, there are worse places you could be than in a room with over 50 of the world’s leading minds.