
Here’s what you missed at the global AI debate

Doha Debates hosted four of the world’s foremost AI experts — here’s what they had to say about the challenges of artificial intelligence.

This advertising content was produced in collaboration between Vox Creative and our sponsor, without involvement from Vox Media editorial staff.

The April 3 debate on artificial intelligence took many sharp turns and thoughtful steps in search of solutions to AI’s urgent challenges, from weaponized drones and military robots to data breaches and job automation, as well as ways to amplify its opportunities, from curing diseases and mitigating climate disaster to saving lives. The debate tackled the most contentious question of all: Will AI help or harm humans globally?

Who stands to gain the most, who loses the most, who welcomes AI the most, and who fears its consequences most acutely were the questions driving the debate among four AI experts, including Nick Bostrom, the Swedish philosopher who sounded the alarm. Bostrom cautioned that AI’s short-term benefits conceal lethal long-term risks: It could destroy humanity. Kenyan data scientist Muthoni Wanyoike argued for optimism, predicting that AI will improve equality among nations. British author Dex Torricke-Barton welcomed AI but warned about tech-illiterate lawmakers and slow-to-adapt governments. And Joy Buolamwini, a Ghanaian American computer scientist, called for public oversight to reduce AI’s built-in bias.

The debate, part of Doha Debates’ new season as a forum for solutions to the world’s biggest challenges, aimed for common ground, and the speakers found it, agreeing that AI’s rise is inevitable and that the goal is to harness its power safely.

Moderator Ghida Fakhry set out careful distinctions among weak AI, strong AI and superintelligent AI, each carrying greater risks and rewards than the last, and asked whether AI is a “fundamental threat to the future of humankind” or if “the benefits outweigh the risks.”

Debate moderator Ghida Fakhry.

Who better to make the case for caution than the philosopher who wrote the book on it? The author of Superintelligence, Bostrom argued that AI’s near-term benefits can lead to long-term destruction, including human extinction, if AI slips free of human control.

“I don’t think it is ridiculous to have a conversation about lethal autonomous weapons,” Bostrom said, “even though there are other problems in the world today. If you want to prevent human society from going down this avenue of leveraging AI to make more lethal weapons, it’s easier to do so before there are large arsenals deployed.”

Debate speaker Nick Bostrom.

Bostrom said there’s an “overhyping” of AI’s possibilities now and an “underhyping” of its future possibilities, with “risks to the very survival of our species.”

His alarms stood in sharp contrast to Muthoni Wanyoike’s optimism. Wanyoike welcomed AI as a force for economic empowerment, especially for women and girls in Africa, where she co-founded Nairobi’s Women in Machine Learning and Data Science. Wanyoike celebrated AI with “mindful optimism,” which “disentangles us from the fear and fantasy of an AI apocalypse.”

Doomsday fears make headlines, Wanyoike said, but AI “promises to bring unparalleled benefits” by lifting “millions out of poverty” and improving the lives of independent farmers in Africa, up to 60 percent of whom are “women, who until the advent of mobile technology had little to no access to financial technology.”

“From where I stand, this is a future filled with a strong African voice,” Wanyoike said, “with strong African youth representation — representation of African women, African scientists and African innovators. And not just Africans but representation of the whole world.”

Debate speaker Muthoni Wanyoike.

In her follow-up, the moderator pressed Wanyoike on whether her optimism withstands the harsh intrusion of government surveillance and AI weapons, asking not “who controls AI” but whether “AI might eventually control us.”

“AI has not caused the wars that we have right now,” Wanyoike answered. “We as humans have the ability to ruin the universe and destroy it completely, or to ensure our continued existence.”

Dex Torricke-Barton took the floor next, welcoming AI and raising concerns not about the technology itself but about lagging lawmakers who routinely criticize tech leaders for every ethical misstep without understanding the technology they are trying to regulate. Torricke-Barton argued that fears of runaway robots and life-crushing AI are sensationalized.

“How many people in this audience have a smartphone?” Torricke-Barton asked. “That’s pretty much all of you. AI is built into all the apps and services you’re using today, and the world hasn’t ended yet.”

AI can harm, Torricke-Barton said, but “we have to stay calm and put that into perspective ... Think of everything wrong with the world today,” from climate change to the refugee crisis, “and really, killer robots is the thing that gets you out of bed in the morning and gets you really angry? That sounds like you’ve lost perspective to me.”

Debate speaker Dex Torricke-Barton.

A solution, he proposed, is for tech leaders and policymakers to work together rather than for lawmakers to blame the private sector. “If you’re looking for the tech industry to solve all of our problems with AI, I’m sorry to say you’re probably deluding yourself. The problems we face are societal.”

In the debate’s most revealing moment, the moderator challenged Torricke-Barton’s defense of tech companies, asking whether he, as a former communications executive at Google and Facebook, wasn’t “letting the Googles and Facebooks of this world a little easily off the hook. Don’t you think that the Mark Zuckerbergs and other leaders should be taken to task?”

Torricke-Barton took the question to heart, saying lawmakers should spend more time learning about the very technology they criticize: “Politicians love to deflect from the real question, which is, why do we have a society that is so deeply divided?”

Debate speaker Joy Buolamwini.

The debate turned to the topic of social and political equality when Joy Buolamwini called out “discrimination built into” algorithms and data sets. A computer scientist at MIT’s Media Lab and founder of the Algorithmic Justice League, Buolamwini said AI’s promoters are “overconfident and underprepared” to tackle “abuse and bias.”

Without public oversight, Buolamwini warned, AI can “compound the very social inequalities that its champions hope to overcome.”

Machine bias is well documented: Predictive policing systems have been shown to misidentify people of color and inflate their predicted risk of criminality. In a striking video shown during her presentation, Buolamwini demonstrated how facial recognition algorithms often fail to recognize black faces. She also argued that human exploitation is baked into the data mining of vulnerable communities: “We are witnessing the exploitation of the data wealth of the global south,” she said, referring to low-income countries primarily in Africa, Asia and Latin America.

The debate was livestreamed on Twitter @DohaDebates, where viewers voted for solutions, announced by debate correspondent Nelufar Hedayat.

The tweets hit all the notes: support, skepticism and fear of AI.

The livestream votes came in throughout the debate.

Common ground, not division, was the debate’s focus, a theme reinforced by the debate’s next speaker.

Moderator Ghida Fakhry with debate connector Govinda Clayton.

Govinda Clayton, the debate’s bridge-building “connector” and a conflict-resolution expert, tied the arguments together and framed the challenge: AI’s rise is inevitable, but how soon and how widely should regulations be implemented?

The moderator opened the floor to audience questions, with sharp input from students at Qatar Foundation’s Education City, the innovative collection of top universities in Doha, including the debate’s host venue, Northwestern University in Qatar.

Fakhry welcomed comments from viewers in Gaza, Palestine; Nairobi, Kenya; and Oakland, California, where audience members joined the debate through walk-in Portals equipped with livestream devices from the design team at Shared_Studios.

The debate ended as constructively as it began — as a conversation, not a contest, to find a promising path to an AI future that expands opportunities and reduces risks for as many people as possible. The night came to a close by inviting everyone to share solutions @DohaDebates with the hashtag #DearWorld.

Mark your calendars for Doha Debates’ next debate on July 24, 2019.

