Is Artificial Intelligence Going Down the Path of Nuclear Weapons?

The rapid and widespread development of AI may represent a triumph of technical accomplishment. But that doesn't mean it won't take humanity into the dangerous unknown.

30 September 2022, Bremen: Robots play soccer against each other at the German Research Center for Artificial Intelligence in Bremen (DFKI). (Photo by Sina Schuldt/picture alliance via Getty Images)

This story is syndicated from the Substack newsletter Big Technology; subscribe for free here.


In front of a packed house at Amsterdam’s World Summit AI last week, I asked senior researchers at Meta (META), Google (GOOGL), IBM, and the University of Sussex to speak up if they did not want AI to mirror human intelligence. After a few silent moments, no hands went up.

The response reflected the AI industry’s ambition to build human-level cognition, even at the risk of losing control of it. AI is not sentient now—and won’t be for some time, if ever—but a determined AI industry is already releasing programs that can chat, see, and draw like humans as it tries to get there. And as it marches on, its progress risks careening into the dangerous unknown.

“I don’t think you can close Pandora’s box,” said Grady Booch, chief scientist at IBM, of eventual human-level AI. “Much like nuclear weapons, the cat is out of the bag.”

Comparing AI’s progress to nuclear weapons is apt but incomplete. AI researchers may emulate nuclear scientists’ desire to achieve technical progress despite the consequences—even if the danger is on different levels. Yet more people will access AI technology than the few governments that possess nuclear weapons, so there’s little chance of similar restraint. The industry is already showing an inability to keep up with its frenzy of breakthroughs.

The difficulty of containing AI was evident earlier this year after OpenAI introduced Dall-E, its AI art program. From the outset, OpenAI ran Dall-E with thoughtful rules to mitigate its downsides and a slow rollout to assess its impact. But as Dall-E picked up traction, even OpenAI admitted there was little it could do about copycats. “I can only speak to OpenAI,” said OpenAI researcher Lama Ahmad when asked about potential emulators.

Dall-E copycats arrived soon after and with fewer restrictions. Competitors including Stable Diffusion and Midjourney democratized a powerful technology without the barriers, and everyone started making AI pictures. Dall-E, which only onboarded 1,000 new users per week until late last month, then opened up to everyone.

Similar patterns are bound to emerge as more AI technology breaks through, regardless of the guardrails original developers employ.

It’s admittedly a strange time to discuss whether AI can mirror human intelligence—and what weird things will happen along the way—because much of what AI does today is elementary. The shortcomings and challenges of current systems are easy to point out, and many in the field prefer not to engage with longer-term questions (like whether AI can become sentient) since they believe their energy is better spent on immediate problems. Short-termists and long-termists are two separate factions in the AI world.

As we’ve seen this year, however, AI advances in a hurry. Progress in large language models made chatbots smarter, and we’re now discussing their sentience (or, more accurately, lack thereof). AI art was not in the public imagination last year, and it’s everywhere now. AI is also now creating videos from strings of text. Even if you’re a short-termist, the long term can arrive ahead of schedule. I was surprised by how many AI scientists said aloud they couldn’t—and didn’t want to—define consciousness.

There is, of course, an option not to follow the nuclear weapons scientists, and to think differently from how J. Robert Oppenheimer, who led work on the atomic bomb, put it. “When you see something that is technically sweet,” he said, “you go ahead and do it and you argue about what to do about it only after you have had your technical success.”

Perhaps more thought this time would lead to a better outcome.
