The development of superintelligent AI threatens to kick off a new technological Cold War, warned a leading artificial intelligence researcher who foresees the emerging technology tipping the military balance between the U.S. and its rivals.
Dan Hendrycks, the director of the Center for AI Safety, delivered his warning on the Threat Status weekly podcast, explaining how superintelligent AI systems risk destabilizing geopolitics.
He defined superintelligent AI as “smarter than all people at basically all things.”
Along with other prominent AI researchers, he authored a paper titled “Superintelligence Strategy” that proposes a deterrence framework for AI modeled on Cold War nuclear strategy.
“In the Cold War, we had deterrence with mutual destruction. We had nonproliferation of fissile materials to rogue actors, and we needed to focus on containment of the Soviet Union,” Mr. Hendrycks said. “And in the case of China, that largely looks like strategic competition. So ours is also about deterrence, nonproliferation and strategic competition.”
However, AI’s dual-use nature means superintelligent systems could also deliver substantial benefits to consumers.
“Since it has both these positive and negative applications, it’s not all or nothing good, or all or nothing bad,” he said. “You can be surgical about preventing some of the instability and making sure rogue actors don’t get it.”
Rogue actors who acquire superintelligent AI systems could cause significant instability in the defense world.
Mr. Hendrycks explained that rival nations may one day acquire superintelligence without instituting proper guardrails on the innovations it produces, rapidly upsetting the military balance.
“Maybe they’d come up with a new advance, like a new way of creating an anti-ballistic missile system, which would remove second strike or they have transparent oceans so they can find nuclear submarines,” he said. “And this sort of stuff would be very destabilizing. If there’s a really smart AI system that’s just good at making tons and tons of progress on things suddenly.”
The danger is compounded, Mr. Hendrycks said, by superintelligent AI’s potential resistance to modification: as AI models scale up, they resist having their value systems changed more strongly. Researchers, he said, will need to work hard to create models that reflect the values of the public, especially as AI becomes more powerful.
“Imagine you had an assembly of randomly sampled citizens, and they would vote on what they think is good or bad. That’s the thing you could align it to,” Mr. Hendrycks said. “But we were finding that as the models scale up, they have a stronger desire or preference not to be modified, not to have their values changed. So, potentially, as they become more capable, this would be a problem.”