Some of the smartest (and biggest) names in AI are once again flagging concerns about harnessing the technology. The dangers of AI are wide-ranging, and several leaders in the field — along with researchers following it closely — say that addressing those dangers may not be a high enough priority for the companies building it.
Let’s start with last week’s (fresh) turnover at OpenAI, which saw co-founder Ilya Sutskever and research scientist Jan Leike quit the AI giant. Both Sutskever and Leike had been key members of OpenAI’s “superalignment team,” which is tasked with ensuring that superintelligence remains aligned with human intent even as it outstrips our intelligence.
AI safety is lagging behind: While Sutskever left the company quietly, Leike made clear that his departure stemmed from concerns that the company was not devoting sufficient resources — particularly computing resources — to researching AI safety. Leike was also quick to point out the heavy social responsibility AI developers carry. “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike said in his resignation thread on X. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”
There are others sounding the alarm: Professor Geoffrey Hinton, a pioneer of the neural network research that underpins the technology, recently spoke to the BBC about AI’s potential social effects and military threats. In Hinton’s view, the value created by AI is going to disproportionately accrue to the rich, “and not the people whose jobs get lost.” Hinton left Google in 2023 after having an “epiphany” about the dangers of superintelligence.
Hinton has suggested that a universal basic income (UBI) might ease this dislocation by providing an avenue for transferring wealth to displaced workers — though the benefits and feasibility of UBI as a remedy for AI-related labor market shifts remain up for debate. He is also calling for new “Geneva Conventions” to regulate the international application of AI to warfare, saying that the best outcome would be a wholesale ban on military uses of AI.