AI Tech: A Risk to Humanity?

by David Kelly | May 31, 2023

The New American Magazine

https://thenewamerican.com/ai-tech-a-risk-to-humanity 


To call attention to the potentially severe risks associated with artificial intelligence (AI), the Center for AI Safety released a statement Tuesday, signed by hundreds of tech executives and scholars, warning that AI technology should be treated as a societal-scale risk to all of humanity.


“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the succinct statement declared. The growing list of signatories includes executives from Google’s DeepMind, the ChatGPT developer OpenAI, and Microsoft, along with authors, professors, and scientists concerned about the risks of AI technology.


Sam Altman, CEO of OpenAI, is a leader in AI technology. According to The Washington Post, “Altman and others have been at the forefront of the field, pushing new ‘generative’ AI to the masses, such as image generators and chatbots that can have humanlike conversations, summarize text and write computer code. OpenAI’s ChatGPT bot was the first to launch to the public in November, kicking off an arms race that led Microsoft and Google to launch their own versions earlier this year.”


With the AI “arms race” quickly bringing the technology into everyday life, a number of people in the AI community want to slow its growth, warning of a potential doomsday scenario in which the technology brings about the end of humanity. They believe that slowing the growth of AI technology would give policymakers time to build in safeguards against existential risk.


However, there is another side to the discussion on AI risks, as the Post shared: “Skeptics also point out that companies that sell AI tools can benefit from the widespread idea that they are more powerful than they actually are — and they can front-run potential regulation on shorter-term risks if they hype up those that are longer term.” 


Tuesday’s statement was not the first to raise alarms over AI technology’s potential threat to our existence. In March, a group of business leaders signed a letter “sponsored by the Future of Life Institute, a non-profit that is part of the longtermism movement — a school of philosophy that focuses on long-term risks to humanity that is popular with tech billionaires.”


As quoted by the Post, that letter stated:

Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.


Concerns over AI threats to humanity are not new. A 2019 Vox article examining the dangers of emerging technology asked whether we should be worried about AI and whether it will “transform the world — and maybe annihilate it.” The article explained:


Current AI systems frequently exhibit unintended behavior. We’ve seen AIs that find shortcuts or even cheat rather than learn to play a game fairly, figure out ways to alter their score rather than earning points through play, and otherwise take steps we don’t expect — all to meet the goal their creators set. 


As AI systems get more powerful, unintended behavior may become less charming and more dangerous. Experts have argued that powerful AI systems, whatever goals we give them, are likely to have certain predictable behavior patterns. They’ll try to accumulate more resources, which will help them achieve any goal. They’ll try to discourage us from shutting them off, since that’d make it impossible to achieve their goals. And they’ll try to keep their goals stable, which means it will be hard to edit or “tweak” them once they’re running. Even systems that don’t exhibit unintended behavior now are likely to do so when they have more resources available. 


Vox concluded that AI technology is “similar to launching a rocket. (Musk, with more of a flair for the dramatic, said it’s like summoning a demon.) The core idea is that once we have a general AI, we’ll have few options to steer it — so all the steering work needs to be done before the AI even exists, and it’s worth starting on today.”


That steering work, which in practice means regulation of AI, is what the recent letters seek. The Future of Life Institute’s March letter reportedly asked for a six-month pause on building new powerful AI tools, and “called on governments to step in and enforce a ‘moratorium’ on AI development if the companies don’t willingly agree to one.”


To address AI technology and attempt to ensure safety and “fundamental rights,” the European Union (EU) and the United States have begun building policy and governance frameworks for AI. The EU released a detailed proposal for regulating AI in 2021, though it has not yet been adopted into law.


In early May, the Biden administration announced a meeting with AI leaders to discuss “new actions that will further promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety.” That meeting was intended to “emphasize the importance of driving responsible, trustworthy, and ethical innovation” in AI tech development, while mitigating “risks and potential harms to individuals and our society.” 


The administration’s “new actions” to promote responsible innovation “include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year.”


Is AI tech a risk to humanity? Perhaps. But what may be of more concern for now is the loss of privacy, liberty, and freedom that new government policies always create.