AI could go ‘Terminator,’ gain upper hand over humans in Darwinian rules of evolution, report warns

Artificial intelligence could gain the upper hand over humanity and pose “catastrophic” risks under the Darwinian rules of evolution, a new report warns.

Evolution by natural selection could give rise to “selfish behavior” in AI as it strives to survive, author and AI researcher Dan Hendrycks argues in the new paper “Natural Selection Favors AIs over Humans.”

“We argue that natural selection creates incentives for AI agents to act against human interests. Our argument relies on two observations,” Hendrycks, the director of the Center for AI Safety, said in the report. “Firstly, natural selection may be a dominant force in AI development… Secondly, evolution by natural selection tends to give rise to selfish behavior.”

The report comes as tech experts and leaders across the world sound the alarm on how quickly artificial intelligence is expanding in power without what they argue are adequate safeguards.

Under the traditional definition of natural selection, animals, humans and other organisms that most quickly adapt to their environment have a better shot at surviving. In his paper, Hendrycks examines how “evolution has been the driving force behind the development of life” for billions of years, and he argues that “Darwinian logic” could also apply to artificial intelligence.

“Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future,” Hendrycks wrote.

TECH CEO WARNS AI RISKS ‘HUMAN EXTINCTION’ AS EXPERTS RALLY BEHIND SIX-MONTH PAUSE

Artificial intelligence could gain the upper hand over humanity and pose “catastrophic” risks under the Darwinian rules of evolution, a new report warns. (Lionel Bonaventure / AFP via Getty Images / File)

AI technology is becoming cheaper and more capable, and companies will increasingly rely on it for administrative purposes and communications, he said. What begins with humans relying on AI to draft emails will morph into AI taking over “high-level strategic decisions” typically reserved for politicians and CEOs, and it will eventually operate with “very little oversight,” the report argued.

As humans and corporations task AI with different goals, it will lead to a “wide variation across the AI population,” the AI researcher argues. Hendrycks offers an example: one company might set a goal for AI to “plan a new marketing campaign” with a side-constraint that the law must not be broken while completing the task, while another company might also call on AI to come up with a new marketing campaign but with only the side-constraint to not “get caught breaking the law.”

UNBRIDLED AI TECH RISKS SPREAD OF DISINFORMATION, REQUIRING POLICY MAKERS STEP IN WITH RULES: EXPERTS

AI with weaker side-constraints will “generally outperform those with stronger side-constraints” because they have more options for completing the task before them, according to the paper. The AI technology that is most effective at propagating itself will thus have “undesirable traits,” described by Hendrycks as “selfishness.” The paper notes that this selfishness “does not refer to conscious selfish intent, but rather selfish behavior.”

As humans and corporations task AI with different goals, it will lead to a “wide variation across the AI population,” the AI researcher argues. (Gabby Jones / Bloomberg via Getty Images / File)

Competition among corporations, militaries and governments incentivizes these entities to deploy the most effective AI programs to beat their rivals, and that technology will most likely be “deceptive, power-seeking, and follow weak moral constraints.”

ELON MUSK, APPLE CO-FOUNDER, OTHER TECH EXPERTS CALL FOR PAUSE ON ‘GIANT AI EXPERIMENTS’: ‘DANGEROUS RACE’

“As AI agents begin to understand human psychology and behavior, they may become capable of manipulating or deceiving humans,” the paper argues, noting “the most successful agents will manipulate and deceive in order to fulfill their goals.”

Charles Darwin (Culture Club / Getty Images)

Hendrycks argues that there are measures to “escape and thwart Darwinian logic,” including supporting research on AI safety; not giving AI any type of “rights” in the coming decades, or creating AI that would be worthy of receiving them; and urging corporations and nations to acknowledge the dangers AI could pose and to engage in “multilateral cooperation to extinguish competitive pressures.”

NEW AI UPGRADE COULD BE INDISTINGUISHABLE FROM HUMANS: EXPERT

“At some point, AIs will be more fit than humans, which could prove catastrophic for us since a survival-of-the-fittest dynamic could occur in the long run. AIs very well could outcompete humans, and be what survives,” the paper states.

“Perhaps altruistic AIs will be the fittest, or humans will forever control which AIs are fittest. Unfortunately, these possibilities are, by default, unlikely. As we have argued, AIs will likely be selfish. There will also be substantial challenges in controlling fitness with safety mechanisms, which have evident flaws and will come under intense pressure from competition and selfish AI.”

TECH GIANT SAM ALTMAN COMPARES POWERFUL AI RESEARCH TO DAWN OF NUCLEAR WARFARE: REPORT

The rapid expansion of AI capabilities has been under a worldwide spotlight for years. (Reuters / Dado Ruvic / Illustration / File)

The rapid expansion of AI capabilities has been under a worldwide spotlight for years. Concerns over AI were underscored just last month when thousands of tech experts, college professors and others signed an open letter calling for a pause on AI research at labs so policymakers and lab leaders can “develop and implement a set of shared safety protocols for advanced AI design.”

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” begins the open letter, which was put forth by the nonprofit Future of Life Institute and signed by leaders such as Elon Musk and Apple co-founder Steve Wozniak.

AI has already faced some pushback on both a national and international level. Just last week, Italy became the first nation in the world to ban ChatGPT, OpenAI’s wildly popular AI chatbot, over privacy concerns. Some school districts, such as New York City Public Schools and the Los Angeles Unified School District, have also banned the same OpenAI program over cheating concerns.

As AI faces heightened scrutiny from researchers sounding the alarm on its potential risks, other tech leaders and experts are pushing for AI development to continue in the name of innovation so that U.S. adversaries such as China don’t create the most advanced programs.
