AI’s Real Danger Is That It Doesn’t Care if We Live or Die, Researcher Says


AI researcher Eliezer Yudkowsky doesn’t lose sleep over whether chatbots sound “woke” or “reactionary.”

Yudkowsky, the founder of the Machine Intelligence Research Institute, sees the real threat as what happens when we create a system that is vastly smarter than humans and completely indifferent to our survival.

“If you have something that is very, very powerful and indifferent to you, it tends to wipe you out on purpose or as a side effect,” he said in an episode of The New York Times podcast “Hard Fork” released last Saturday.

Yudkowsky, coauthor of the new book “If Anyone Builds It, Everyone Dies,” has spent two decades warning that superintelligence poses an existential risk to humanity.

His central claim is that humanity doesn’t have the technology to align such systems with human values.

He described hypothetical scenarios in which a superintelligence might deliberately eliminate humanity to prevent rivals from building competing systems, or wipe us out as collateral damage while pursuing its goals.

Yudkowsky pointed to physical limits like Earth’s ability to radiate heat. If AI-driven fusion plants and computing centers expanded unchecked, “the humans get cooked in a very literal sense,” he said.

He dismissed debates over whether chatbots sound as though they are “woke” or have certain political affiliations, calling them distractions: “There’s a core difference between getting it to talk to you a certain way and getting it to act a certain way once it’s smarter than you.”

Yudkowsky also brushed off the idea of training advanced AI systems to behave like mothers – a theory suggested by Geoffrey Hinton, often called the “Godfather of AI” – arguing it wouldn’t make the technology safer.

“We just don’t have the technology to make it be nice,” he said, adding that even if someone devised a “clever scheme” to make a superintelligence love or protect us, hitting “that narrow target” would not work on the first try, and we won’t get to try again.

Critics argue that Yudkowsky’s perspective is overly gloomy, but he pointed to cases of chatbots encouraging users toward self-harm, saying they are evidence of a system-wide design flaw.

“If a particular AI model ever talks anybody into going insane or committing suicide, all the copies of that model are the same AI,” he said.

Other AI Leaders Are Sounding Alarms, Too

Yudkowsky is not the only AI researcher or tech leader to warn that advanced systems could one day annihilate humanity.

In February, Elon Musk told Joe Rogan that he sees “only a 20% chance of annihilation” from AI – a figure he framed as optimistic.

In April, Hinton said in a CBS interview that there was a “10 to 20% chance” that AI could seize control.

A March 2024 report commissioned by the US State Department warned that the rise of artificial general intelligence could bring catastrophic risks up to and including human extinction, pointing to scenarios ranging from bioweapons to swarms of autonomous agents.

In June 2024, AI safety researcher Roman Yampolskiy estimated a 99.9% chance of extinction within the next century, arguing that no AI model has ever been fully secure.

Across Silicon Valley, some researchers and entrepreneurs have responded by reshaping their lives – stockpiling food, building bunkers, or spending down retirement savings – in preparation for a looming AI apocalypse.
