
Time to freak out over AI’s growing power?

by WorldTribune Staff, October 8, 2023

On March 22 of this year, an open letter signed by more than 33,000 individuals, including Elon Musk and Apple co-founder Steve Wozniak, called for a six-month moratorium on giant Artificial Intelligence (AI) experiments.

"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the letter said.

The letter stated: "As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

"Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

In an op-ed for Time magazine, the individual widely regarded as a founder of the AI alignment field insisted that pausing AI development is not enough.

"We need to shut it all down," Eliezer Yudkowsky wrote.

"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter," Yudkowsky wrote.

Yudkowsky, a decision theorist from the U.S. who leads research at the Machine Intelligence Research Institute, has been working on aligning Artificial General Intelligence since 2001.

"I have respect for everyone who stepped up and signed" the letter. "It’s an improvement on the margin," Yudkowsky wrote. "I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it."

The key issue, Yudkowsky noted, "is not 'human-competitive' intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence."

If AI does reach smarter-than-human intelligence, Yudkowsky said, "the likely result of humanity facing down an opposed superhuman intelligence is a total loss. Valid metaphors include 'a 10-year-old trying to play chess against Stockfish 15,' 'the 11th century trying to fight the 21st century,' and 'Australopithecus trying to fight Homo sapiens.' "

Yudkowsky continued: "It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence — not perfect safety, safety in the sense of 'not killing literally everyone' — could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone."

Conservative radio host Dan Bongino called Yudkowsky's op-ed the "single-most frightening article I've ever read."

