OpenAI CEO Sam Altman is among the hundreds of tech leaders who signed a dire “Statement on AI Risk” on Tuesday about the technology’s potential threat to humankind, saying “mitigating the risk of extinction from AI should be a global priority.” Photo by Jim Lo Scalzo/EPA-EFE

May 30 (UPI) — Artificial intelligence researchers, scientists and tech industry leaders issued a dire warning Tuesday about AI’s threat to humankind and the “risk of extinction.”

Sam Altman, chief executive officer of ChatGPT-maker OpenAI, and Geoffrey Hinton, the artificial intelligence pioneer known as the “Godfather of AI” who recently quit Google to focus on AI threat issues, joined hundreds of tech leaders to sign a single-sentence, 22-word “Statement on AI Risk.”

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement was posted by the San Francisco-based nonprofit Center for AI Safety on its website.

“We didn’t want to push for a very large menu of 30 potential interventions. When that happens, it dilutes the message,” Dan Hendrycks, executive director of the Center for AI Safety, explained.

In addition to signatures from high-level executives at Microsoft and Google, dozens of professors at MIT, Harvard and Stanford also signed the statement.

“There’s a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority,” Hendrycks said. “So we had to get people to sort of come out of the closet, so to speak, on this issue, because many were sort of silently speaking among each other.”

While some in the tech industry express concern over AI’s growing sophistication and the possibility that it could become impossible to control in the future, others doubt such predictions, arguing that AI still cannot handle mundane tasks like driving a car.

Tuesday’s “Statement on AI Risk” is not the first dire warning from tech experts about the potential impact of artificial intelligence. Earlier this year, Elon Musk and thousands of tech leaders called for a six-month pause in the AI race to prevent “profound risks to society and humanity.”

The open letter to AI labs was signed in March by Musk, Apple co-founder Steve Wozniak, politician Andrew Yang and thousands of other big-name tech experts. As of Tuesday, the letter had nearly 32,000 signatures.

“Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources,” the letter, published by the nonprofit Future of Life Institute, warned.

A number of countries are currently working to enact AI regulations; the European Union leads the way with its AI Act, which is expected to be approved sometime this year.

Lawmakers in Washington announced last month that they are also working on new U.S. legislation to govern AI tools that, according to Sen. Chuck Schumer, D-N.Y., could “prevent potentially catastrophic damage to our country.”
