
AI Poses ‘Risk of Extinction’, Should Be ‘Global Priority’ Alongside Pandemics, Wars: Experts

Global leaders should be working to reduce "the risk of extinction" from artificial intelligence technology, a group of industry chiefs and experts warned on Tuesday.

A one-line statement signed by dozens of specialists, including Sam Altman whose firm OpenAI created the ChatGPT bot, said tackling the risks from AI should be "a global priority alongside other societal-scale risks such as pandemics and nuclear war".

ChatGPT burst into the spotlight late last year, demonstrating an ability to generate essays, poems and conversations from the briefest of prompts.

The program's wild success sparked a gold rush with billions of dollars of investment into the field, but critics and insiders have raised the alarm.

Common worries include the possibility that chatbots could flood the web with disinformation, that biased algorithms will churn out racist material, or that AI-powered automation could lay waste to entire industries.

Superintelligent machines

The latest statement, housed on the website of US-based non-profit Center for AI Safety, gave no detail of the potential existential threat posed by AI.

The center said the "succinct statement" was meant to open up a discussion on the dangers of the technology.

Several of the signatories, including Geoffrey Hinton, who created some of the technology underlying AI systems and is known as one of the godfathers of the industry, have made similar warnings in the past.

Their biggest worry has been the rise of so-called artificial general intelligence (AGI) -- a loosely defined concept for the point at which machines become capable of performing a wide range of functions and can develop their own programming.

The fear is that humans would no longer have control over superintelligent machines, which experts have warned could have disastrous consequences for the species and the planet.

Dozens of academics and specialists from companies including Google and Microsoft -- both leaders in the AI field -- signed the statement.

It comes two months after Tesla boss Elon Musk and hundreds of others issued an open letter calling for a pause in the development of such technology until it could be shown to be safe.

However, Musk's letter sparked widespread criticism that dire warnings of societal collapse were hugely exaggerated and often reflected the talking points of AI boosters.

US academic Emily Bender, who co-wrote an influential paper criticising AI, said the March letter, signed by hundreds of notable figures, was "dripping with AI hype".

'Surprisingly non-biased'

Bender and other critics have slammed AI firms for refusing to publish the sources of their data or reveal how it is processed -- the so-called "black box" problem.

Among the criticism is that the algorithms could be trained on racist, sexist or politically biased material.

Altman, who is currently touring the world in a bid to help shape the global conversation around AI, has hinted several times at the global threat posed by the technology his firm is developing.

"If something goes wrong with AI, no gas mask is going to help you," he told a small group of journalists in Paris last Friday.

But he defended his firm's refusal to publish the source data, saying critics really just wanted to know if the models were biased.

"How it does on a racial bias test is what matters there," he said, adding that the latest model was "surprisingly non-biased".




from Gadgets 360 https://ift.tt/Rj1dqkt
