
AI experts disown Musk-backed campaign citing their research By Reuters



© Reuters. FILE PHOTO: Tesla founder Elon Musk attends Offshore Northern Seas 2022 in Stavanger, Norway August 29, 2022. NTB/Carina Johansen via REUTERS

By Martin Coulter

LONDON (Reuters) - Four artificial intelligence experts have expressed concern after their work was cited in an open letter – co-signed by Elon Musk – demanding an urgent pause in research.

The letter, dated March 22 and with more than 1,800 signatures by Friday, called for a six-month circuit-breaker in the development of systems “more powerful” than Microsoft-backed OpenAI’s new GPT-4, which can hold human-like conversation, compose songs and summarise lengthy documents.

Since GPT-4’s predecessor ChatGPT was released last year, rival companies have rushed to launch similar products.

The open letter says AI systems with “human-competitive intelligence” pose profound risks to humanity, citing 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google (NASDAQ:GOOGL) and its subsidiary DeepMind.

Civil society groups in the U.S. and EU have since pressed lawmakers to rein in OpenAI’s research. OpenAI did not immediately respond to requests for comment.

Critics have accused the Future of Life Institute (FLI), the organisation behind the letter which is primarily funded by the Musk Foundation, of prioritising imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases.

Among the research cited was “On the Dangers of Stochastic Parrots”, a paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google.

Mitchell, now chief ethical scientist at AI firm Hugging Face, criticised the letter, telling Reuters it was unclear what counted as “more powerful than GPT-4”.

“By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”

Mitchell and her co-authors – Timnit Gebru, Emily M. Bender, and Angelina McMillan-Major – subsequently published a response to the letter, accusing its authors of “fearmongering and AI hype”.

“It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a ‘flourishing’ or ‘potentially catastrophic’ future,” they wrote.

“Accountability properly lies not with the artefacts but with their builders.”

FLI president Max Tegmark told Reuters the campaign was not an attempt to hinder OpenAI’s corporate advantage.

“It’s quite hilarious. I’ve seen people say, ‘Elon Musk is trying to slow down the competition,’” he said, adding that Musk had no role in drafting the letter. “This is not about one company.”

RISKS NOW

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, told Reuters she agreed with some points in the letter, but took issue with the way in which her work was cited.

She last year co-authored a research paper arguing the widespread use of AI already posed serious risks.

Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.

She said: “AI does not need to reach human-level intelligence to exacerbate those risks.

“There are non-existential risks that are really, really important, but don’t receive the same kind of Hollywood-level attention.”

Asked to comment on the criticism, FLI’s Tegmark said both short-term and long-term risks of AI should be taken seriously.

“If we cite someone, it just means we claim they’re endorsing that sentence. It doesn’t mean they’re endorsing the letter, or that we endorse everything they think,” he told Reuters.

Dan Hendrycks, director of the California-based Center for AI Safety, who was also cited in the letter, stood by its contents, telling Reuters it was sensible to consider black swan events – those which appear unlikely, but would have devastating consequences.

The open letter also warned that generative AI tools could be used to flood the internet with “propaganda and untruth”.

Dori-Hacohen said it was “pretty rich” for Musk to have signed it, citing a reported rise in misinformation on Twitter following his acquisition of the platform, documented by civil society group Common Cause and others.

Musk and Twitter did not immediately respond to requests for comment.
