
OpenAI, Google ignoring risks in race for advanced AI, should allow ‘right to warn’ public: employees



A group of AI whistleblowers claim tech giants like Google and ChatGPT creator OpenAI are locked in a reckless race to develop technology that could endanger humanity – and demanded “a right to warn” the public in an open letter Tuesday.

Signed by current and former employees of OpenAI, Google DeepMind and Anthropic, the open letter cautioned that “AI companies have strong financial incentives to avoid effective oversight” and cited a lack of federal rules on developing advanced AI.

The workers point to potential risks including the spread of misinformation, worsening inequality and even “loss of control of autonomous AI systems potentially resulting in human extinction” – especially as OpenAI and other firms pursue so-called artificial general intelligence, with capacities on par with or surpassing the human mind.

Sam Altman is the CEO of ChatGPT creator OpenAI. Getty Images

“Companies are racing to develop and deploy ever more powerful artificial intelligence, disregarding the risks and impact of AI,” former OpenAI employee Daniel Kokotajlo, one of the letter’s organizers, said in a statement. “I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” Kokotajlo added.

The letter drew endorsements from two prominent experts known as the “Godfathers of AI” — Geoffrey Hinton, who warned last year that the threat of rogue AI was “more urgent” to humanity than climate change, and Canadian computer scientist Yoshua Bengio. Famed British AI researcher Stuart Russell also backed the letter.

The letter asks AI giants to commit to four principles designed to boost transparency and protect whistleblowers who speak out publicly.

Those include an agreement not to retaliate against employees who speak out about safety concerns and to support an anonymous system for whistleblowers to alert the public and regulators about risks.

Geoffrey Hinton, a famed researcher known as the “Godfather of AI,” endorsed the letter. REUTERS

The AI firms are also asked to allow a “culture of open criticism” so long as no trade secrets are disclosed, and pledge not to enter into or enforce non-disparagement agreements or non-disclosure agreements.

As of Tuesday morning, the letter had 13 AI workers as signers. Of that total, 11 are current or former OpenAI employees, including Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler.

“There should be ways to share information about risks with independent experts, governments, and the public,” said Saunders. “Today, the people with the most knowledge about how frontier AI systems work and the risks related to their deployment are not fully free to speak because of possible retaliation and overly broad confidentiality agreements.”

DeepMind is a Google-owned AI research lab. REUTERS

Other signers included former Google DeepMind employee Ramana Kumar and current employee Neel Nanda, who formerly worked at Anthropic.

When reached for comment, an OpenAI spokesperson said the company has a proven track record of not releasing AI products until necessary safeguards are in place.

“We’re proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk,” OpenAI said in a statement.

11 current and former OpenAI employees signed the letter. NurPhoto via Getty Images

“We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world,” the company added.

Google and Anthropic did not immediately return requests for comment.

The letter was published just days after revelations that OpenAI had dissolved its “Superalignment” safety team, whose responsibilities included creating safety measures for artificial general intelligence (AGI) systems that “could lead to the disempowerment of humanity or even human extinction.”

Two OpenAI executives who led the team, co-founder Ilya Sutskever and Jan Leike, have since resigned from the company. Leike blasted the firm on his way out the door, claiming that safety had “taken a backseat to shiny products.”

Elsewhere, former OpenAI board member Helen Toner – who was part of the group that briefly succeeded in ousting Sam Altman as the firm’s CEO last year – alleged that he had repeatedly lied during her tenure.

Toner claimed that she and other board members did not learn about ChatGPT’s launch in November 2022 from Altman and instead found out about its debut on Twitter.

Former OpenAI board member Helen Toner has been critical of Sam Altman’s leadership. Getty Images for Vox Media

OpenAI has since established a new safety oversight committee that includes Altman as it begins training the new version of the AI model that powers ChatGPT.

The company pushed back on Toner’s allegations, noting that an outside review had determined that safety concerns were not a factor in Altman’s removal.

Source: NYPOST
