AI workers demand stronger whistleblower protections in open letter


Current and former employees of leading artificial intelligence companies such as OpenAI, Google DeepMind and Anthropic have signed an open letter calling for greater transparency and protection from retaliation for those who speak out about the potential risks of artificial intelligence. “Unless there is effective government oversight of these corporations, current and former employees are among the few who can hold them publicly accountable,” says the letter, published Tuesday. “However, broad non-disclosure agreements prevent us from voicing our concerns, except to the very companies that may be failing to address these issues.”

The letter arrives a few weeks after a Vox investigation found that OpenAI had tried to silence departing employees by forcing them to choose between signing an aggressive non-disparagement agreement or risking the loss of their vested equity in the company. Following the report, OpenAI CEO Sam Altman said he was “really embarrassed” by the provision and claimed it had been removed from recent exit documents, though it was unclear whether it remained in effect for some employees.

Among the 13 signatories are former OpenAI employees Jacob Hilton, William Saunders and Daniel Kokotajlo. Kokotajlo said he resigned from the company after losing confidence that it would responsibly build artificial general intelligence, a term for AI systems that are as smart as or smarter than humans. Endorsed by prominent AI experts Geoffrey Hinton, Yoshua Bengio and Stuart Russell, the letter expresses serious concern about the lack of effective government oversight of AI and the financial incentives driving tech giants to invest in the technology. The authors warn that the unchecked pursuit of powerful AI systems could lead to the spread of misinformation, exacerbated inequality and even the loss of human control over autonomous systems, potentially resulting in human extinction.

“There’s a lot we don’t understand about how these systems work and whether they’ll remain in the best interest of humans as they become smarter and perhaps surpass human-level intelligence in all areas,” Kokotajlo wrote on X. “Meanwhile, there is very little oversight of this technology. Instead, we rely on the companies building it to govern themselves, even as profit motives and excitement about the technology push them to ‘move fast and break things.’ Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.”

OpenAI, Google and Anthropic did not immediately respond to a request for comment from Engadget. In a statement sent to Bloomberg, an OpenAI spokesperson said the company is proud of its “track record of delivering the most capable and safest AI systems” and believes in its “scientific approach to addressing risk.” The spokesperson added: “We agree that serious discussion is essential given the importance of this technology, and we will continue to engage with governments, civil society and other communities around the world.”

The signatories call on AI companies to adhere to four key principles:

  • Refrain from retaliating against employees who raise safety concerns

  • Support an anonymous system for whistleblowers to alert the public and regulators to risks

  • Allow a culture of open criticism

  • Avoid non-disparagement or non-disclosure agreements that restrict employees from raising risk-related concerns

The letter follows a turbulent period at OpenAI, including the dissolution of its “superalignment” safety team and the departures of co-founder Ilya Sutskever and Jan Leike, key figures who criticized the company for prioritizing “shiny products” over safety.
