
Google employees tried to block ChatGPT rival’s rollout – media

Two AI reviewers tried in vain to sound the alarm that Bard was spewing inaccurate and harmful claims

Two employees in Google’s Responsible Innovation department tried and failed to block the release of its AI chatbot Bard last month, warning it was prone to “inaccurate and dangerous statements,” the New York Times revealed on Friday, citing insiders familiar with the process. 

The apprehensive product reviewers had already noted issues with AI large language models such as Bard and its arch-competitor ChatGPT when Google’s chief lawyer met with research and safety executives to inform them the company was prioritizing AI over all else.  

The pair’s concerns about the chatbot producing false information, hurting users who became emotionally attached, or even unleashing “tech-facilitated violence” through synthetic mass harassment were subsequently downplayed by Responsible Innovation supervisor Jen Gennai, the sources claimed. While the reviewers had urged Google to wait before releasing Bard, Gennai allegedly edited their report to remove that recommendation entirely. 


Gennai defended her actions to the Times, pointing out that the reviewers were not supposed to share their opinions on whether to proceed, since Bard was just an experiment. She claimed to have improved the report, having “corrected inaccurate assumptions, and actually added more risks and harms that needed consideration.” This made the finished product safer, she insisted. 

Google credited Gennai for its decision to release Bard as a “limited experiment,” but the chatbot is still set to be fully integrated into Google’s market-dominating search engine “soon,” according to Google’s own website. 

Google has squelched employee rebellions on the AI issue before. Last year, it fired Blake Lemoine after he claimed Bard’s predecessor LaMDA (Language Model for Dialogue Applications) had become sentient, while researcher El Mahdi El Mhamdi resigned after the company prohibited him from publishing a paper warning of cybersecurity vulnerabilities in large language models such as Bard. In 2020, AI researcher Timnit Gebru was let go after publishing research accusing Google of insufficient caution in AI development.

However, a growing faction of AI researchers, tech executives, and other influential futurists has urged Google and its competitors at Microsoft and ChatGPT maker OpenAI to slow their fast-moving “progress” until effective safeguards can be imposed on the technology. A recent open letter calling for a six-month moratorium on “giant AI experiments” attracted thousands of signatories, including OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak.

The technology’s potential to upend society by rendering many human occupations (or humans themselves) obsolete is central to many experts’ warnings, though lesser risks such as data breaches – which have already occurred at OpenAI – are also cited frequently.

April 11, 2023 at 12:20AM
