In a reversal of previous policies, the tech giant will now permit the technology to be used for developing arms and surveillance tools
Google has significantly revised its artificial intelligence principles, removing earlier restrictions on using the technology for developing weaponry and surveillance tools. The update, announced on Tuesday, alters the company’s prior stance against applications that could cause “overall harm.”
In 2018, Google established a set of AI principles in response to criticism over its involvement in military endeavors, such as a US Department of Defense project that involved the use of AI to process data and identify targets for combat operations. The original guidelines explicitly stated that Google would not design or deploy AI for use in weapons or technologies that cause or directly facilitate injury to people, or for surveillance that violates internationally accepted norms.
The latest version of Google’s AI principles, however, has scrubbed these points. Instead, Google DeepMind CEO Demis Hassabis and senior executive for technology and society James Manyika have published a new list of the tech giant’s “core tenets” regarding the use of AI. These include a focus on innovation and collaboration and a statement that “democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”
Margaret Mitchell, who had previously co-led Google’s ethical AI team, told Bloomberg the removal of the ‘harm’ clause may suggest that the company will now work on “deploying technology directly that can kill people.”
According to The Washington Post, the tech giant has collaborated with the Israeli military since the early weeks of the Gaza war, competing with Amazon to provide artificial intelligence services. Shortly after the October 2023 Hamas attack on Israel, Google’s cloud division worked to grant the Israel Defense Forces access to AI tools, despite the company’s public assertions of limiting involvement to civilian government ministries, the paper reported last month, citing internal company documents.
Google’s reversal of its policy comes amid continued concerns over the dangers posed by AI to humanity. Geoffrey Hinton, a pioneering figure in AI and recipient of the 2024 Nobel Prize in physics, warned late last year that the technology could lead to human extinction within the next three decades, a risk he puts as high as 20%.
Hinton has warned that AI systems could eventually surpass human intelligence, escape human control and potentially cause catastrophic harm to humanity. He has urged that significant resources be allocated to AI safety and the ethical use of the technology, and that proactive safeguards be developed.