Google pledges not to use AI for weapons or surveillance


Google's principles say it will not pursue AI applications intended to cause physical injury, that involve surveillance "violating internationally accepted norms of human rights", or that present a greater "material risk of harm" than the countervailing benefits.

The restriction could help Google management defuse months of protest by thousands of employees against the company's work with the United States military to identify objects in drone video.

Last month, some of the company's employees resigned over Project Maven, a Google contract with the Pentagon that involves drone analysis, Gizmodo reported. But the company said it would keep working with the military in other areas, giving its cloud business the chance to pursue future lucrative government deals.

Aside from making the principles public, Pichai didn't specify how Google or its parent Alphabet would be accountable for conforming to them.

Google's Project Maven work with the US Defence Department came under fire from employees concerned about the direction it was taking the company. Several employees said they did not think the principles went far enough to hold Google accountable; for instance, Google's AI guidelines include a nod to following "principles of international law" but do not explicitly commit to following international human rights law.

"We will not be pursuing follow-on contracts for the Maven project", said Diane Greene, who heads Google's cloud business, though she noted that the company would see the current contract through.

Several Google employees, including former CEO and executive chairman Eric Schmidt and Matt Cutts, who used to run Google's search spam team, have left the company in the last few years to work for the Pentagon. Google and its big technology rivals have become leading sellers of AI tools, which enable computers to review large data sets to make predictions and identify patterns and anomalies faster than humans could; under the new principles, such tools should only be made available for purposes that fall in line with the objectives above.

The goal of this project was to process and catalog drone imagery, and Google's rank-and-file workers were none too pleased.

Google's Chief Executive Officer Sundar Pichai specified seven objectives for the use of artificial intelligence in a blog post on Thursday.

The internal and external protests put Google in a difficult position as it aims to recenter its business around the development and use of artificial intelligence. The Web giant, famous for its past "Don't be evil" mantra, is in the running for two multibillion-dollar Defense Department contracts for office and cloud services. AI ethicist Peter Asaro praised Google's principles for their commitment to building socially beneficial AI, avoiding bias, and building in privacy and accountability.

Google's decision to restrict military work has drawn criticism from some members of Congress.

"Ultimately, how the company enacts these principles is what will matter more than statements such as this", Asaro said.