In A Welcome Move, Google Decides Not To Use AI For Weapons, Surveillance


Google pledged not to use its powerful artificial intelligence for weapons, illegal surveillance, and technologies that cause “overall harm”, according to media reports. However, the company clarified that it would keep working with the military in other areas, giving its cloud business the chance to pursue future lucrative government deals. 


Sundar Pichai, the chief executive officer of Alphabet Inc.’s Google, released a set of principles on 8 June after a revolt by thousands of employees of the internet giant.

What does the charter say?

The charter sets “concrete standards” for how Google will design its AI research, implement its software tools and steer clear of certain work, Pichai said in a blog post.

“How AI is developed and used will have a significant impact on society for many years to come,”

Pichai wrote.

“As a leader in AI, we feel a special responsibility to get this right.”

Some Google employees and outside critics cautiously welcomed the principles, although they felt the language gave the company ample wiggle room in future decisions. 


Under the new principles, the company will not pursue AI applications for weapons, or technologies that “gather or use information for surveillance” in violation of accepted human rights laws. The principles also state that the company will work to avoid “unjust impacts” from its AI algorithms, such as the injection of racial, sexual or political bias into automated decision-making.

“While we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,”

Pichai wrote. He said these projects would augment the critical work of those organisations and help keep civilians safe.


The seven principles were drawn up to quell concern over Google’s work on Project Maven, a Defense Department initiative to apply AI tools to drone footage. Staff protests forced Google to retreat from the contract last week. 

Thousands of Google employees had pleaded with CEO Sundar Pichai in a letter to stop providing the Pentagon with technology that could be used to improve the accuracy of drone strikes.

The signatories of the letter, which circulated within the company, wrote:

“We believe that Google should not be in the business of war.”

Google’s charter: a watershed moment

This charter is significant not just for the company but also for AI as a field. Technology giants like Google have developed software and services that give machines more control over decisions, and these capabilities are now spreading to more industries, such as the automotive, healthcare and government sectors. A driving force behind the spread is the easier access to AI building blocks that Google and Microsoft Corp. have provided through their cloud services. 

AI advances are helping medical research and providing other benefits. But the use of the technology in other areas has sparked concern among lawmakers and advocacy groups.

Microsoft CEO Satya Nadella proposed similar principles in 2016, without mentioning the military.

Academics have said that the principles regarding surveillance are not specific enough; the language is vague in places, they feel, but it is a start. Google will integrate the principles into existing product-review processes and plans to set up an internal review board this year to enforce the guidelines, according to a person familiar with the company.

“While this is our chosen approach to AI development, we also understand that there is room for many voices in this conversation,”

Pichai wrote in the blog post.

“And we will continue to share what we have learned about ways to improve AI technologies and practices.” 
