Sundar Pichai Reveals Google’s Seven Principles For AI; Says Technology Will Not Be Used for Weapons


Google’s collaboration with the Pentagon on an AI project called ‘Project Maven’ drew significant criticism from employees, who strongly urged CEO Sundar Pichai to cut ties with the Pentagon because the technology could be put to destructive use. Following the resignation of some employees over Google’s involvement in the project, the company eventually decided not to renew its contract with the US Department of Defense.

Learning from the experience, Google CEO Sundar Pichai has penned a blog post stating in clear terms that Google won’t develop or deploy AI technology that could be used for detrimental or damaging purposes, and outlining seven principles the company will follow in the development and application of AI.


“How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”

Pichai mentions in the post that these principles are not just theoretical concepts, but concrete standards that will dictate the direction of Google’s AI research. Here are the seven principles outlined by Pichai:

1. Socially Beneficial

Google will pursue AI development in different areas only after thoroughly assessing the socio-economic and cultural impact it can have, and making sure that the benefits of a particular product or technology outweigh the downsides.

2. Avoid Discrimination and Bias

Google will make sure that AI algorithms and datasets reflect a transparent and fair approach, and do not create or reinforce bias or discrimination based on race, ethnicity, gender, nationality, income, sexual orientation, ability, or political or religious belief.

3. Safety Checks

Google will implement strong safeguards to ensure that AI systems do not pose any potential risk, by closely monitoring them and adhering to stringent safety protocols during the development phase.

4. Accountability

As per Pichai’s blog post, Google’s AI research and systems ‘will be subject to appropriate human direction and control’, and its team will be open to relevant feedback, requests for explanation, and appeals.

5. Incorporation of Privacy Design Principles

These include giving users an opportunity for notice and consent, encouraging the implementation of privacy-protection tools, maintaining transparency, and placing a high priority on the safe handling of user data.

6. High Standards of Scientific Excellence

Google will adopt a multidisciplinary approach to AI development, sharing knowledge and conducting research so that domains like medicine, environmental science, and chemistry can benefit equally from the advancements.

7. Checking Abusive Application

Google will guard against the abusive application of AI by carefully assessing a technology’s primary purpose, its uniqueness and accessibility, the scale of its impact, and the nature of the company’s involvement in a project.

Pichai also revealed four application areas in which Google will refrain from designing or deploying AI technology – weapons, technologies that cause direct or indirect harm, technologies that facilitate mass surveillance in violation of internationally accepted norms, and those which contradict internationally accepted principles of law and human rights.

While these principles may not quiet fears that AI will eventually take over the world, they do show that Google is working to address the concerns of users and experts in its work.

SOURCE: Google Blog