Ethical AI: Lessons from Google AI Principles

By John White | Last Updated on Jun 14, 2021

Is your organization using AI/machine learning for several of its products, or planning to use AI models widely for forthcoming products? Do you have a set of AI guiding principles in place for stakeholders such as product managers, data scientists, and machine learning researchers to make sure that safe and unbiased AI is used for developing AI-based solutions? Are you planning to create AI guiding principles for other AI stakeholders, including business stakeholders, customers, and partners?

Suggested Read: Artificial Intelligence business potential

If the answer to the above questions is not “yes,” you should start thinking about laying down AI guiding principles sooner rather than later, to help everyone from the executive team to product management to data scientists plan, build, test, deploy, and govern your AI-based products. The rapidly growing capabilities of AI-based systems have started inviting questions from business stakeholders (including customers and partners) asking for details on the impact, governance, ethics, and accountability of AI-based products embedded into various business processes and workflows. No longer can a company afford to hide these details behind IP-related or privacy concerns.

The guiding principles discussed in this post are built on Google’s principles for developing AI-based products. These principles are:

  • Overall beneficial to the business
  • Avoiding unfair bias against a group of user entities
  • Ensuring customer safety (freedom from business risk)
  • Trustworthy (customers can ask for an explanation)
  • Respecting customer data privacy
  • Continuous governance
  • Built using the best AI tools & frameworks

Also read: Most popular AI models

Overall Beneficial to the Business

AI/machine learning models should be built to solve complex business problems while ensuring that the benefits outweigh any risks posed by the models. The following are a few examples of the different types of risks posed by the respective models:

Fake news model: The model predicts whether a piece of news is fake or not. The model has a high precision of 95% and a recall of 85%. The 85% recall means there is a (relatively small) set of fake news that fails to be predicted as fake and therefore is not filtered out by the model. However, the 95% precision implies that, of all the news the model flags as fake, 95% really is fake, so the model does a good job on what it does catch. The benefits of this model, in my opinion, outweigh the harm done by the false negatives.


Cancer prediction model: Let’s say a model is built for predicting cancer. The precision of the model comes out to be 90%, meaning that of all the positive predictions made by the model, 90% are correct. So far, so good. However, the recall value is also 90%. This means that of the people who actually suffer from cancer, the model correctly identified only 90%; the remaining 10% were predicted as false negatives. Is that acceptable? I do not think so. Missing one in ten cancer patients may end up hurting more than the model helps, so this model won’t be accepted.
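To make the trade-off concrete, here is a minimal sketch in plain Python showing how precision and recall are computed from confusion-matrix counts. The counts themselves are hypothetical, chosen only to roughly reproduce the percentages discussed above.

```python
# Minimal sketch: precision and recall from confusion-matrix counts.
# All counts are hypothetical, picked only to illustrate the numbers above.

def precision_recall(tp, fp, fn):
    """Return (precision, recall) given true positives, false positives, false negatives."""
    precision = tp / (tp + fp)   # of everything flagged positive, how much was right
    recall = tp / (tp + fn)      # of everything actually positive, how much was caught
    return precision, recall

# Fake-news model: 850 fake articles caught, 45 real articles wrongly flagged,
# 150 fake articles missed (the false negatives that slip past the filter).
p, r = precision_recall(tp=850, fp=45, fn=150)
print(f"fake-news model: precision={p:.0%}, recall={r:.0%}")

# Cancer model: the 10% of real cancer cases missed (false negatives)
# is what makes this model unacceptable despite the same headline numbers.
p, r = precision_recall(tp=90, fp=10, fn=10)
print(f"cancer model:    precision={p:.0%}, recall={r:.0%}")
```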

Avoiding Unfair Bias Against a Group of User Entities (Biased or Not?)

AI/ML models often get trained on data sets with the underlying assumption that the selected data set is unbiased, or in ignorance of its actual bias. The reality is often different. While building models, both the feature set and the data associated with those features need to be checked for bias (a minimal sketch follows the list below). Bias needs to be tested during both:

  • The model training phase, and
  • Once the model is built and ready to be tested for moving it into production.
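As one minimal sketch of the first check, you can simply inspect how well each user group is represented in the training data before the model is ever trained. The data, group column, and the 10% threshold below are illustrative assumptions, not part of any standard framework.

```python
from collections import Counter

# Minimal sketch: checking group representation in a training set before training.
# The rows, the "skin_tone" attribute, and the 10% threshold are illustrative only.
samples = [
    {"skin_tone": "light", "label": 1},
    {"skin_tone": "light", "label": 0},
    {"skin_tone": "dark", "label": 1},
    # ... a real dataset would have many more rows
]

counts = Counter(row["skin_tone"] for row in samples)
total = sum(counts.values())

for group, n in counts.items():
    share = n / total
    flag = "  <-- possibly under-represented" if share < 0.10 else ""
    print(f"{group}: {n} samples ({share:.1%}){flag}")
```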

Read: Machine learning and artificial intelligence

Let’s consider a few examples to understand the bias in training datasets:

Bias in an image identification model: Let’s say a model is built to classify the human beings in a given image. Bias could get introduced into the model if it is trained only with images of people with white skin. The model, when tested with images depicting people of other skin colors, would then fail to classify human beings accurately.

Bias in a hiring model: Models built for hiring could be subject to biases such as preferring men or women for specific roles, favoring candidates with white-sounding names, or favoring candidates with particular skill sets for specific positions.

Bias in a criminal prediction model: One could see bias in a criminal (recidivism) prediction model if, for example, a person with black skin who has already committed a crime is predicted to have a higher likelihood of committing another crime than a person with white skin in the same situation. In such cases, the reported evaluation metrics have shown almost 45% false positives.
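To illustrate how such a disparity could be surfaced during the testing phase, here is a minimal sketch that compares false positive rates across groups on a held-out set. The labels, predictions, and group names are synthetic and purely hypothetical.

```python
# Minimal sketch: comparing false positive rates across groups on a held-out set.
# Labels, predictions, and group names below are synthetic and purely illustrative.

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives that were wrongly predicted positive."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

records = [
    # (group, actually_reoffended, predicted_to_reoffend)
    ("group_a", 0, 1), ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 0, 0), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

for group in ("group_a", "group_b"):
    y_true = [t for g, t, _ in records if g == group]
    y_pred = [p for g, _, p in records if g == group]
    print(f"{group}: false positive rate = {false_positive_rate(y_true, y_pred):.0%}")
```

A large gap between the two rates is the kind of signal that should block a model from moving into production until the bias is understood and addressed.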

One must also understand that there are two different kinds of bias: bias based on experience (what the data actually reflects) and bias that amounts to discrimination.

Continuous Governance

A machine learning model lifecycle includes aspects related to some of the following:

  • Data
  • Feature engineering
  • Model building (training/testing)
  • ML pipeline/infrastructure

Featured article: How do machines learn?

As part of the AI guiding principles, continuous governance controls should be put in place to audit the artifacts connected to all of the above.
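As one hedged illustration of what such a control might look like in practice, a team could record a small audit record alongside every model it ships, covering the data, features, evaluation results, and pipeline version. Every field name below is an assumption for the sketch, not a standard schema.

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal sketch of a governance audit record covering data, features, model,
# and pipeline. All field names are illustrative assumptions, not a standard.

def audit_record(training_data: bytes, feature_list, metrics, pipeline_version):
    return {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "training_data_sha256": hashlib.sha256(training_data).hexdigest(),
        "features": sorted(feature_list),
        "evaluation_metrics": metrics,        # e.g. precision/recall, per-group rates
        "pipeline_version": pipeline_version,
    }

record = audit_record(
    training_data=b"...raw training file bytes...",
    feature_list=["age", "income", "tenure_months"],
    metrics={"precision": 0.95, "recall": 0.85},
    pipeline_version="2021.06.1",
)
print(json.dumps(record, indent=2))
```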

Summary

In this post, you learned about the AI guiding principles you should consider setting for your AI/ML team and business stakeholders, including executive management, customers, and partners, for developing and governing AI-based solutions. Some of the most important AI guiding principles include safety, avoiding bias, and trustworthiness/explainability.