How to Remove Bias from Deep Learning Models

Bias has been part of human life for as long as we have existed, and over the years we have tried to set it aside and make unbiased decisions. Yet humans continue to hold biases, and even when we hand decision-making over to algorithms in an effort to remove bias, we often end up building biased algorithms.

Examples are easy to find. A Carnegie Mellon study showed that women were shown significantly fewer online ads for high-paying jobs than men, and Microsoft’s Twitter bot, Tay, had to be taken down after it began producing racist posts.

Such biased algorithms call for serious compliance scrutiny, especially where we rely on them for critical decisions. Before we get to how to remove these biases, let’s first understand how bias creeps in.

Reasons for Bias in Deep Learning Models

A lack of appropriate training data is the most common cause of biased deep learning models. However, bias can infiltrate at different stages of the process, even before the data is collected. Here is how:

    1. Unidentified and Unintentional Bias: Pinpointing bias at the model-building stage can be difficult. You may not realize the impact of your data and design choices until the algorithm runs. For instance, Amazon’s AI recruiting tool was found to be penalizing female candidates. When the engineers discovered this, they reprogrammed it to ignore explicitly gendered words like “women’s”, only to find that it was still picking up on implicitly gendered words that correlated strongly with male candidates.
    2. Loopholes in Processes: Standard deep learning workflows do not have bias detection as a primary objective. To test performance, computer scientists typically split their data into a training set and a held-out validation set. Because both sets are drawn from the same source, the validation data carries the same biases as the training data, so the model never flags its own prejudiced results.
    3. Difficulty in Defining Unbiased Algorithms: Fairness is hard to define, and it is even harder to define in mathematical terms for an algorithm. There are multiple mathematical definitions of fairness, many of them mutually incompatible, and it is impossible to satisfy all of them in one algorithm. For instance, should a health algorithm shortlist patients based on the severity of their condition, or ensure that white and Black patients are treated at comparable rates? Both could be definitions of fairness.

      Complications like these stand in the way of creating unbiased deep learning models.
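To make the incompatibility of fairness definitions concrete, here is a toy sketch of two common mathematical criteria: demographic parity (equal selection rates across groups) and equal opportunity (equal true positive rates across groups). The groups, labels, and predictions below are invented for illustration; they show predictions that satisfy one criterion while violating the other.

```python
def selection_rate(predictions):
    """Fraction of a group that receives a positive decision."""
    return sum(predictions) / len(predictions)

def true_positive_rate(predictions, labels):
    """Among truly qualified members (label 1), the fraction selected."""
    selected = [p for p, y in zip(predictions, labels) if y == 1]
    return sum(selected) / len(selected)

# Invented data: group A has 4 qualified members, group B has 2.
preds_a, labels_a = [1, 1, 0, 0, 0], [1, 1, 1, 1, 0]
preds_b, labels_b = [1, 1, 0, 0, 0], [1, 1, 0, 0, 0]

# Demographic parity holds: both groups have a 0.4 selection rate.
# Equal opportunity fails: qualified members of group A are selected
# half as often (TPR 0.5) as qualified members of group B (TPR 1.0).
```

Satisfying one definition here structurally rules out the other, which is exactly why "just make it fair" is not a well-posed engineering requirement.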

Managing Bias When Building AI

With biases in deep learning algorithms growing more common by the day and exposing companies to the risk of litigation, we need ways to reduce them. Here are some:

    1. Selecting the Right Learning Model for the Problem: Every AI model is unique, and no single method avoids bias in all of them, but there are steps we can take while a model is being built.

      Consider supervised versus unsupervised learning. Supervised models give you control over the training data, but human bias can enter through the labels; unsupervised models can absorb whatever bias is already present in the data set.

      Excluding sensitive attributes from the model may seem to remove bias, but proxies for those attributes often remain, leaving vulnerabilities. Data scientists have to identify the best model for a given situation, try different strategies for building it, and troubleshoot before committing to one.

    2. Choose a Comprehensive Training Data Set: Ensure that your training data is diverse and representative of different groups; it is inadvisable to build separate models for separate groups. If you have insufficient data for one group, you can use weighting to increase its importance in training, but do so with extreme caution, as weighting can itself introduce new biases.

      While data scientists may do much of the heavy lifting, it is up to everyone building the deep learning model to protect it from bias in data selection.

    3. Monitoring of Performance with Real Data: Biased deep learning models are rarely intentional; they may have worked as expected in controlled environments. But regulators do not take good intentions into account when assigning liability for ethical violations. It is therefore important to replicate real-world conditions as closely as possible while building your algorithms.

      Rather than using test groups on algorithms that are already in production, run them against real data. Check for basic biases, such as “Are men outnumbering women among shortlisted candidates?”

      When you examine that data, you can look for two types of equality: equality of outcome and equality of opportunity. Equality of outcome is easier to verify, but pursuing it means knowingly accepting potentially skewed results. Equality of opportunity is harder to verify, but it is the morally sounder goal.

      It’s often practically impossible to ensure both types of equality, but oversight and real-world testing of your models should help you achieve a healthy balance.
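The weighting idea from step 2 can be sketched in a few lines: give each sample a weight inversely proportional to its group's frequency, so that an underrepresented group contributes as much total weight to training as a well-represented one. The group labels below are invented for illustration, and real use would need the caution noted above, since reweighting can amplify noise in the small group.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each sample so that every group's weights sum to the
    same total (len(groups) / number_of_groups)."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    return [total / (n_groups * counts[g]) for g in groups]

# Invented example: group B is underrepresented 8-to-2.
groups = ["A"] * 8 + ["B"] * 2
weights = inverse_frequency_weights(groups)
# Each "A" sample gets weight 0.625 and each "B" sample 2.5, so both
# groups contribute a total weight of 5.0 to the training loss.
```

These per-sample weights would typically be passed to a training loop or a library's `sample_weight`-style parameter, where supported.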
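The production monitoring in step 3 can start with something as simple as comparing selection rates across groups in real decisions. A minimal sketch, using invented numbers: the four-fifths ratio used as a cutoff here is a screening heuristic borrowed from US employment guidelines, not a statutory rule for every model.

```python
def disparate_impact_ratio(selected_by_group):
    """Ratio of the lowest group selection rate to the highest.
    Input maps group name -> (selected_count, total_count)."""
    rates = [sel / total for sel, total in selected_by_group.values()]
    return min(rates) / max(rates)

# Invented monitoring snapshot of shortlisting decisions.
outcomes = {"men": (25, 100), "women": (15, 100)}
ratio = disparate_impact_ratio(outcomes)
# Selection rates are 0.25 and 0.15, a ratio of 0.6 -- below the
# four-fifths (0.8) heuristic, so this run would be flagged for review.
```

Checks like this are cheap to run on live decision logs and catch the basic "are men outnumbering women" biases described above before a regulator does.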
