Reducing Bias in AI
How to Reduce Bias in AI

Bias in machine learning

Stories of bias in machine learning algorithms have been well publicized in recent years. From sexist AI recruiting tools to racist AI sentencing guidelines, there is no shortage of disheartening articles about AI perpetuating prejudiced practices, unconscious bias, and everything in between. As we move to a world in which more and more of what we experience is determined by 'intelligent' algorithms, is the future as dark as some might predict?

I, for one, am optimistic. While we might be a long way from social equity, the same cannot be said for unbiased AI. It is a problem that is being discussed widely, and advances are already being made to tackle the issue. In the future, I see AI forming a large part of the solution to the problem of inequity.

In this blog, I'll describe DebiasML, a practical yet remarkably effective approach I developed for reducing bias. When compared with generative adversarial networks (GANs) on the UCI Adult Data Set, DebiasML achieved a 17% higher F1 score (while being similarly effective at reducing bias), was 10 times faster to train, and is easier to implement and model agnostic.

Bias versus prediction accuracy

In a nutshell, supervised machine learning algorithms take historical data and learn a mapping from input to output in order to make predictions in the future. By construction, such algorithms replicate the biases observed in the data they are trained on (and in some cases, particularly where the disadvantaged groups are minorities, they may even amplify them). Why does this matter? After all, biases could simply be viewed as correlations in the data and, without those, our algorithm wouldn't be a very good predictor.

It's important to realize that for many problems, measures of predictive power only quantify an algorithm's ability to replicate the past. If all you want to do is automate a process, then this might be the only metric of interest. But what if you want to do more than simply repeat past performance? What if you want to exceed it? You don't just want to automate your existing hiring process, you want to improve it. That's a different problem. To do that, you need to explore the state space beyond where you have already been.

Why reduce bias?

In some cases, the motivation for building unbiased algorithms might be ethical. Companies using biased algorithms are exposed to reputational risk. There may also be legal reasons. U.S. federal law prohibits discrimination based on the following protected characteristics: race, religion, national origin, age, sex, pregnancy, familial status, disability status, veteran status and genetic information.

But the argument for reducing bias isn't just ethical, it's financial too. Multiple studies have shown that teams and companies that are more diverse in gender and race significantly outperform those that aren't. Decreasing bias in automated recruitment software is one way in which companies can hire more diversely and ensure their organizations are more representative of their customers.

Measuring bias

So what does an unbiased algorithm look like? In simple terms, we can think of it as being symmetric.
For example, in an algorithm that decides who gets a job, the probability of being hired if you're male should be equal to the probability of being hired if you're female.

To quantify bias, I define a metric called the bias factor which, for a given characteristic, is simply a measure of how many times more likely a given prediction is for one group than for another. So, if the probability of males being hired is greater than the probability of females being hired, the bias factor tells us how many times more likely you are to be hired as a male, and is given by:

bias factor = P(hired | male) / P(hired | female)

More generally, let the sensitive and target features both be binary and denoted Z and Y, respectively. The bias factor is then defined as:

bias factor = max( P(Y = 1 | Z = 1) / P(Y = 1 | Z = 0), P(Y = 1 | Z = 0) / P(Y = 1 | Z = 1) )

Taking the maximum of a quotient and its reciprocal ensures the bias factor is always greater than or equal to one, and is exactly one in the case where there is no bias.

Data

To tackle this problem, I used census income data, which divides the population into two groups: high earners (with annual income greater than $50K) and low earners. There are 32,561 data points with 14 features, and the dataset is imbalanced, with three quarters of the population earning less than $50K. Our task will be to train a model to identify high earners while reducing gender and racial bias in our predictions.

A quick look at our data shows that the population is overwhelmingly male and white.

[Figure — Data: population breakdowns for the sensitive features]

Looking at the proportion of each group (for each sensitive feature) that are high earners gives us a concrete way of identifying biases in our data.

[Figure — Data: proportions (not counts) of high earners in each group, for each sensitive feature]

From the above plots we can see that, for the sensitive feature sex, the bias is in favor of males, and for the sensitive feature race, the bias is in favor of white and Asian people. Based on this, we convert race into a single binary feature, classifying people as either white/Asian or neither. In doing this, we find that the bias factors in our data for sex and race are 2.8 and 2.88, in favor of male and white/Asian people, respectively.

Bias in predictions

To see how biased an algorithm could be, I trained a three-layer, fully connected neural network to identify high earners and then made predictions on a test set.

[Figure — Classifier: three-layer fully connected neural network]

Sure enough, I found my classifier to be biased. The probability of predicting men in the test set as high earners was 2.8 times that for women, and white/Asian people were 4.4 times more likely to be predicted as high earners. It's noteworthy that the algorithm is not just replicating, but amplifying, the racial bias in the data.

Guidelines from the U.S. Equal Employment Opportunity Commission (EEOC) determine that a company's selection system has an adverse impact on a particular group if the bias factor is greater than 1.25. We use this as a starting point and aim to reduce the bias factor to below 1.25.

Why not just omit the sensitive features?

An obvious question you might be asking at this point is, why don't I just omit the sensitive features? The short answer is that it doesn't work. Even after removing a sensitive feature, its correlations with other features in the data remain. Often, data contain features that are strongly correlated, which means that removing the sensitive feature has little to no impact in reducing bias. A good example of this is zip code which, in many areas, is so strongly correlated with race that it effectively serves as a proxy.
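Before looking at how to tackle the problem, here is a minimal sketch of how the bias factor defined above can be computed from a set of test-set predictions. It assumes a pandas DataFrame and hypothetical column names; it is illustrative only, not DebiasML's actual API.

```python
import pandas as pd

def bias_factor(df: pd.DataFrame, sensitive: str, prediction: str) -> float:
    """Bias factor for a binary sensitive feature and 0/1 predictions.

    Assumes the sensitive column has exactly two groups. Takes the ratio of
    positive-prediction rates between the groups, and returns the max of that
    quotient and its reciprocal, so the result is always >= 1.
    """
    rates = df.groupby(sensitive)[prediction].mean()
    ratio = rates.iloc[0] / rates.iloc[1]
    return max(ratio, 1.0 / ratio)

# Hypothetical usage, e.g. with columns 'sex' and 'predicted_high_earner':
# bf = bias_factor(test_df, sensitive='sex', prediction='predicted_high_earner')
# adverse_impact = bf > 1.25   # the EEOC threshold discussed above
```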
How to tackle the problem?

In the paper "Fairness Constraints: Mechanisms for Fair Classification" (2017), Zafar et al. define the concept of decision boundary fairness and incorporate it by introducing a constraint on the optimization process during training. In an excellent blog post, Stijn Tonk takes inspiration from the paper "Learning to Pivot with Adversarial Networks" (2017) and uses GANs to train a model in which the distributions of predictions for different groups (for a given characteristic) match.

DebiasML

In developing DebiasML, I took a different approach, starting with a higher-level view. There are three key components of any supervised machine learning algorithm one could change:

- Training data: the data we use to learn the mapping from input to output.
- Model: this determines the complexity of the relationships in the data we are able to replicate.
- Success metric: the metric we use to judge the performance of our model in training, i.e. the cost function.

We've already seen that the bias in our algorithm is coming from the data, so why not try to fix that? If we have a clear picture of what the world we want to live in looks like, we can nudge our training data in that direction. Any decent model trained for accuracy on unbiased data should produce unbiased predictions. There are a number of benefits to this approach:

- Changes to the data are transparent. We can easily see their impact by looking at the resulting changes in correlation.
- It's model agnostic. GANs and neural networks are pretty cool, but they can be overkill for many problems. DebiasML leaves the choice of model to the data scientist.

This sounds like a great idea, but how does one "fix" the data?

Naive random oversampling

We've established that our data is biased simply by looking at the proportions of high earners in different groups, so why not try to fix the data by addressing this asymmetry directly? In DebiasML, I do this using naive random oversampling, a technique that is used extensively for data imbalance problems but has not been widely adopted for tackling bias.

[Figure — DebiasML: naive random oversampling]

To illustrate the idea in the example I've been using: for sex, the majority class would be high-earning males and the minority class would be high-earning females. The latter are the data points to oversample. Reducing bias along two dimensions (both gender and race) gets a little more complicated. The majority class here is high-earning white/Asian males. There are three minority classes we separately oversample from, each by a different amount: high-earning white/Asian women, high-earning non-white/Asian men and high-earning non-white/Asian women. A sketch of the single-feature case is shown below.
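Here is a minimal sketch of naive random oversampling for a single binary sensitive feature, assuming the training data sits in a pandas DataFrame with hypothetical column names; the actual DebiasML implementation (linked at the end of the post) handles the general case with multiple sensitive features by oversampling each minority pool separately.

```python
import pandas as pd

def oversample_minority(df, sensitive, target, advantaged, disadvantaged, seed=0):
    """Naive random oversampling: duplicate randomly chosen positive (high-earner)
    rows from the disadvantaged group until its positive rate matches that of the
    advantaged group. Returns a new, larger training DataFrame."""
    adv = df[df[sensitive] == advantaged]
    dis = df[df[sensitive] == disadvantaged]

    target_rate = adv[target].mean()          # positive rate we want to match
    dis_pos = dis[dis[target] == 1]
    dis_neg = dis[dis[target] == 0]

    # We need n_pos / (n_pos + n_neg) == target_rate for the disadvantaged group,
    # i.e. n_pos == target_rate * n_neg / (1 - target_rate); sample the shortfall.
    n_pos_needed = int(round(target_rate * len(dis_neg) / (1 - target_rate)))
    n_extra = max(n_pos_needed - len(dis_pos), 0)

    extra = dis_pos.sample(n=n_extra, replace=True, random_state=seed)
    return pd.concat([df, extra], ignore_index=True)

# Hypothetical usage with columns 'sex' and 'high_earner':
# train_df = oversample_minority(train_df, sensitive='sex', target='high_earner',
#                                advantaged='Male', disadvantaged='Female')
```

For the two-dimensional case described above, the same idea is applied to each of the three minority pools in turn, each with its own shortfall to make up.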
Bias reduction and prediction performance

It turns out that for our census income data, this works remarkably well! The graphs below show the distributions (probability density functions) of predicted probabilities on the test set when trained on different variations of the data. For the top row, I trained on the original data; for the second row, I trained on the same data with the sensitive features omitted; and for the bottom row, I trained on the oversampled data. Low probabilities (on the x-axis) correspond to low earners and higher probabilities correspond to high earners. The bias factors in each case are shown on the graphs, and the prediction performance is given in the table further down.

[Figure — Results: training on the original data (top row), the original data with sensitive features removed (middle row) and DebiasML-oversampled data (bottom row)]

As we move down the rows of graphs, we can see that the weight for the disadvantaged class is being redistributed and shifted to the right. It is noteworthy that omitting the sensitive feature may not reduce bias at all; it proves somewhat effective in reducing the racial bias factor (from 4.35 to 2.55), but the same cannot be said for gender bias, where the bias factor actually increases slightly (from 2.79 to 2.92).

[Table — Results: prediction performance]

When training on the oversampled data, we can see (from the graphs) that the bias factors have reduced significantly (and, importantly, to below 1.25) with only a small degradation of 2% in both accuracy and F1 score (as shown in the table above).

So, DebiasML clearly works, but how does it compare to other methods currently being used? Let's take a look.

DebiasML vs GANs

The graphs below show the distributions of predictions again, this time using GANs to reduce bias. A comparison of the prediction performance is given in the table that follows.

[Figure — Results: generative adversarial network]

[Table — Results: prediction performance comparison of DebiasML with GANs]

Comparing GANs with DebiasML, you can see the results are similar on bias reduction and accuracy, but DebiasML achieves a significantly higher F1 score. This means that DebiasML is better, overall, at finding the people we are actually interested in finding. In this case, that's high earners, but in another application you could imagine it might be the people you want to hire.

Another notable difference between DebiasML and the GAN is the impact on the distribution of predictions. While GANs act to match the full distributions for the sensitive features, DebiasML does not. Instead, it pushes the weight of the distributions for the disadvantaged classes to the right, leading to a bimodal distribution.

DebiasML summary

When compared with GANs on the UCI Adult dataset, DebiasML:

- achieves a 17% higher F1 score while being similarly effective at reducing gender and racial bias,
- is 10X faster to train, with fewer additional parameters to learn, and
- is model agnostic and transparent, making it an incredibly practical solution.

One disadvantage of DebiasML is that the number of distinct pools from which one needs to oversample grows exponentially with the number of sensitive features.

Why does DebiasML work?

At a high level, oversampling attacks the problem of bias at its core by attempting to eliminate the information in the data which shows that, for a given characteristic, one group has an advantage over another. Oversampling a subset of data points is exactly equivalent to increasing the weight of their contribution to the cost function in training. Given this, one way to view oversampling of some points is that we essentially give their correct classification a higher priority, which in turn affects the decision boundary.

Implemented as a re-weighting of data points in the cost function, oversampling adds no additional expense in training for a given set of oversampled data, making it extremely efficient. In comparison, using GANs approximately doubles the size of the model in training and requires an order of magnitude more iterations to train the additional parameters.
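As an illustrative aside (not taken from the DebiasML code), the equivalence between oversampling and re-weighting is easy to check on toy data with scikit-learn: fitting the same logistic regression on physically duplicated rows, or on the original rows with the duplication counts passed as sample weights, minimizes the same weighted cost and so recovers the same coefficients up to solver tolerance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: 1,000 rows, 5 features, and a per-row duplication count standing in
# for the oversampling step (here, every positive row is duplicated 3x).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)
copies = np.where(y == 1, 3, 1)

# Option 1: physically duplicate the oversampled rows.
X_dup, y_dup = np.repeat(X, copies, axis=0), np.repeat(y, copies)
clf_dup = LogisticRegression(max_iter=1000, tol=1e-8).fit(X_dup, y_dup)

# Option 2: keep the data as-is and pass the duplication counts as sample weights.
clf_w = LogisticRegression(max_iter=1000, tol=1e-8).fit(X, y, sample_weight=copies)

# Both fits minimize the same weighted cost, so the coefficients agree.
print(np.abs(clf_dup.coef_ - clf_w.coef_).max())   # ~0, up to solver tolerance
```

In DebiasML the duplication counts come from the oversampling step rather than from the labels alone, but the same re-weighting view applies, which is why it adds essentially no training cost.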
Acknowledgements

Thanks to Holly Szafarek, Emmanuel Ameisen, Adrien Treuille, Ismael Juma and Harini Sridhar for their helpful comments and feedback on this post. Thanks also to program directors Matt Rubashkin and Amber Roberts and everyone at Insight Data Science for providing the unique environment in which I was able to work on this problem.

More DebiasML

GitHub: https://github.com/leenamurgai/debias-ml
Presentation slides: http://bit.ly/debias-ml-slide
Streamlit report: http://bit.ly/debias-ml-streamlit

Are you interested in working on high-impact projects and transitioning to a career in data? Sign up to learn more about the Insight Fellows programs and start your application today.