Machine learning algorithms use statistical techniques to give computer systems the ability to "learn" from data and make predictions or classifications. However, a model's predictions depend on the data used to train it. If that data reflects human bias (unconscious or not), the model's predictions and classifications will reflect and reinforce the same biases. It is a common misconception that complex models are unbiased because they objectively evaluate raw data; in practice, they can encode human prejudice. These biases then feed into systems that create and reinforce social injustices, such as biased recidivism predictions and other generalizations based on race. The design of a system must always take into account the individuals affected by its output. This project aims to quantify bias against minority groups using various types of machine learning models. We generate confusion matrices to analyze the precision and recall of each model with respect to different divisions of people according to sex, race, and age.
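The per-group analysis described above can be sketched as follows. This is a minimal, illustrative helper (the function name `group_confusion` and the toy data are assumptions, not part of the project): for each group it tallies a binary confusion matrix and derives precision and recall, so that disparities between groups become visible.

```python
from collections import defaultdict

def group_confusion(y_true, y_pred, groups):
    """Tally a binary confusion matrix per group and compute precision/recall.

    y_true, y_pred: sequences of 0/1 labels; groups: parallel sequence of
    group identifiers (e.g. sex, race, or age bucket).
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1 and p == 1:
            counts[g]["tp"] += 1      # true positive
        elif t == 0 and p == 1:
            counts[g]["fp"] += 1      # false positive
        elif t == 1 and p == 0:
            counts[g]["fn"] += 1      # false negative
        else:
            counts[g]["tn"] += 1      # true negative

    metrics = {}
    for g, c in counts.items():
        # Guard against empty denominators for small groups.
        prec = c["tp"] / (c["tp"] + c["fp"]) if (c["tp"] + c["fp"]) else 0.0
        rec = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else 0.0
        metrics[g] = {**c, "precision": prec, "recall": rec}
    return metrics

# Toy example (hypothetical data): a gap in precision/recall between
# groups "a" and "b" would indicate the model errs unevenly across them.
m = group_confusion(
    y_true=[1, 0, 1, 0, 1, 0],
    y_pred=[1, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "b", "b", "b"],
)
```

Comparing false-positive and false-negative rates across groups in this way is the standard starting point for auditing disparate error rates, as in well-known analyses of recidivism-prediction tools.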