In Google We Trust

How Bias in Algorithms Leads to Discrimination

The thesis »In Google We Trust« is part of the Bachelor’s project ALL WATCHED OVER BY ALGORITHMS OF LOVING GRACE. It seeks to show how bias in algorithms leads to discrimination.
1. The first part gives an insight into the general perception of algorithms, presents various myths surrounding them, and then demystifies them.
2. The second part explains the theoretical principle of homophily (love of the same) and why using the assumption that similarity breeds connection to sort and filter users into online neighborhoods can prove problematic.
3. The third part points out situations where Google’s algorithms lead to inequality and discrimination online: on the one hand, they reproduce a so-called neat history that erases and marginalizes underrepresented groups; on the other hand, Google released an application that actively discriminated against people by categorizing them incorrectly, a consequence of the data used to train the classification AI.