Face Value: How Human Influence Plays a Role in Perpetuating Bias Within Human-Algorithm Interactions


Alice Zhang, Stanford University


Bias is an inevitable part of the human experience; it shapes how we think and behave, and in many situations it is harmful. When humans interact with technology, they tend to carry this bias into each exchange. Algorithms, which cannot recognize the consequences of bias, may then exacerbate the problem, establishing a positive feedback loop that continues to discriminate against certain groups of people. As human-computer interactions increase, so does the need to address this issue. In this study, I use a qualitative methodology that synthesizes past research to analyze the role of human influence as both designer and user, and I discuss the impact each has on the feedback loop, with a particular focus on COMPAS, an algorithm that scores the risk of recidivism. I found that designer influence stems both from explicit design choices and from the underrepresentation of certain groups in the prior research on which algorithms are built, while users influence bias through how they interpret algorithmic results (whether because they are unaware of the issue or because they value convenience over objectivity). The complexity of this situation, however, must still be acknowledged: there is no single, conclusive explanation for human behavior in these interactions, and this study does not hold one perspective as more valid than another. A reasonable way to begin addressing the issue, then, is to address its overarching theme: the lack of careful thinking about algorithmic influence and interpretation.
