This week’s readings offered a variety of perspectives on the harms and complications of machine learning and AI. The ProPublica article, Machine Bias, by Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner took both a quantitative and qualitative approach to understanding the bias inherent in risk assessment algorithms. Their findings suggest that risk assessment scores, commonly used in courtrooms around the country to predict someone’s likelihood of recidivism, are more likely to label people of color as high-risk offenders than their white counterparts. The article couples data analysis with interview-based responses from individuals directly affected by this system. With visuals embedded throughout, it seems to have the most general audience in mind: the language is very accessible, and the images allow the reader to connect with the story in a more humanistic and emotional way. The authors also break down their data analysis in a way that is understandable to a broad range of readers, which makes its potential impact more far-reaching than that of the other two publications we read this week.
The paper presented at the 2022 ACM Conference on Fairness, Accountability, and Transparency, Accountability in an Algorithmic Society: Relationality, Responsibility, and Transparency, definitely read like an academic paper. I enjoyed this piece because it discussed how we can operationalize accountability, and the authors drew on scholars and theories from several other fields to support their claims. Its interdisciplinary nature exemplified how I think we should approach public interest technology issues, since they rarely exist in isolation and require analysis from social, political, economic, and technical perspectives. The paper did take me quite a while to read, though, and I suspect its impact is limited to academic and professional audiences.
Lastly, the We are AI comics by Julia Stoyanovich and Falaah Arif Khan offered a creative and refreshing way to educate the public about AI. The colorful images and relatable examples used to explain how AI works and what its implications might be make this information accessible and useful to a broad audience. The comics are something I would feel comfortable sharing with almost anyone, regardless of age or education level. The note at the beginning of each comic was also helpful, letting readers know in a simple and direct way how the materials can be shared and how to properly credit the authors. We have talked about the importance of acknowledgment and giving credit where it’s due, especially in the tech space, so I think including that page at the front of the comics actively addresses that issue and makes it easy for people to credit the authors’ creativity.
I think each of these publications is successful in communicating the complexities of machine learning and AI in its own way. The ProPublica article was empirically driven but humanistic. The ACM conference paper was well articulated and well supported, but is likely accessible only to those already familiar with the literature. The AI comics were fun, creative, and simple, but it is difficult to imagine how people would find them if they weren’t already looking at the Data, Responsibly website.