ProPublica’s “Machine Bias” article, the ACM conference paper “Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning,” and Julia Stoyanovich and Falaah Arif Khan’s “We are AI” comic series each introduce a radically different approach to discussing Artificial Intelligence from a Public Technology perspective.
The “We are AI” comics depict the foundational principles of AI in a visual medium, using illustration to elucidate the analogies at their core. Stoyanovich and Khan’s comics serve as accessible resources for teaching the basics of AI. Given the prevalence of algorithmic systems in our daily lives, the “We are AI” series brings into focus systems we interact with both visibly and invisibly. While teaching key terms related to algorithms, Stoyanovich and Khan also go beyond them to raise questions about algorithmic morality and our role in engaging critically with these systems. I believe the comic approach of “We are AI” is powerful in lowering the educational barrier to learning about AI, stripping away the mystery and perceived objectivity that often surround the subject. Of course, given the comic medium, Stoyanovich and Khan cannot provide an in-depth investigation into the examples they discuss; while depth is not one of their goals, its absence can be considered a potential pitfall. As someone with an educational background in computer science, I see “We are AI” as a great resource that helped me recalibrate my own understanding of AI and introduced concepts related to morality that are often excluded from computer-science-focused educational spaces.
For a more in-depth investigation into the impact of algorithmic systems on people’s lives, ProPublica’s article serves as an alternative point of entry, one that calls for accountability in the use of risk assessments throughout the criminal justice system. In the ProPublica article, Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner provide the kind of extensive examination currently lacking from regulatory bodies such as the U.S. Sentencing Commission. ProPublica’s approach is compelling in showing what accountability can look like, rooted in real-life examples of bias observed in Northpointe’s COMPAS risk assessment software. One pitfall of ProPublica’s article is that it never introduces the building blocks of AI; there is an underlying assumption that readers already comprehend how these systems work. Similarly, the paper “Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning” by A. Feder Cooper et al. is aimed at an audience of experts attending the ACM Conference on Fairness, Accountability, and Transparency (FAccT). For that intended audience, Cooper et al.’s paper is effective at providing a robust introduction to the concepts of accountability in AI and to emerging frameworks for navigating accountability in an increasingly complex algorithmic society. As such, the ACM conference paper introduces some actionable potential through its relational accountability framework; however, it remains more disconnected from the real-world experiences of those affected by algorithmic bias.
Each of these approaches is tailored to the audience it is meant to serve, and while each may contain some pitfalls, together they form a sturdy, multi-faceted body of work for learning about AI, for thinking about our role as humans in calling for and understanding accountability, and for applying frameworks of accountability to the algorithmic systems we build and engage with.