Approximating the Truth: A Comparative Study

AI Comics:

These comics are excellent first resources for schoolchildren and for readers encountering the subject for the first time. While this form of reportage may attract a broad audience, it glosses over much of the nuance of data bias, algorithmic development, and coding, and paints a simplified picture of the issues it presents. Complexity and depth are traded away for accessibility to a wider audience.

Ultimately, this creative form of storytelling, in both how the information is gathered and how it is presented, is a wonderful resource for younger learners and for those with lower levels of literacy, whether in the sense of comprehending words on a page or in the sense of digital literacy. These comics could also be printed and distributed as a zine for English-speaking learners.

ProPublica Article:

This example of data journalism is perhaps most effective in the way it compares people labelled ‘low risk’ versus ‘high risk’ within the prison-industrial complex. The low-risk individuals are, without exception, white; the high-risk individuals are exclusively BIPOC. The algorithm rated Black, Indigenous, and People of Color offenders as higher risk than their white counterparts, regardless of the nature of the offense and regardless of whether it was a first offense. Most of the high-risk BIPOC individuals never had trouble with the law again, while their supposedly ‘low-risk’ counterparts went on to commit serious or multiple crimes.

In one example given, two drivers were arrested for DUI. A light-skinned, perhaps Latino man with a history of three previous DUIs, as well as a battery charge, was rated a ‘1’ on the risk scale (lowest risk), while a dark-skinned Black woman with two previous misdemeanors was assigned a ‘high-risk’ score of ‘6,’ even though she was never convicted of another crime. The lighter-skinned man, despite being considered at lower risk of recidivism, went on to commit “domestic violence battery.” The risk here is not something inherent in BIPOC individuals.

The perception of risk stems, rather, from algorithmic bias: BIPOC people have historically been more likely to be arrested, unfairly policed, and heavily surveilled. The algorithm reflects a skewed version of reality, one shaped by unfair policing practices and by a prison-industrial complex that is an extension of Jim Crow, chattel slavery, imperialism, and settler-colonialism. BIPOC people who are arrested will also find that the algorithms deciding whether they are punished in proportion to their crimes are compromised, wired, in effect, against them.
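To make this mechanism concrete, here is a minimal, hypothetical Python sketch. It is not the COMPAS model or ProPublica’s analysis, and every number in it is invented for illustration: two groups reoffend at exactly the same rate, but because one group is policed more heavily, a model trained on arrest records as a proxy for reoffense learns to score that group as higher risk.

```python
# Hypothetical illustration of label bias (not the actual COMPAS model).
# Arrests are used as a stand-in for reoffense; over-policing of one group
# makes that proxy label, and therefore the model's "risk," skewed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# group == 1 marks the over-policed group; the true reoffense rate is 20% for both groups.
group = rng.integers(0, 2, n)
true_reoffense = rng.random(n) < 0.20

# Surveillance bias: a reoffense in the over-policed group is far more likely
# to be recorded as an arrest, so the label the model learns from is distorted.
arrest_prob = np.where(group == 1, 0.9, 0.3)
arrested = true_reoffense & (rng.random(n) < arrest_prob)

# Train on the biased arrest label, with group membership (or any proxy for it) as the feature.
X = group.reshape(-1, 1)
model = LogisticRegression().fit(X, arrested)

risk = model.predict_proba(np.array([[0], [1]]))[:, 1]
print(f"predicted 'risk', less-policed group: {risk[0]:.2f}")  # ~0.06
print(f"predicted 'risk', over-policed group: {risk[1]:.2f}")  # ~0.18
```

In this toy setup the disparity in predicted risk comes entirely from the biased label, not from any difference in underlying behaviour, which is the essay’s point: the algorithm inherits the history of unequal policing encoded in its training data.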

Too many BIPOC people have served disproportionately long sentences for petty crimes, and too many white people have been given token sentences and have gone on to commit worse and more numerous crimes. This is because the prison-industrial complex is broken; it rarely manages to rehabilitate inmates, and it more rarely still prepares people for life after prison.

This article does an excellent job of mobilizing the public. It can be used in advocacy work, to change minds and discriminatory policies, and to go a long way toward restoring justice: for those who have been unfairly policed, for those who have escaped accountability for heinous crimes by virtue of being white, and for the future victims of those who truly will go on to re-offend. It is clear, however, that predictive technologies have a long way to go in this arena.

ACM Conference Paper:

This conference paper highlights issues of accountability and transparency and is intended for an academic audience. It has the most salient implications for researchers and academics working in this field.

In contrast to the other readings, this paper is relatively dense and inaccessible to those outside academia. Its reach is limited by its complexity and its language, which includes jargon that we, as Digital Humanities or Data Visualization scholars, are familiar with, but that readers from other backgrounds might read without fully comprehending.

Although this paper raises many salient points about bias in machine learning and AI, it is relatively obscure and might not interest the general population. It is, however, highly relevant to discourse about holding human and corporate parties accountable for harm caused by algorithms, rather than diffusing that responsibility by blaming the victims, “bugs,” the “computer as scapegoat,” or the problem of “many hands.”

There are many ways of informing the public, and there are more nuances and proverbial shades of grey than might be expected, especially in our digital era. Different audiences call for different outreach methods.