Barriers to Entry: How Selecting a Medium Can Influence Interpretation

There are many tools that can enable someone to reach an audience. Context, objective, and target demographic all influence the writer (or whoever wishes to share information) in determining which is best suited to their purpose. This week, we looked at three different types of media: a webcomic, a conference paper, and an article from ProPublica. Each of these media can reach certain demographics efficiently in its own way. Unfortunately, each also comes with its own innate drawbacks.

The webcomic “We Are AI,” by Julia Stoyanovich and Falaah Arif Khan, is an effective way to introduce readers to a topic while keeping them engaged. Comics let the reader absorb information through text, which is then reinforced through infographics and pictures. By transmitting information in multiple ways, a comic can effectively help the reader learn and take it in. Another benefit of a comic is that it allows you to express complex topics more informally, using figures of speech like analogies. In “We Are AI,” the authors deftly navigate the complex topics of algorithms and AI. An effective example of this is their comparison of algorithms to baking. By mapping something unfamiliar onto something the reader already knows, the authors enable the reader to naturally understand the consequences of AI, not just how it works. This also increases the readability of the content: it removes industry jargon that could confuse and instead relies on contextual association. With all of these benefits, there are naturally downsides. If someone is unfamiliar with the format of a comic, it can be hard to know which bubble to read first. Comics are also limited in scope. Because information is divided between text and graphics, it is difficult to expand on topics without the comic growing too long. While comics are effective at introducing an audience to new information in an easy-to-digest form, going deeper into detail on certain topics becomes difficult. To overcome this, Stoyanovich and Khan released multiple comics, each tackling a different aspect.

The article “Machine Bias,” published by ProPublica, is another medium that is effective at disseminating information. Outlets like ProPublica often already have a following, and their target audience is usually one that is already seeking information; much of the time, all they need to provide is a catchy title for the topic at hand. However, articles struggle with a problem similar to the one comics do: limited space. To keep the reader from losing interest, articles must remain succinct. Articles can also introduce graphical information through charts and photos, but these are explained only through references in the text or a short caption. Another pitfall of articles is that they are free to cite claims without further explanation. That labor is handed off to the reader, who must travel to the cited source and read more about it there.

The ACM conference paper solves the pitfalls of the other two media at the expense of succinctness, readability, and graphics to keep the reader engaged. This form has the highest barrier to entry, but it also carries the highest density of information. The authors, A. Feder Cooper et al., are free to use industry jargon, reference other sources, and expand on topics as they see fit. They are not limited in size and space, although it helps to remain mindful of length.

Who you want to reach and how much information you’d like to deliver are influential factors anyone must take into consideration when disseminating information. Each medium has its benefits, which consequently means it also has drawbacks.

Tailoring Communications About AI to Different Audiences

This week’s readings offered a variety of perspectives on the harms and complications of machine learning and AI. The ProPublica article “Machine Bias,” by Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, took both a quantitative and a qualitative approach to understanding the potential bias inherent in risk assessment algorithms. Their findings suggest that risk assessment scores, commonly used in courtrooms around the country to predict someone’s likelihood of recidivism, are more likely to label people of color as high-risk offenders compared to their white counterparts. Their article couples data analysis with interview-based responses from individuals directly affected by this system. With visuals embedded throughout, this article seems to have the most general audience in mind: the language is very accessible, and the images allow the reader to connect with the article in a more humanistic and emotional way. The authors also break down their data analysis in a way that translates to a broad array of individuals, which potentially makes its impact more far-reaching than that of the other two publications we read this week.

The paper presented at the 2022 ACM Conference on Fairness, Accountability, and Transparency, “Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning,” definitely read like an academic paper. I enjoyed reading this piece because it talked about how we can operationalize accountability, and the authors drew on several scholars and theories from other fields to support their claims. The interdisciplinary nature of this paper exemplified how I think we should be discussing public interest technology issues, since they often don’t exist in isolation and require analysis from social, political, economic, and technical perspectives. This paper did take me quite a while to read, though, and I feel its impact is probably limited to academic and professional audiences.

Lastly, the We are AI comics by Julia Stoyanovich and Falaah Arif Khan offered a creative and refreshing way to educate the public about AI. Adding colorful images and relatable examples to aid in explaining how AI works and its potential implications is a great way to make this information accessible and useful to the broader public. The comics are something I would feel comfortable sharing with many people, regardless of their age or education level. The note at the beginning of each comic was also helpful for letting people know, in a simple and direct way, how these materials can be shared and how to properly credit the authors. We talked about the importance of acknowledgment and giving proper credit when it’s due, especially in the tech space, so I think including that page at the front of their comics actively combats that issue and makes it easy for people to credit their creativity.

I think each of these publications is successful in communicating the complexities of machine learning and AI in a different way. The ProPublica article was empirically driven, but humanistic. The ACM conference paper was well-articulated and well-supported, but is likely accessible only to those who are already familiar with the literature. The AI comics were fun, creative, and simple, but it is difficult to imagine how people would find them if they weren’t already looking at Data, Responsibly’s website.

The Value of Different Approaches to Discussing Accountability in AI

ProPublica’s “Machine Bias” article, ACM’s “Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning” conference paper, and Julia Stoyanovich and Falaah Arif Khan’s “We are AI” comic series each introduce radically different approaches to discussing Artificial Intelligence from a Public Technology perspective.

The “We are AI” comics depict the foundational principles of AI in a visual medium that helps elucidate the analogies being used. Stoyanovich and Khan’s comics serve as accessible resources for teaching the basics of AI. Given the prevalence of algorithmic systems in our daily lives, the series brings into focus some of the systems we interact with, both visibly and invisibly. While teaching some of the key terms related to algorithms, Stoyanovich and Khan also go further, raising questions about algorithmic morality and our role in engaging critically with these systems. I believe the comic approach of “We are AI” is powerful in lowering the educational barrier to learning about AI, stripping away the mystery and perceived objectivity often present in discussions of AI. Of course, given the comic medium, Stoyanovich and Khan are not able to provide an in-depth investigation into the examples they discuss – and while this is not one of their goals, I think it can be considered a potential pitfall. As someone with an educational background in computer science, I see “We are AI” as a great resource that helped me adjust my own understanding of AI and introduced relevant concepts related to morality that often get excluded in computer-science-focused educational spaces.

For more in-depth investigations into the huge impact of algorithmic systems on people’s lives, ProPublica’s article serves as an alternative point of entry that calls for accountability in the use of risk assessments throughout the criminal justice system. In the ProPublica article, written by Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, the authors provide an extensive examination currently lacking from regulatory bodies such as the U.S. Sentencing Commission. ProPublica’s approach is compelling in showing what accountability can look like, rooted in real-life examples of bias observed in Northpointe’s COMPAS risk assessment software. One of the pitfalls of ProPublica’s article is the lack of an introduction to the building blocks of AI – there is an underlying assumption of AI comprehension. Similarly, the paper “Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning” by A. Feder Cooper et al. is aimed at an audience of experts attending the ACM Conference on Fairness, Accountability, and Transparency. For that intended audience, Cooper et al.’s paper is effective at providing a robust introduction to the concepts of accountability in AI and to newly developing frameworks for navigating accountability in an increasingly complex algorithmic society. As such, the ACM conference paper introduces some actionable potential through its relational accountability framework – however, it remains somewhat more disconnected from real-world experiences of the impact of algorithmic bias.

Each of these approaches is tailored to the audience it is meant to serve – and while each may contain some pitfalls, together they serve as a sturdy, multi-faceted body of work for learning about AI, thinking about our role as humans in calling for and understanding what accountability can look like, and applying frameworks of accountability to the algorithmic systems we build and engage with.

Stories, Scholarship, and Sketches: Navigating Different Approaches to AI Ethics

I will try to take a personal approach to analyze each of the three major readings we did this week and connect it with the questions we have been asked to think about for the blog post.

Imagine that you are commuting to work in the morning on the subway, searching for some articles to read on the topic of bias in algorithms. Maybe you work in public policy or journalism, or you’re just somebody really into AI ethics and advocacy. You come across the ProPublica article on your phone. The article paints a clear picture of real-world problems caused by racial bias in algorithms. It might make you think of a friend who once faced some form of discrimination, and you feel a strong connection to the issue. Or maybe, as a policymaker, you think of your constituents and how best to serve the public cause. The article feels close to the heart, somewhat like a call to action. Yet you wonder if it’s a bit too straightforward and if there might be more to the story.

A few days later, while in a library, you stumble upon the ACM conference paper as you research maybe for a potential research paper. It feels like a heavy paper, filled with academic jargon. You recall that one philosophy class you took in college. The paper dives deep, reminding you of those intense classroom debates on ethics. It’s thorough and enlightening, but you can’t help feeling a bit lost in the complex terms. You wonder how many people outside of academia would connect with this.

Then, one rather boring evening, as you are doomed to scrolling social media, you learn about the AI comics, and since you are already bored with nothing else to do, you decide to see what they are about. The visuals instantly grab your attention. You’re reminded of those educational comics you loved as a kid. This one explains algorithmic ideas in a fun, engaging way. As you flip through the pages, the colorful illustrations simplify tricky concepts, making them feel approachable. Yet there’s a nagging feeling that some depth might be missing in favor of appealing visuals.

Each of these sources, with their respective unique style, feels like a different conversation you might have with friends: one urgent and rooted in reality, the second intellectually stimulating, and the third creatively engaging.

One challenge might be thinking about how these approaches can be brought together to build a more well-rounded conversation about the ethics of algorithms, or even turned into a step-by-step path for somebody very interested in the topic: start with an easy-to-understand, engaging introduction, slowly move into the more academic and thought-provoking material, and finally round it out with analysis, laying out plans for future development and providing insights for public policymakers and advocacy groups on how to ensure that the public interest is best served.

Sharing Information and Engaging Audiences, Evaluating Approaches

We Are AI Comics  

The comic’s goal is to introduce readers to algorithmic terms, processes, and potential harms in human use. These comics show that AI is not separate from people’s lives and raise awareness about its implications. For the comic’s design, drawings and text are used to attract a larger audience. The examples illustrated are common enough to most people and are short enough to be read in the spare time the average person has available. Of the three texts assigned, the comic is the most accessible. The language, and how it is presented, is not overly academic and is rolled out over time so as not to overwhelm the reader. The platform of choice is digital, which is often the most accessible form: it can be shared via a link, or the entire PDF can be sent as a small file. The pitfalls of this approach include oversimplifying the technology’s scope and underestimating its potential harm. The comic is hosted on a portfolio website rather than somewhere one would seek out comics, which decreases its discoverability. This approach would need to rely heavily on readers sharing it to reach a broader audience.

“Machine Bias.”    

This article exposes the harms of using predictive, data-driven algorithms in criminal sentencing. It is intended to raise awareness of this issue and garner empathy from the public. It can also be a medium through which to hold institutions accountable for their use of problematic software. The article’s design is standard for modern journalism, with photography and data charts woven into the text. The article is accessible: the language is not overly complex, combining storytelling and information. You can read it on both the publication’s website and news-aggregator platforms, and the article format has been made easily shareable from its original platform and over social media. Its discoverability is much higher than that of the comic or the paper. A pitfall of this approach is that the news platform could be seen as not mainstream enough by certain portions of the population, and the use of personal narrative in the reporting could strike some readers as less credible. Overall, the article is engaging, but it is specific about the algorithmic harm it addresses and doesn’t touch on other harms as the other examples this week do.

“Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning.”  

This paper’s purpose is academic circulation. It seeks to reassess past scholarly statements on accountability in a computer-driven world and modernize those theories for today’s algorithmic society. It is designed as a standard academic paper, organized with clear sectioning and citations. In terms of accessibility, this approach is narrow and targeted at a specific audience: the language is academic, and the platform on which the paper can be accessed is one the average citizen will not be familiar with. Public discoverability is also low. A pitfall here is language too academically inclined for some members of the public, but as they are not the intended audience, this is not much of an issue.

Differing types of literature to reach different audiences

Different types of articles were read and compared for the approaches they take to communicate information about technology. The ProPublica article “Machine Bias” examines Northpointe’s classification system, an algorithm trained to score defendants on how likely they are to commit crimes in the future, which was adopted heavily by the court system to determine the length of sentencing and whether parole was an option. The article exposes the biases found in the algorithm. The audience for this type of article can be vast. Because of the manner in which it is written, most adults can consume the information, but it is also written to uncover truths in the system and is therefore meant to educate and create change in society. Several pictures and quotes from defendants are included in this article, and protecting their identities would need to be seriously considered. However, because of the ability to share more information and hear first-hand accounts from the affected population, the story becomes more real, changing the defendants from numbers into real people being affected by the system.

This is very different from the audience intended to consume “Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning.” This is a dense piece meant for those who have some prior knowledge of the subject, like scholars. The downside of a paper like this is that the style in which its information is presented makes it difficult to reach a larger audience. However, it provides in-depth, important information that others can cite in their own work.

Again, a very different piece of work is seen in the “We are AI” comics. The use of pictures and short snippets of text to convey messages makes the information easy to consume, especially if one is not a subject expert. Comparing AI with common knowledge and using metaphors that many can easily understand or know from experience, like baking, helps break down complex topics. I think this could reach young adults of high school age as well as an older population, like my grandparents. Because it is short, it may lack in-depth information; however, considering who the audience is, that may not be necessary. Overall, this was my favorite reading, and I found it very helpful in understanding AI and its pitfalls. I thought it was so unique and powerful that I shared it with my work colleagues.

Three Approaches to Address Bias in Machine Learning: A Comparative Analysis

ProPublica’s “Machine Bias” Article:

ProPublica’s article, authored by Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, takes a journalistic approach to exposing racial bias in risk assessment software used in criminal sentencing. The intended audience primarily includes the public, policymakers, and advocacy groups. Its value lies in its ability to raise public awareness and mobilize support for change. The article’s strength is its accessibility: it uses real-life stories and concrete examples to illustrate the bias problem in machine learning, making it relatable to a broad readership. By addressing a contentious issue through storytelling, it elicits an emotional response and motivates readers to act against bias. However, one potential pitfall is oversimplification. To make the narrative engaging, nuances of algorithmic decision-making may be lost, leaving readers with a less detailed understanding of the problem.

ACM Conference Paper “Accountability in an Algorithmic Society”:

The ACM conference paper, authored by researchers in computer science and ethics, is meant for a scholarly and technical audience. It goes deep into the details of how computer programs can sometimes be unfair and how we can make sure they’re held responsible for their actions. The strength of this method is in how thorough and precise it is, giving us a full grasp of the issue and its possible answers. One benefit is that the paper can add to the academic conversation about bias in machine learning. It lays out a plan for holding systems accountable and gives us practical solutions backed by research. However, its complexity and specialized terminology can make it difficult for regular people to understand. A potential pitfall is the limited reach; it might fail to engage a broader audience or policymakers who lack a deep technical understanding.

AI Comics by Stoyanovich and Khan:

Julia Stoyanovich and Falaah Arif Khan take an innovative approach by using comics to educate people about AI and its biases. The comics target a diverse audience, from students to professionals, making them a valuable tool for raising awareness and promoting dialogue. The comics’ value lies in their ability to simplify complex concepts, using engaging visuals and straightforward language to explain AI and its ethical challenges. The comics are accessible and inclusive, bridging the knowledge gap between experts and the general public. They are well-suited for educational settings and community outreach, helping to foster a shared understanding of AI’s implications. However, one potential pitfall is the risk of oversimplification, as complex issues may be distilled into overly simplistic narratives. Additionally, they may not have the depth required for in-depth research or policy discussions.

In conclusion, each approach has its own unique value and potential pitfalls in addressing bias in machine learning. The ProPublica article effectively raises awareness and mobilizes action, but may oversimplify the issue. The ACM conference paper offers a rigorous scholarly perspective, but may be too technical for some audiences. Stoyanovich and Khan’s AI comics bridge the gap between these approaches by providing accessible education, but may lack the depth required for advanced discussions. In the end, the most effective way to address bias in machine learning could be a blend of these three approaches. This would bring in a diverse group of people and stakeholders interested in the matter.

Exploring Varied Approaches to AI Ethics: From Investigative Journalism to Academic Discourse and Visual Storytelling

In the realm of AI ethics, the diverse approaches taken by different sources, such as the ProPublica article, the ACM conference paper, and the AI Comics, are designed to engage their intended audiences in distinct ways. Each approach offers its own value and potential pitfalls, depending on its objectives and target readership.

ProPublica’s Investigative Journalism:

The ProPublica article, “Machine Bias,” embodies the essence of investigative journalism. It resonates with me as a woman living in New York, as it adeptly sheds light on real-world issues through meticulous research and reporting. ProPublica addresses a wide audience, including policymakers, activists, and the general public, making it a powerful tool for advocating change and social justice. Its value lies in its ability to uncover and expose the real consequences of AI bias in criminal sentencing, pushing for accountability. However, one potential pitfall is that its complexity can be a challenge for some readers, possibly overwhelming or alienating certain segments of the audience. Additionally, the emotional tone may sometimes overshadow objective reporting.

ACM Conference Paper on Accountability:

As a graduate student, I can appreciate the academic rigor of the ACM conference paper titled “Accountability in an Algorithmic Society.” This approach is tailor-made for scholars, researchers, and professionals in the tech and AI industries. It provides a platform for in-depth discussions on algorithmic accountability, pushing the boundaries of AI ethics research. The paper’s value lies in its contribution to the academic discourse, deepening our understanding of the subject. However, its technical nature can be a barrier for non-experts and may limit its accessibility and applicability beyond academic circles. It’s crucial to ensure that this valuable insight reaches a broader audience.

AI Comics for Public Awareness:

The AI Comics series takes a creative and illustrative approach to AI ethics, making the subject accessible and engaging for a wide audience, including students, educators, and the general public. The comics take complex concepts and simplify them through visual storytelling, fostering better understanding and awareness of AI-related issues. While accessible, a potential pitfall is the comics’ simplicity, which may lack the depth required for certain discussions of AI ethics. It is essential to strike a balance so that oversimplification does not lead to misunderstandings or trivialization of complex ethical issues.

In conclusion, the diverse approaches to AI ethics reflect the multifaceted nature of the field. Just as the many neighborhoods of New York City contribute to its rich tapestry, these approaches offer distinct perspectives and values. ProPublica’s investigative journalism, the ACM paper’s academic rigor, and the AI Comics’ accessibility serve as essential elements in this evolving conversation.

I believe that these approaches can complement each other, much like the boroughs of the city working in harmony. By recognizing their strengths and potential pitfalls, we can collectively work toward a more comprehensive and inclusive dialogue on AI ethics. In doing so, we aim for a future where AI technology aligns with our values and principles, ensuring a responsible and equitable AI landscape.

Approximating the Truth: A Comparative Study

AI Comics:

These comics are excellent first resources for schoolchildren and those first encountering the subject material. While this form of reportage might attract a broad audience, it glosses over many of the nuances of data bias, algorithmic development, and coding, and paints a simplistic picture of the situation surrounding the presented issues. Complexity and depth are sacrificed for the sake of accessibility to a wider audience.

Ultimately, this creative form of storytelling, and how the information is gathered and presented, is a wonderful resource for younger learners and those with lower levels of literacy, both in the sense of comprehension of words on a page, and in the form of digital literacy. These comics could be also printed and distributed as a zine for English-speaking learners.

ProPublica Article:

This example of data journalism is perhaps most effective in the way it compares various people who are considered ‘low risk’ versus ‘high risk’ in the prison-industrial complex. The low-risk individuals are, without exception, white; the high-risk individuals are exclusively BIPOC. The algorithm rated Black, Indigenous, and People of Color offenders as higher risk than their white counterparts, regardless of the nature of the offense and whether or not it was a first offense. Most of the high-risk BIPOC individuals never went on to experience trouble with the law again, while their supposedly ‘low risk’ counterparts went on to commit serious or multiple crimes.

In one example given, two DUI drivers were arrested. A light-skinned, perhaps Latino man with a history of three previous DUIs, as well as a battery charge, was rated a ‘1’ on the risk scale (lowest risk), while a dark-skinned Black woman with two previous misdemeanors was assigned a ‘high risk’ score of ‘6’ and was never convicted of another crime. The lighter-skinned man went on to commit “domestic violence battery,” despite being considered at lower risk of recidivism. The risk here is not something inherent in BIPOC individuals.

The perception of risk, rather, lies in algorithmic bias, as BIPOC people were more likely to be arrested, unfairly policed, and heavily surveilled in the past. The algorithm shows a skewed version of reality, one influenced by unfair policing practices as well as a prison-industrial complex that is an extension of Jim Crow, chattel slavery, imperialism, and settler-colonialism. BIPOC people who find themselves arrested will also find that the algorithms which decide whether they will be punished proportionately to their crimes are compromised and quite literally wired against them.

Too many BIPOC people have served disproportionately long sentences for small, petty crimes, and too many white people have been given token sentences only to go on to commit worse and more numerous crimes. This is because the prison-industrial complex is broken; it is rare that it manages to rehabilitate inmates, and rarer still that it prepares people for life after prison.

This article does an excellent job of potentially mobilizing the public. It can be used in advocacy work to change minds and discriminatory policies, and it goes a long way toward restoring justice – to those who have been unfairly policed, to those who have gotten away with heinous crimes by virtue of being white, and to the future victims of those who truly will go on to re-offend – although it is clear that predictive technologies have a long way to go in this arena.

ACM Conference Paper:

This academic conference paper highlights issues of accountability and transparency and is intended for an audience in academia or a university setting. It has the most salient implications for researchers and academics working in this field.

In contrast to the other readings, this paper is relatively dense and inaccessible to those outside of academia. Its reach is limited by its complexity and its language – jargon that we as Digital Humanities or Data Visualization scholars are familiar with, but that others with dissimilar backgrounds might read without fully comprehending.

Although this paper raises many salient points about bias in machine learning and AI, it is relatively obscure and might not be of interest to the general population, although it is incredibly relevant in discourse about holding human and corporate parties accountable for harm caused by algorithms, instead of diffusing that blame by blaming the victims, “bugs,” the “computer as scapegoat,” or citing the problem of “many hands.”

There are many ways of informing the public, and there is more nuance and proverbial shades of grey than might otherwise be expected, especially in our digital era. Different audiences call for different outreach methods.

Uncovering Racial Biases, Demystifying AI, Navigating Accountability Challenges in Computerization

ProPublica employed an algorithmic audit to unveil inherent racial biases within Northpointe’s risk assessment software. Independently testing the system against a diverse set of defendants statistically exposed its bias against Black individuals. Using a straightforward bar graph, the authors vividly illustrated how the system favored white defendants with lower risk ratings compared to Black defendants. The article’s intended audience comprises Northpointe’s software developers, policymakers, programmers, Black activists, social justice advocates, law enforcement, researchers, the judiciary, academia, the media, and the broader public. However, the limitation of this approach lies in raising awareness alone; rectifying these biases demands purposeful stakeholder engagement and consistent effort.

In the ACM conference paper “Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning,” the authors offer a comprehensive framework for understanding the intricacies of enforcing accountability in the digital age. Drawing on Nissenbaum’s moral philosophy, political theory, and the social sciences, they examine the four accountability barriers Nissenbaum proposed: “many hands,” “bugs,” the “computer as scapegoat,” and “ownership without liability.” They effectively elucidate how these barriers obscure accountability, primarily concerning who is accountable, for what, to whom, and under what circumstances. The authors also propose ways of “weakening the barriers,” including developing rigorous standards of care and defining acceptable levels of adverse outcomes. The paper assumes that accountability is a universal good and therefore spends no effort on convincing actors of the need for it. It also fails to consider accountability from an economic standpoint, which would have greater appeal to the creators of these computer systems, who are mainly motivated by economic gain.

The AI comics excel at simplifying complex AI concepts, employing everyday analogies to enhance reader comprehension. They serve as valuable educational tools, bridging the gap between non-technical audiences and experts looking to convey intricate technical phenomena to the general public. The use of images fosters immersion and better comprehension of the concepts. Nonetheless, one drawback is the oversimplification of intricate ideas. While beneficial for educating beginners, the comics may fall short in conveying the nuances and complexities of AI and machine learning to readers seeking a more in-depth understanding.