Aligning the Alignment Problem

We have some work to do regarding tech.

In this chapter of The Alignment Problem, Brian Christian discusses the intricacies and nuances of computer-automated categorization systems. These intricacies can entrench bias against minority groups, whether along lines of gender or race. As a result, these automated systems have a long way to go before they achieve a form of categorization that does not negatively affect the lives of minority and majority groups alike where race or gender is involved. Throughout the chapter, Christian shows that biased automated categorization can affect job applications, photo labeling, or even whether a robot one builds will recognize its builder's face. All of this comes down to representation. In Christian's book, representation is more than the examples fed into computer-automated categorization systems; it includes the biases formed through those examples, and even how many examples are used, and of whom, shapes the representation one experiences when encountering such a tool.

For example, Christian highlights one widely publicized incident: a Google Photos user, an African American man named Jacky Alciné, uploaded dozens of photos of himself and a friend, and Google grouped them into an album under the thematic caption "Gorillas" (pg. 24). This caused massive outrage in the pop-culture sphere, and Google took full accountability, changing its system to stop flagging photos of Black Americans and people of African or Sub-Saharan ancestry as "Gorillas." Yet the incident raised the question of how thin Black representation in Google Photos' training data must have been for it to produce such an offensive categorization of two African American men.
While this is one aspect of how Christian describes representation, another aspect in this chapter is the way these systems reflect how we operate as a society with our innate biases. Christian stipulates that "Bias in machine-learning systems is often a direct result of the data on which the systems are trained," which makes it "incredibly important to understand who is represented in those datasets, and to what degree, before using them to train systems that will affect real people" (pg. 31). The examples we feed into these category-based algorithms are thus part of the representation problem, but so are the biases of the systems' builders and the society around them, which perpetuate imbalanced and inadequate representation in digital and online spheres.

Christian also draws a distinction between representation in artificial and human intelligence, pointing to gender bias in automated job screening: had human intelligence been sorting through the same job applications, the outcome might not have been the same. Human intelligence is grounded in lived experience, the biases we are exposed to, and a process of cognitive thinking, so we are at least aware of the complexities of the biases that extend beyond computer-automated technology.

Ultimately, these arguments help us as readers understand what the alignment problem truly is: through machine learning, we construct systems that mirror society's biases, and nobody knows in advance whether the information a given system outputs will be helpful or will carry the risks that can emerge within categorical algorithm systems. AI learns from the data and objectives its developers supply, and so it becomes a technological reflection of society's biases.
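Christian's point that bias flows directly from training data can be made concrete with a small sketch. The example below is hypothetical and not from the book: a toy one-dimensional classifier whose class prior is taken straight from how often each group appears in an imbalanced training set. Group "A" and group "B", the feature values, and the sample counts are all invented for illustration; the point is only that underrepresentation alone, with no malicious intent anywhere in the code, makes the rare group far more likely to be mislabeled.

```python
import math
import random

random.seed(0)

def sample(group):
    # Overlapping one-dimensional "features" for two hypothetical groups.
    return random.gauss(0.0 if group == "A" else 2.0, 1.0)

# Imbalanced training data: 500 examples of group A, only 5 of group B.
train = [("A", sample("A")) for _ in range(500)] + \
        [("B", sample("B")) for _ in range(5)]

means, priors = {}, {}
for label in ("A", "B"):
    xs = [x for g, x in train if g == label]
    means[label] = sum(xs) / len(xs)       # estimated group mean
    priors[label] = len(xs) / len(train)   # group's frequency in the data

def predict(x):
    # Pick the label with the higher log-posterior score; the prior term
    # is where the dataset's imbalance leaks into every prediction.
    def score(label):
        return math.log(priors[label]) - (x - means[label]) ** 2 / 2
    return max(("A", "B"), key=score)

# Balanced test sets reveal very different per-group accuracy.
for group in ("A", "B"):
    test = [sample(group) for _ in range(2000)]
    acc = sum(predict(x) == group for x in test) / len(test)
    print(f"accuracy on group {group}: {acc:.2f}")
```

Running this, the well-represented group is labeled correctly almost every time while the underrepresented group is misclassified most of the time, even though the model "learned" exactly what the data taught it; this is one simple mechanism behind the kind of failure the Google Photos incident illustrates.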