Affect Technology and Its Effects
Emotion-Cognitive AI
The chapter "Affect" from Kate Crawford's Atlas of AI discusses the historical and practical significance of emotion-cognitive artificial intelligence, which tries to understand and respond to human emotions but carries with it "both ethical concerns and scientific doubts" (Crawford 152). The concept of "affective computing," in which computers can detect, analyze, grasp, and emulate human emotions, is discussed heavily. Crawford explains that this capability of machines has the power to change how we as humans understand and depict emotion: "It promised to remove the difficult work of representing interior lives away from the purview of artists and novelists and bring it under the umbrella of a rational, knowable, and measurable rubric suitable to laboratories, corporations, and governments" (Crawford 167). This rhetoric implies that affective computing is a new way of grasping emotion through science instead of the arts.
To further understand the historical significance of affective computation, one must understand how the text defines affect. Crawford defines affect as "the primary system governing human motivation and behavior (...) involving positive and negative feelings" (Crawford 157). This understanding of affect draws on Silvan Tomkins's research in Affect Imagery Consciousness, and it helps Crawford establish a baseline for why emotion-cognitive artificial intelligence has become what it is today. Emotion-cognitive artificial intelligence became an industry in large part due to the realization that since humans "are incapable of truly detecting what we are feeling, then perhaps AI systems can do it for us?" (Crawford 157). Thus, the first foundational approaches to emotion-cognitive AI began.
Additionally, the first approaches to emotion-cognitive AI drew on the collection of human data and on historical references to physiognomy, a practice stemming from the ancient Greek world premised on "studying a person's facial features for indications of their character" (Crawford 161). This is important for understanding some of the negative aspects of emotion-cognitive AI today, as it shows early historical precedents for racial profiling. Paul Ekman is one of the forefathers of measuring facial language, emotion, and expression; with Wallace Friesen he published the Facial Action Coding System in 1978, an extensive measuring tool that identified "roughly forty distinct muscular contractions on the face and called the basic components of each facial expression" (Crawford 165). After seeing Ekman's presentation on recognizing facial language, expressions, and emotions, Igor Aleksander planted in Ekman's mind the idea of using computers to automate the measurement he was seeking. As a result, facial recognition emerged, bringing scientific elements such as objectivity into the analysis of human emotions, expressions, and facial language.
Moreover, Crawford identifies some negatives of affective computing. In discussing them, she deliberates on the biases present in the data collected for emotion-cognitive artificial intelligence, introduced through the human labeling and collection of these data sets, which inevitably produce unethical, unfair, and oftentimes racially profiled outputs. To corroborate this: "As we saw in chapter 3, the construction of the 'average' from unrepresentative training data is epistemologically suspect from the outset, with clear racial biases" (Crawford 177). This reflects societal racial bias and unintentionally embeds discrimination in the AI output process. The text strongly emphasizes bias to ensure that everyone knows there are dangers to affective computing, and that it is not as simple as "[making] high-stakes decisions in health, education, and criminal justice," as mentioned in the introduction (Crawford 8). Ultimately, Crawford desires nothing more than transparency within the field of emotion-cognitive AI and wants people to take accountability for affective computing technologies, both good and bad. Throughout the text, Crawford consistently argues that developers must engage in ethical deliberation to minimize risk and harness the revolutionary potential of affective computing while diminishing its negative consequences.