Each of this week’s readings had something stick out to me above the rest of the content in the text. The idea of technochauvinism is, at least to me, very intriguing, and was thus something I kept in mind throughout my exploration of the texts. Admittedly, I read beyond the assigned pages in Chun’s piece, but if nothing else, this broadened my understanding of quite a few topics, technochauvinism included.
First, on the very first page of Wendy Chun’s Pattern Discrimination, she asks, quite simply, “what is recognition?” She then presents a thought-provoking comparison: the difficulty of a police officer hailing a single person on the street versus the difficulty of hailing a number of people equal to the number of bits traveling the internet per second (supposedly 414 trillion bits at the time of writing) (Chun 1). In short, this demonstrates the necessity of pattern recognition, but it also sets a subtle precedent for what’s to come. Indeed, it was this comparison that led me to read beyond the assigned pages, as I saw it when first scrolling through the .pdf of the text. While it may simply be a product of the times, the use of a police officer in the comparison is disturbingly fitting given the subjects presented later in the text, specifically the “discrimination” mentioned in the title.
While Chun doesn’t directly discuss technochauvinism, much of what she writes does concern it, especially when examined with the previous comparison in mind. Essentially, a police officer is a human being, and is thus subject to emotion and human error. One might expect machine pattern recognition to escape this problem, but Chun regularly points out its own issues, for instance in her discussion of hermeneutics. Human beings are prone to bias, and Chun explicitly states that “objective analytics, devoid of any interpretation and thus of any bias, does not exist” (Chun 35). To elaborate: even if a computer were able to make “superior,” unbiased analyses, those analyses would immediately become biased upon being interpreted by a human being. Even if the computer itself is unbiased, it does not matter, as the humans using the computer are biased.
On the subject of technochauvinism, Meredith Broussard’s Artificial Unintelligence: How Computers Misunderstand the World defines the term as the idea that technology is “always the solution” (Broussard 8). Broussard describes technochauvinism as a “flawed assumption” and a “red flag” (Broussard 7), one “often accompanied by fellow-traveler beliefs such as Ayn Randian meritocracy,” and she specifically cites the related, flawed idea that computers are more “objective” because of their ability to reduce information down to relatively basic math (Broussard 8). Broussard’s discussion of the perceived objectivity of computers clearly overlaps with Chun’s. And just as Broussard describes how her friend’s assumptions about technological superiority stuck with her, the term “technochauvinism” has stuck with me since I first heard it in class some time back.
Broussard refutes the idea that computers are better due to some form of “objectivity” by showing, through example, that computers built by error-prone humans (that is, all humans) are subject to the errors and biases of those humans. She writes that a “problem is usually in the machine somewhere” as a result of “poorly designed or tested code”; in other words, the problem with a computer is the fault of the human who designed it (Broussard 8). One can insist that a computer is powerfully “objective” as much as one wants, but even if that were true, the “objective” data a computer produces is rendered moot when viewed by a human being with subjective bias or the capacity for error (again, all humans). And even then, this assumes the “objective” data has not been influenced by computer errors, which are themselves human in origin.
I want to briefly bring up the idea of the Platonic ideal, as I’ve noticed an interesting connection to it. Through a technochauvinist lens, a computer is, for most intents and purposes, something more objective, and thus more powerful, than any human: something whose function a human cannot perfectly replicate, much as a human cannot perfectly reproduce a Platonic ideal. If one claims a computer is a source of perfectly objective information, then when that information is interpreted or reproduced by a human, it stops being perfectly objective, just as humans attempting to replicate a Platonic ideal only produce work further removed from it. This has made me wonder about the connections between computers, objectivity, technochauvinism, derivative content (parody, pastiche, etc.), and rhizomatics as a whole.