Abstract Draft

“Discussing the Biases of Race and Gender in the Machine-Model Design of Smart Virtual Assistants (SVAs)”

By Asma A. Neblett

This paper briefly explores how the vernacular poetics associated with race and gender have been perpetuated in the machine-model design of Smart Virtual Assistants (SVAs) since their introduction in the early 2010s. SVAs are generally described as feminine or gendered as female, but what else is implied about the social profile of major SVAs, such as Apple’s Siri and Amazon’s Alexa, that also connotes race and shapes user satisfaction? I argue that the choices made in the machine-model designs of SVAs such as Siri and Alexa mirror the vernacular biases associated with race and gender[1], and that these choices implicitly shape user experience. Drawing on a Black Feminist analysis informed by feminist linguistics, the paper briefly discusses the text-analysis components of SVA machine models, such as Automated Speech Recognition (ASR)[2], that speak to the intersection of race and gender in SVAs and to how that intersection may influence user experience.


[1] Henderson, Mae. Speaking in Tongues and Dancing Diaspora: Black Women Writing and Performing. Oxford: Oxford University Press, 2014. Print.

[2] Koenecke, Allison, et al. “Racial Disparities in Automated Speech Recognition.” Proceedings of the National Academy of Sciences, vol. 117, no. 14, Apr. 2020, pp. 7684–7689. DOI: 10.1073/pnas.1915768117.
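As a brief technical aside, and strictly as an illustration rather than the paper’s method, the sketch below shows how word error rate (WER), the disparity metric reported by Koenecke et al.[2], can be compared across speaker groups; the transcripts and group labels are hypothetical placeholders.

```python
# Illustrative only: comparing Automated Speech Recognition (ASR)
# word error rates across speaker groups, in the spirit of
# Koenecke et al. (2020). All data below is hypothetical.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical (reference transcript, ASR output) pairs per speaker group.
samples = {
    "group_a": ("turn the hallway lights off", "turn the hallway lights off"),
    "group_b": ("turn the hallway lights off", "turn a hall way light off"),
}

for group, (ref, hyp) in samples.items():
    print(f"{group}: WER = {wer(ref, hyp):.2f}")
```

A systematic gap in WER between speaker groups, such as the disparity Koenecke et al. document between Black and white speakers, is the kind of design-level inequity the paper reads through a Black Feminist lens.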
