This presentation draws on data from my forthcoming book with MIT
Press to demonstrate how heteronormative and cisnormative biases pervade Silicon Valley
culture, become embedded in benchmark datasets and machine learning algorithms, and are
formalized in company policies and labor practices surrounding content moderation. The
presentation begins with an examination of workplace culture at Google, drawing on
Department of Labor investigations, testimonials from former employees, and informal
surveys and discourse analysis conducted by employees during the circulation of James
Damore's infamous 'Google memo'. The presentation then examines bias embedded in
benchmark datasets such as WordNet and ImageNet: WordNet's noun hierarchy supplies
ImageNet's category labels, and ImageNet in turn served as the training dataset for
Google's image-recognition algorithms (such as GoogLeNet). Lastly, the presentation turns
to Facebook's heteronormative and cisnormative content moderation policies and the
outsourced labor practices it uses to institute what Facebook has described as 'human
algorithms' to review content in accordance with these policies. Throughout, I demonstrate
that information about proprietary code can be pieced together from leaked documents,
public records, press releases, open-source code, and benchmark datasets, sources which,
in this instance, reveal a systemic heteronormative and cisnormative bias that is
increasingly being embedded in the infrastructure of the internet.