METAssemblage: Meta, Privacy and the Meaning of Faces

Abstract

Revelations about major tech companies' collaboration with government have opened a new dialogue on online monitoring, adding to preexisting concerns over self-presentation and access to personal information. There is a growing awareness of "dividuals" and "data doubles" online, as in the work of scholars such as Andrejevic, Murakami Wood and others. Users of social media are shadowed by digital doppelgängers, and the public is increasingly aware of algorithmic bias in automated systems. The digital reconstruction of users by the company Meta within a "surveillant assemblage," for example, invites comparison with Badley's conceptualizations of body horror. Nowhere is this more apparent than in the Facebook profile picture. This study explores the meaning of facialization on Facebook and the implications of reducing users to faces, drawing on the concept of the surveillant assemblage and on Deleuze and Guattari's writing on faciality. Taken together, "data doubles" and facialization on Facebook arguably perpetuate a kind of violence against users, both symbolically and in terms of personal privacy. The paper considers the historical use of facial recognition software by Meta and Facebook, including its implications for women, as well as Buolamwini's work on "coded bias." What have Meta and Facebook already implemented, and what does the company hope to implement (e.g. DeepFace)? How might this enable Facebook to further augment or transgress users' privacy and digital personae?



Author Information
Mario Rodriguez, American University in the Emirates, United Arab Emirates

Paper Information
Conference: ACERP2023
Stream: Philosophy - Philosophy and Technology

The full paper is not available for this title


Virtual Presentation


