Welcome to Dilemmas of Meaning, a journal at the intersection of philosophy, culture, and technology. This is the introduction to the Identity Series. While it stands alone, it also serves as a jumping-off point for future essays. It asks the central question: How do technology, social media, and the internet impact who we are and who we become? This piece foregrounds existentialist views of identity and considers medical advertisements, niche online communities, and Andrew Tate. Future entries will range from online queer communities and hate speech to linguistics and questions of virtual sovereignty.
Who are you, or rather, who does the algorithm think you are? Since so many of our experiences occur online, whether simply browsing or posting on social media sites, much of the information we encounter is provided by an algorithm. What the algorithm presents to you is based on who you are: lifestyle, interests, perhaps even race, gender, and nationality. However, given that we form our identities at least in part (if not wholly) from the information we receive, it is uncertain how much of who we are is constituted by the algorithm itself.1 Indeed, there is the question of what role these algorithms play in determining what we find meaningful. In what follows, this essay sets the stage for the polemic that is this series on identity.
What is Identity?
Identity can be expressed as A is A. That is, there is an apparent unity between who one is and who one is becoming. Identity carries within it the notion of sameness. Leaving aside the technicalities of metaphysics, whether Heideggerian or other variants, and their many implications, identity is a representation of being. Identity is an expression of one’s existence that is constituted by that existence; this is the foundation of existentialism. One is not a fixed being sutured to society as a specific form with an inborn meaning. Rather, one is simply who they make themselves to be, who they become. Thus, identity also retains the possibility of its negation, for one can freely question oneself and change who one is. It is in this line of thinking that the classic phrase ‘existence precedes essence’ is stated.
One is who they are, not who they are told to be by culture or society. Crucially, while the process of identity forming occurs within society, one is not bound to identify as society demands. Identity is to be decided by the person themselves through their choices and actions. Thus, what one does is concomitant with identity formation. If one uses social media, then, as will be argued, the suggestions of the algorithm infiltrate this process whether one wants its input or not. The concept of identity employed here thus emerges from the existential call: it calls for the freedom to become who one wants to be, and to be who one is. This concept is therefore anti-essentialist; and, to use Stuart Hall’s words,
it accepts that identities are never unified and, in late modern times, increasingly fragmented and fractured; never singular but multiply constructed across different, often intersecting and antagonistic, discourses, practices and positions […] and are constantly in the process of change and transformation.2
In a liberal society, identity is meant to be a mark of freedom. One can pursue their interests and beliefs to the fullest extent to form their sense of self and imbue their world with meaning. However, in modernity, identity forming became an obligation. With the freedom to define oneself comes the requirement to do so. Zygmunt Bauman writes that “modernity replaces the determination of social standing with a compulsive and obligatory self-determination.”3 What was once a given is now a task. As this new freedom to become what one wills comes with a requisite responsibility, some may choose to eschew it. Indeed, as “needing to become what one is is the feature of modern living,”4 so too is inventing technologies that assuage the difficulty of these tasks. In such a way, the solids of prescribed identity that were melted away in capitalist modernity have re-solidified at the behest of the techno-capitalistic algorithm.
Existentially, identity is also a call for freedom. As identity is always a process in progress, never complete, one is enjoined to transcend one’s facticity: that which a third party can establish about you, such as class, height, age, gender, and so on. One is to be malleable and to define oneself as desired regardless of where one finds oneself. While one can choose to adopt these facts as definitional of who one is, it would be erroneous to do so. Not because these aspects are fake, but because one’s authentic being is defined from within. Classically, Sartre terms it ‘bad faith’ when one denies oneself the freedom to find one’s authentic self; it is “a lie to oneself.”5 For example, Sartre describes a café waiter who knows himself only as such, claims he is unable to be anything else, and thinks this is his insuperable destiny. Yet, as is argued here, the cunning force of the algorithm functions through an indirect relinquishing of this freedom. Although the process of constructing your identity occurs within, yet not by, society, the algorithm presents you with information from society as if it represented your choice. Do you want to work in a café, or do you just keep seeing posts romanticizing such a job? This is not to claim that the algorithm forces people to live a certain way, nor is it a fantastical slippery slope toward a world without real agency. Rather, it is a diagnostic questioning of whether people are becoming who they choose to be instead of who they are told to be, either directly by society or others, or indirectly by the algorithm’s precepts.
What’s the Deal with Algorithms?
The problem for Sartre arises when one is reduced to an ostensible identity. This reduction is the very process of the algorithm. Indeed, the algorithm presupposes who you are based on a data-constructed image of your person. In such a way, does the algorithm not place one’s essence prior to their existence? Does it not take your existence for granted through its many attempts to render your essence from various datapoints? One’s facticity begins to determine their being. The algorithm relies on existing datasets to decide what to show someone. As more information accrues and is presented to someone on the basis of their facticity, that facticity forms the basis of their identity. The algorithm thereby fits people into a pre-determined context of beliefs and interests that shapes who they then become.
Yet identity is “an unfulfilled project,”6 as Bauman writes. Identity is a vessel with a requisite ullage; a never-ending project, it is always undergoing revision as it encounters new things. If identity is this vessel, filling up with information as we go through life, it seems important to be critical of how much of that space is taken up by algorithmically suggested content. Does there exist a point at which one’s identity becomes formed by it? As the algorithm suggests content based on an identity it had a part in forming, our identities are left firmly in its grip. What happens when the fundamental human process of becoming is upended by technology, or when one’s becoming is guided by data-bound preconceptions?
The crux is that your identity is chosen by you. Yet the algorithm’s controlled dissemination of information encroaches on your choices. What you choose as your interests is up to you; it is your prerogative. But the way the algorithm suggests content to you makes it questionable whether those choices were freely decided by you. Indeed, as Hall writes, identity, as becoming, is about “how we have been represented and how that bears on how we might represent ourselves.”7 Therefore, and this is the undergirding philosophy of this series, if one’s existence constitutes their identity and is constituted within a representation, then the algorithm, by reproducing your existence from a datafied representation (datafication), penetrates and becomes part of the identity forming process, thus limiting how much of your identity is actually formed by you. This claim is both descriptive and normative: we hold that the algorithm pervades identity formation, and that you should care, because what you choose to become should be your free and unadulterated choice.
While the exact workings of these social media algorithms are undisclosed, what is problematized here is the impact of the algorithm: the what rather than the how. For example, if an algorithm infers that, because you are a young male from the UK, you like football (soccer), and continuously feeds you football content while you scroll Instagram or watch YouTube videos, how likely is it that you will adopt that interest yourself and integrate it into your identity? Of course, such conversion is not guaranteed. Yet even if only some people are converted into football enjoyers, the algorithm’s impact on our identity formation remains striking.
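For readers who prefer a concrete picture, the presumption at work can be caricatured in a few lines of code. This is a deliberately naive sketch, not any platform’s actual system; the demographic table and topic names are invented for illustration, and real recommenders are vastly more complex.

```python
import random

# Invented lookup table: demographic facts (facticity) mapped straight to
# presumed interests. No real platform is this crude, but the structure is
# the point: facts about you in, a feed 'for you' out.
DEMOGRAPHIC_PRIORS = {
    ("male", "UK", "18-24"): ["football", "gym", "gaming"],
    ("female", "US", "25-34"): ["wellness", "fashion", "true crime"],
}

def presumed_feed(gender, country, age_band, n_posts=5):
    """Build a feed entirely from demographic facts the user never chose."""
    interests = DEMOGRAPHIC_PRIORS.get((gender, country, age_band), ["viral"])
    # Every post topic is drawn from presumed, not expressed, interests.
    return [random.choice(interests) for _ in range(n_posts)]

feed = presumed_feed("male", "UK", "18-24")
```

The sketch is structural: nothing the user has chosen enters the function, yet a feed confidently tailored for them comes out.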
1001 Consequences
Algorithms control the conversation. They connect us through our identities and designate us as part of various communities. This aspect, admittedly, can be positive; social identities are meant to be how we find others and form connections. They give us meaning through a sense of belonging to a group: I know who I am as a member. When the ‘For You’ page shows you dog videos, leftist memes, or make-up tutorials, it shows you what is for you, where you can find people like you. In more niche algorithmic circles, it generates inside jokes, vocabulary, and ideas you would only know if you spent time on ‘gay TikTok’ or ‘alt-right YouTube,’ for example. Through this, algorithms also give us a shared language and culture. Indeed, the more time we spend on these platforms, the more our language is affected; the language in posts, whether slang, ideas, or symbols, becomes more like that which the algorithm promotes to virality.8 We all know the same memes proliferated on TikTok, for example, because as something becomes popular the algorithm shows it to exponentially more people. It becomes viral. However, this can also perpetuate the conversation of the elite by deciding what information is popular at the mainstream level and sustaining echo chambers at the niche level. The other side of this coin is shadowbanning: when the algorithm decides that some people, ideas, or content will not be promoted. It tacitly decides, as Olúfẹ́mi O. Táíwò writes, “which topics are on the conversational agenda.”9 Therefore, algorithms control our language by determining what content is allowed to become popular, underscoring their impact on identity forming.
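The claim that popular content gets shown to exponentially more people can be illustrated with a toy rich-get-richer simulation. The starting numbers, the promotion factor, and the superlinear exponent are all invented for illustration; actual ranking systems are undisclosed and far more intricate.

```python
def simulate_virality(initial_views, rounds=10, boost=1.5):
    """Toy 'rich get richer' model: each round the platform allocates new
    exposure superlinearly in a post's current share of attention."""
    views = dict(initial_views)
    for _ in range(rounds):
        total = sum(views.values())
        snapshot = dict(views)  # freeze this round's popularity figures
        for post, v in snapshot.items():
            share = v / total
            # An exponent greater than 1 means already-popular posts gain
            # a growing share: promotion begets more promotion.
            views[post] += int(boost * (share ** 1.5) * 100)
    return views

result = simulate_virality({"meme_a": 60, "meme_b": 30, "meme_c": 10})
```

Run on three posts starting at 60, 30, and 10 views, the most popular post’s share of total attention only grows round after round, which is the virality dynamic in miniature.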
While this can have more innocuous consequences, as for the algorithmically constituted football fan, it can also serve more pernicious ends. As Amy Gaeta writes, an algorithm might make you think you have a mental health problem or poor gut health.10 It may trigger an eating disorder or body dysmorphia through prescribed beauty standards. Gaeta’s concern is how the algorithm both medicalizes people and perpetuates biases from a data-presumed identity. Indeed, her point is “to emphasize how social media algorithms can diffuse and spread the clinical gaze [… which] can construct categories of meaning that have real-world impacts on users’ health decisions and perspectives.”11 By targeting users with content suggesting medications, treatments, or awareness of different conditions, the algorithm, it seems, implicitly diagnoses them based on their data profiles.12 Indeed, by pathologizing identity, the algorithm gives us meaning and self-knowledge through medicalized language.
The algorithm alters one’s understanding of oneself by presenting a view of them through the medical gaze. It is as if one’s screen becomes a doctor’s mirror. The medical content one sees reifies a schism between one’s real identity and one’s algorithmically constructed one. Setting aside the many ways this could perpetuate ableism and body-image problems, targeting users with medicalized content could lead people to believe they have maladies they do not or, conversely, make someone with a disability question their diagnosis and condition. As Gaeta mentions, this can occur when a patient meets with a real doctor or healthcare professional as well; in that case, however, they likely sought out the information and are at least receiving it from someone qualified who can answer follow-up questions. As it stands now, when someone casually browsing Instagram meets targeted ads for bipolar treatments, they are left alone to deal with that implied diagnosis, left alone to confront this new self-knowledge. This is not to say that people immediately consider themselves disabled upon seeing such posts, but rather that the potential to lead one to question one’s self-knowledge reveals the algorithm’s impact on identity.
It is within this context that David Beer asks, for example, whether we should treat algorithms as mere lines of code or as “social processes in which the social world is embodied in the substrate of the code.”13 He claims that algorithms are inextricable from the social world: they are modeled upon it, and they inform our understanding of the social world even as they help create it. Since they are interwoven with society, discussing them outside of their social function is faulty. The algorithm draws data from the same society in which it produces information. And people, like the algorithm, exist within a social ecology. People exist in a society with all its biases, cisheteropatriarchal as they are, and bring those biases into the social media milieu, where they feed into the algorithm, which subsequently feeds them back to us. This is not to mention that algorithms are created and designed by people and companies, both harboring these same societal biases. The algorithm thus reflects the biases we hold as people in real life even as it reinforces them online. There is a potential harm in the algorithm reproducing our existing biases under the guise of new information.
Similarly, and importantly, algorithms also reproduce our identities as they analyze them. For example, as the algorithm reduces you to data and forms your profile, you gain interest in new topics and learn new ideas from the algorithm. This new information is then integrated into your identity. There is thus a feedback loop: the algorithm suggests content based on your interests, which then become your new interests, and so on. It is a cycle of identity forming directly influenced by the algorithm. Hence, the algorithm that presences itself in the myriad texts we confront is part and parcel of our identities.
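The feedback loop just described can be sketched directly. The adjacency map below is hypothetical, as are the topic names; the point is only that each round’s suggestions are computed from a profile that already contains the previous rounds’ suggestions.

```python
# Hypothetical map of which topics a recommender treats as adjacent.
RELATED = {
    "football": ["betting", "fitness"],
    "fitness": ["supplements"],
    "betting": ["crypto"],
}

def run_feedback_loop(profile, rounds=3):
    """Each round: suggest topics adjacent to the current profile, then fold
    the suggestions back into the profile. Later suggestions are thus based
    partly on the algorithm's own earlier output, not on chosen interests."""
    profile = set(profile)
    history = []
    for _ in range(rounds):
        suggestions = {t for topic in profile for t in RELATED.get(topic, [])}
        new = sorted(suggestions - profile)
        history.append(new)
        profile |= set(new)  # the suggestion becomes 'your' interest
    return profile, history

final, history = run_feedback_loop({"football"})
```

Starting from a single chosen interest, the profile ends up containing topics the user never selected and that are not even adjacent to the original choice; the loop, not the person, supplied them.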
The algorithm’s prominence in our identity forming process is therefore problematic, leaving what we find meaningful susceptible to a number of influences, such as capital. With social media sites privileging advertiser-friendly content, the information we see is limited to corporate status quos. This stifles development, as you only see content that the social media platform deems acceptable, which is ultimately decided by advertisers. While this dampens one’s capacity for growth, it also forces one’s identity to fall within the scope of marketable content, making one’s interests directly marketable. Capital’s influence on identity formation through the algorithm seems to be breeding identities most responsive to advertisements and ripe for consumption. This idea is reflected in Horkheimer and Adorno’s concept of ‘the culture industry,’ which creates and reproduces meaning to fit its image. Is identity, then, not mere currency with a value decided by and dependent on capital? Is this simply the culmination of the neoliberalization of identity?14 Indeed, if your identity is influenced by content approved by advertisers, your interests are limited to that which can be sold back to you. Therefore, you do not decide your identity; the algorithm, guided by market precepts, does.
Additionally, where capital and Gaeta’s fears of medicalization are concerned, consider again the problem of being made to think you have a medical issue, and the power in that ability. With pharmaceutical ads already a prevalent and pervasive force in American media, we find such an influence replicated within social media algorithms. Given the understood association of algorithms and identity, the problem is amplified. As Gaeta asserts, “diagnostic advertisements reproduce and transform the clinical gaze through the vector of capitalism, but by looking at users through their data rather than at their bodies.”15 Indeed, are pharmaceutical companies not incentivized to use their capital to impel people to align with a certain condition, leading to further sales? If the algorithm leads you to think you’re bipolar, maybe you’ll try Abilify. If the algorithm perpetuates certain beauty standards, maybe you’ll try Ozempic, a medication meant for diabetes yet now a trendy weight-loss drug. Why has Ozempic become so viral that there is even a colloquial term, ‘Ozempic face,’ for a side-effect of its use? There is a real power that translates to profit in the algorithm’s role in identity formation; and, within the late-capitalist neoliberal world we inhabit, this exercise of power is standard practice. Which is to say that the utilization and impact of the algorithm on our identities is a power warranting the concern of this journal.
The Algorithm as Will and Representation
Identity is thus a function of power: who wills who they are, and how is it willed? Ernesto Laclau argues that “the constitution of a social identity is an act of power and that identity as such is power.”16 Therefore, when the algorithm inserts itself into one’s constitution of their identity, it exerts a soft power over the individual and becomes reified in power as part of their identity. As the algorithm enters one’s identity forming process, it gains new power as its information becomes part of that identity. The notions of identity as power and identity through negation and exclusion—différance—have long been discussed by thinkers from Derrida to Bhabha. Here, however, the question is not power as identity but, if identity is power, the power of that which can infiltrate and become identity.17
For more on this type of power, consider the following example. Why do you know who Andrew Tate is, and why do you have an opinion about him? The polarizing figure, and alleged sex-trafficker, was brought to newfound prominence after taking advantage of social media algorithms, making his existence known to virtually all who used those platforms. By showing you his content, the algorithm forced you to have an opinion. It forced you to take a stand, either for or against, and your identity would thus reflect this decision. During Tate’s rise to infamy, nearly everyone knew of him and had a stance. Like Trump’s presidency in the US, Tate became another litmus test for who one is. Being a fan or not became a signal for the group one associates with. While social media played a role in both of their successes, Tate’s, unlike Trump’s, occurred wholly in the online space. Indeed, the algorithm’s power to bombard people online with Tate’s content tacitly affected the identity of everyone who came into contact with it. One did not decide whether to have an opinion on Tate, and almost nobody sought out his content directly; and this is not to mention the power, and subsequent harm, of proliferating blatant misogyny and bigotry to a young and impressionable audience.18 People knowing about and carrying a stance on Tate is a consequence of the algorithm. It coerces people to define themselves according to its messaging. Indeed, the chief issue being raised is that who one becomes results from the algorithm; its power to promote some media, ideas, and people over others lets it pick, like a character selection screen, the set of information with which we define ourselves.
Recall the notion from the existentialists that you choose who you become. Society can impose a meaning on your existence, but within you is the freedom to define yourself irrespective of those impositions. Even as your identity, the process of becoming, exists within society, discourse, media, and the text, within you is the capacity to transcend that and make your own meaning—a transcendence in immanence. The power of the algorithm to push you toward an interest in football, toward questioning your health, or toward learning about Andrew Tate restricts identity forming to society’s precepts. Your identity thus ostensibly becomes one you did not choose.
That the algorithm comes to furnish one’s identity not only proves the power it has but also shows how vulnerable to its influence we can be if it is left without critique. It invites, with open arms, the influence of a technology modeled after a fiction of oneself. This is not to fearmonger a post-humanist dystopia and claim that people are but a techno-human amalgam. It is, rather, to elucidate the influence that the algorithm, presenting us with unbridled information based on one’s datafication, can have. Beyond the spurious influence of capital or the existential dictum of freedom, there is the fundamental human concern of having who you become figured out by a source outside of yourself. To measure one’s self-knowledge through the algorithm leaves one beholden to it, to data. Simone de Beauvoir wrote that the emergence of humanism is the realization “that the world is not a given world, foreign to man [… but] the world willed by man.”19 Thus, the impact of the algorithm on one’s worldview presents a challenge for humanism. Indeed, if one wills what one wills because of an algorithmic identity, one’s world is no longer willed by oneself alone.
Ibid., 144.
Ibid., 58.
Ibid., 2.
Ibid., 3.
More on the culture industry and the neoliberalization of identity in a future entry.
This is not to dismiss the notion of identity as power, and the consequences this can have. However, this problem is more chiefly political in nature, and less relevant, at least now, on the technological/algorithmic forces discussed here. In previous and forthcoming work, I have discussed the power of identity and the totalitarian and fascistic ends it can serve. Cf. Bauman, Modernity and the Holocaust.
This is not to dismiss the significance, but the proliferation of hate online warrants its own essay, like many of the ideas sketched in this introduction.