Welcome to Dilemmas of Meaning, a journal at the intersection of philosophy, culture, and technology. This piece asks a central question: Why do people think AI can guide society? To answer it, the piece first considers why people have looked to nature in the same way for centuries, then interrogates how this deference aids orders of domination, and lastly hints toward overcoming them.
Should we be more like the humble lobster? Human society has its issues, and perhaps looking to our distant crustacean cousins might offer a way out. And, after all, they are natural. Of course, this is ludicrous. Similarly, should the supposedly objective quality of AI, as the overly online claim, make it a perfect candidate to structure human affairs? Of course not. I find the mythologizing of both nature and technology concerning. In both, we defer to an external authority as an unquestionable and ideal model for society.
When something is uncritically considered good because it is natural, that is the naturalistic fallacy: the erroneous leap from the is (natural) to the ought (good). In Against Nature, historian of science Lorraine Daston asks: “Why do human beings, in many different cultures and epochs, pervasively and persistently, look to nature as a source of norms for human conduct?”1 In the ensuing discussion, I propose that there is an artificialistic fallacy functioning like the naturalistic one, in which the artificial is deferred to as objective truth, goodness, and beauty.
Daston defines the naturalistic fallacy as “a kind of covert smuggling operation in which cultural values are transferred to nature, and nature’s authority is then called upon to buttress those very same values.”2 Humans have long drawn on nature for a sense of meaning and order because it is all around us. Since nature is so replete with various and diverse orders, it is not a tall task to find a natural occurrence reflecting an experience, custom, or idea. In Daston’s words, “nature displays so many kinds of order that it is a beckoning resource with which to instantiate any particular one imagined by humans.”3
Nature’s many orders, however, can be marshaled to justify contradictory claims. For example, while nature is often called upon to claim that homosexuality is unnatural, there exist myriad examples of queer behavior in the animal kingdom. Surely, these are also natural occurrences—we did not make the frogs gay. Yet, if I pointed to bisexual macaques or lesbian seagulls to claim homosexuality was more natural than heterosexuality, the argument would be swiftly dismissed. Therefore, and this is the decisive point, appeals to nature only work when they align with the hegemonic order calling upon them. Because nature contains multitudes of varied orders that do not follow a central and unified logic, it does not invariably support the hegemonic power calling upon it but must be made to. Indeed, as Daston reminds us, “Nature simply is; it takes a human act of imposition or projection to transmute that ‘is’ into an ‘ought.’”4 In what follows, I show how faith in the objectivity technology is said to provide follows a fallacy like the naturalistic one, only this time more consistent with hegemony. And it begins with a story.
Nature and Naturalization
The story here is one of dualisms: artificial/natural, male/female, objective/subjective. The drama is that the male/female dualism at once mirrors, requires, and enables—if not demands—the artificial/natural dualism. These dualisms, however tenuously maintained over time, function through a perversion of difference that manifests in the two fallacies highlighted here. It is a story of the creation of order, hierarchies, and knowledge; a story which, once laid bare, thrusts meaning into ambivalence. The conflict, therefore, is whether the dualisms can crystallize the foundations of their dominance before they are fractured.
The plot follows claims Carolyn Merchant raised in The Death of Nature and seeks to uncover, like Daston, why we look toward external sources of order to substantiate systems of human conduct. Within the interplay of these dualisms lies our answer. With that, consider some words from Merchant:
another opposing image of nature as female was also prevalent: wild and uncontrollable nature that could render violence, storms, droughts, and general chaos. Both were identified with the female sex and were projections of human perceptions onto the external world […] as the Scientific Revolution proceeded to mechanize and to rationalize the world view [… and] as Western culture became increasingly mechanized in the 1600s, the female earth and virgin earth spirit were subdued by the machine.5
Therefore, following the linkage of nature with the feminine, and the naturalization of masculine superiority, there is a hegemony of technology emboldened by its masculine coding. Naturalization, a process expounded by Pierre Bourdieu in Masculine Domination, refers to how a social idea, paradigm, or policy comes to be seen as natural: when social relations of domination, such as sexism, become so ingrained in our daily practices, they come to be understood as natural. Bourdieu says:
The particular strength of the masculine sociodicy comes from the fact that it combines and condenses two operations: it legitimates a relationship of domination by embedding it in a biological nature that is itself a naturalized social construction.6
This phenomenon did not newly erupt with the invention of AI; it has a long history. For example, the ‘father of empiricism,’ Francis Bacon, expressed a similarly gendered domination of nature, but in the pursuit of science rather than stable governance. Merchant quotes Bacon writing, “hound nature in her wanderings […] to lead and drive her afterward to the same place again.”7 The gendered domination of nature in the sciences and in the pursuit of truth, meaning, and order is obvious. Like women were argued to be, nature is too chaotic and must be controlled by the rational, scientific man. There is thus a need to interrogate the next rational instrument of patriarchy, technology, in its pursuit of predictability.
The Artificialistic Fallacy
With the scientific revolution came a new logic prioritizing the rationally predictable over the chaotic and indeterminable. With this shift from the natural to the mechanical order came “a framework of values based on power.”8 While there were hierarchies in nature-derived power, all parts were considered organically in relation to the whole; mechanically derived power, by contrast, becomes a direct instrument of control. However, this artificialistic fallacy reflects a paradoxical domination: we overcome nature only to submit to its successor and repeat the same domination/submission interplay again.
As Vanessa Nurock observes, the artificialistic fallacy, even as it casts the machine as tacitly superior to nature, relies upon nature’s order to take the naturalistic fallacy even further. She writes, “Artificialization is, in fact, likely to enshrine naturalized structural habits in the machine, in code.”9 Whereas Bourdieu’s naturalization, if you recall, marks where we confuse the natural with the social, artificialization is the double confusion of the technological with a social that is already conflated with the natural. The justifications compound, creating social biases more ingrained in our lives and making it harder to untangle natural and technological constructs from social ones.
As with naturalization, artificialization substantiates the biases prevalent in society, using attributions of logic, neutrality, and objectivity as shields against scrutiny. Indeed, Nurock explains, “even in a technical world, where the machine obeys humans (or is supposed to), it does not obey men and women in the same way, and thus amplifies or exacerbates existing relations of domination.”10 This amplification is the sticking point that makes artificialization so insidious. There have been numerous pop-philosophy articles discussing how we taught AI to perceive identity. These articles, while correct in identifying that AI reflects the biases which formed it, leave people thinking that if we rid society of its biases, we can create an unbiased AI. Because this neglects, in Nurock’s words, the fact that AI not only “reflects our societies but also reshapes them,”11 it misunderstands the problem as merely structural rather than poststructural. Said differently, the gaining of values is a discursive process whereby tech reflects society’s biases as it reshapes society, continuing ad infinitum until the values of society are inseparable from technology. It begins to create the biased social structures it itself substantiates—this is artificialization in practice.
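To make that reflexive loop concrete, here is a minimal, purely illustrative sketch in Python. The update rule, the numbers, and names such as simulate_feedback are assumptions for the sake of illustration; it models no real AI system, and only shows how a system trained on the traces of its own influence can turn a small initial skew into an apparently settled fact.

```python
def simulate_feedback(initial_share: float = 0.55,
                      rounds: int = 20,
                      push: float = 0.5) -> list[float]:
    """Toy loop: a 'model' learns society's current preference, and its
    recommendations nudge society further toward whatever it learned."""
    share = initial_share  # share of outcomes favoring group A
    history = [share]
    for _ in range(rounds):
        learned = share                    # the model mirrors society's current bias
        share += push * (learned - 0.5)    # society drifts toward the model's output
        share = min(max(share, 0.0), 1.0)  # keep the share a valid proportion
        history.append(share)
    return history

if __name__ == "__main__":
    for step, value in enumerate(simulate_feedback()):
        print(f"round {step:2d}: share favoring group A = {value:.2f}")
```

Starting from a 55/45 split, the toy loop saturates near 1.0 within a handful of rounds: an arbitrary initial skew ends up presenting itself as a stable, seemingly objective regularity. Real systems are incomparably messier, but this is the reflexive shape the artificialization argument describes.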
Artificialized Objectivity
There are three reasons artificializations seem to prevail: technology is 1) logical, 2) neutral, and therefore 3) objective. These features play into the mythologization of tech, whereby it occupies the transcendent position of meaning-making. It thus functions like faith—it is unquestioned. A central tenet of science is that its claims may be proven wrong. The mythologization, however, allows artificialized ideas to evade the scruples of critics and have their truth ossified. The artificial gestalt is the objective will of patriarchy, justifying its domination through the acts it previously enabled.
To consider the first: the artificialization that conflates the logical with the good rests upon the patriarchal naturalization of men as logical. Since we already view logic as superior because of naturalized social biases, the notion that technology’s logical processes are superior fits easily into the existing hierarchical framework. Devoid of human ethical hang-ups and preoccupations with emotion, machines will supposedly guide us to a better reality that only they can provide. Singularity enthusiasts routinely profess their loyalty (in jest or not) to an AI ruler—like this example of an AI mayor from Japan—because of its non-human, non-natural traits. For these reasons, not only should the artificial be the guide for human conduct, it should literally guide humans.
Secondly, technology is also considered to reach neutral conclusions. Since AI has no experiences, it has no biases; AI occupies the neutral and objective ‘view from nowhere.’ Its standpoint is to have no standpoint. However, this fails to consider how the technology was built, who built it, and the society it exists within. Neither science nor scientists exist in a vacuum. Science has been historically used to justify acts as abhorrent as slavery, eugenics, and genocide—surely, we would not call those claims neutral simply because science made them.
There is the further point of contention of where technological research occurs. These large language models and AI tools are not developed by some transcendent being coding outside society; they are coded by people who are both pervaded by societal biases and working for large corporations. The claim that technology can be neutral also fails to consider its development by publicly traded companies hoping to make a return for their shareholders. As Merchant wrote, “the mechanical order [… is] fully compatible with the directions taken by commercial capitalism.”12 Technology cannot be a neutral tool when it is created by and for the capital power of corporations.
Lastly, technology, empowered by a neutral logic, is said to operate objectively. This is desired because, if we know an idea or outcome is unbiased, then its recommendations can best lead us without privileging one person or group. While this is not the time to thoroughly critique Nagel, positivism, or Enlightenment ideas of a universal reason, it is the place to point out the ways in which ‘unbiased’ technology is mediated by the very biased society it exists in, rather than by a value-less vacuum.
That objectivity can be attained by anything, let alone by supposedly perfectly logical AI systems, is controversial. While debunking the very possibility of objectivity is not the purpose here, I will nonetheless defer to Thomas Kuhn, whose influential text, The Structure of Scientific Revolutions, provides a decisive denial not of objectivity itself but of the claim that science can know any objective truths about nature.13 Conclusively responding to critics of this assertion, he writes:
[Philosophers of science] wish, that is, to compare theories as representations of nature, as statements about 'what is really out there'. Granting that neither theory of a historical pair is true, they nonetheless seek a sense in which the latter is a better approximation to the truth. I believe nothing of that sort can be found. On the other hand, I no longer feel that anything is lost, least of all the ability to explain scientific progress, by taking this position.14
In other words, he denies that science can claim to know how close it is to attaining a truth of the world. He understands scientists as interpreting, observing, and engaging with nature, technology, and the world within a specific context; “paradigm changes do cause scientists to see the world of their research-engagement differently.”15 As expressed heretofore, since science has enabled the oppression of women, non-white races, the disabled, etc., to claim that science is objective—when used to justify bigotry or not—is to mythologize science and its servants as existing in a vacuum above and beyond the society in which their practices emerge.
While the fallacious logic of artificialization has been laid bare, we must still consider the discursive power to rewrite social systems and justify them with a logic that exists only within itself. We must consider the power of rendering the subjective biases of hegemony objective fact, of rendering the idiosyncratic universal. Code has bugs, and when random occurrences, nuances, and subtleties are misread as reflecting the natural order of things, they become artificialized and are used to construct and cement the forthcoming customs. That is the most charitable case; real society is far more bias-laden. But, as discussed, that even an arbitrary dynamic between people in an unbiased society can be taken as meaningful by AI reveals its limitations and potential harms.
Hyper/real
There is the added issue that dismissing AI, and the social ideas it perpetuates and reimagines, gives the illusion that those ideas are otherwise real. It is within this understanding that AI becomes hyperreality: it promises a world so objective that it seems fundamentally distinct from any human-made reality. When hegemonic values reside under the label of artificiality, dispelling them criticizes only the values relegated to the artificial, rather than those values wherever they appear.
The hyperreal, a Baudrillardian term, refers to a world so mediated by technology that what is considered real is completely untethered from reality. It is when one thing being considered fake allows another to pass as real—whether it is or is not. For example, by dismissing only certain news channels as ‘fake news,’ others gain a badge of credibility simply by the contrast—whether they deserve it or not. One example Jean Baudrillard gives is Disneyland, a world that is “a play of illusions and phantasms […] presented as imaginary in order to make us believe that the rest is real.”16 When you enter Disneyland, you think you are leaving the real world and entering a fake one. Though America is just as fake as the amusement park it houses, if we are told that one is fake, we assume the other must be real. The problem revealed here is Baudrillard’s point: none of it is real. Neither Disneyland nor social hierarchies are real. While the prerogative of the hegemonic order that tech serves is to make its order so artificialized that its reality goes unquestioned, the questioning does not solve the problem of hegemony; it does not even strip away a layer of its armor. We can say it is fake, but in doing so the hyperreal layer appears.
The hyperreal is thus another trick in the artificialized toolbox. Dismissing AI is not the easy way out of the artificialistic fallacy that it seems to be. Baudrillard’s hyperreal reveals the problem: when we critique the artificial and try to strip away the ‘artificialized’ layer of protection hegemony has built, the hyperrealized one takes its place. To dismiss what presents itself as artificial (and as only existing within the artificial) allows the rest of the fallacious order to be considered real, hiding the false truth of the social systems it upholds. Naturalization, artificialization, and hyperreality all work as clever chicanery hiding the constructs of hegemony. ‘Pay no attention to the man behind the curtain,’ hegemony screams, lest you discover the wizard making social constructs seem real.
By playing the game hegemony creates, by allowing it to define the terms and dictate how we respond to them, we let that game become hyperrealized and legitimate. The solution is to recognize the constructedness of domination. This was Mary Wollstonecraft’s point in her Vindication, where she exposed the naturalization of women’s intellectual inferiority as mere social policy. Pointing out the false logic these dominant social structures rely on, she argued, is essential for changing them. When the dualisms of artificial/natural, objective/subjective, and male/female are thoroughly identified, interrogated, and repudiated as constructed, rather than casually dismissed, hegemony can no longer speak. While this is easier said than done, if we acknowledge the harm of these constructions we must also work to repel their instantiation in any new order, regardless of spurious claims that it innovates and improves our lives.
In “the tradition of the appropriation of nature as resource for the productions of culture,”17 as Donna Haraway puts it, there is the ardent drive for progress. In true faith to capitalist logic, we have made the process of exploiting nature more efficient through automation. An algorithm can now distill the world into calculable orders ready for appropriation; it can now handle the tasks of production, reproduction, and imagination. Yet these processes are, as shown, not without bias. That technology’s idea of progress is more technology dominating society and nature is not accidental. Corporations get their returns more quickly if society champions their developments as necessary, so it is no coincidence that we find meaning, purpose, and logic in their endeavors. The hidden dualism not yet mentioned is the partnership of patriarchy and capital, though rather than working in opposition the two work in concert. Creating artificialized cultural values that justify the social exploitation servicing patriarchy also supports the exploitation servicing capital, from this planet to the next.
It is not lost on me that the big push for AI is happening concurrently with a heightened climate emergency and climate activism. The analysis provided here helps explain why, rather than address ecological concerns, tech corporations remain fixated on progressing to exploit other worlds. Corporations say there is less logic (profit) in helping our current world, and the machine agrees. That the locus of male rationality is found in advancing to the next realm to dominate is concerning, but not surprising. Nature is man’s object and plaything. While every healthy dom/sub relationship needs aftercare, there is no reciprocity in the game of scientific progress. After all, remember what Bacon said: “nature exhibits herself more clearly under the trials and vexations of art than when left to herself.”18
Daston, Against Nature, 3.
Ibid., 4.
Ibid., 57.
Ibid., 4.
Merchant, The Death of Nature, 2.
Bourdieu, Masculine Domination, 23.
Merchant, The Death of Nature, 168. Emphasis mine.
Ibid., 193.
Ibid., 76.
Ibid., 77.
Merchant, The Death of Nature, 193.
Meiland, “Kuhn, Scheffler, and Objectivity in Science,” 186.
Kuhn, “Reflections on my Critics,” 265.
Kuhn, The Structure of Scientific Revolutions, 111.
Baudrillard, Simulacra and Simulation, 12-13.
Haraway, Simians, Cyborgs, and Women, 212.
Bacon, The Works of Francis Bacon, vol. IV, 298.