The Ratio Club was an informal dining club, founded in 1949 and attended by prominent British academics, that met to discuss cybernetics—Alan Turing was a member. At its first meeting, it was explicitly noted that no sociologist was among its ranks. Omissions such as this have informed my research into the absence of sociological thinking in the sciences of AI.
AI sciences have had an underlying vision of their “thinking machines” as self-contained: isolated from their contexts of production, circulation, and use. This runs contrary to the mode of thinking that sociology opens up for us. In the sociological mode, the world is relational; every individual being is constituted through dynamic social interactions—even things that we do not conventionally think of as “social.” It was this idea of the social that was missing from the AI paradigm.
The reasons for this exploration of what is social about AI are twofold. First, the notion of the social implies dynamism: entities that come into relation do not remain constant, nor do the relations themselves. The context in which these relations take place must therefore be unfolded and analyzed in order to understand which meanings come to define the entities and practices involved.
Second, we do not necessarily think of AI technologies, these machinic entities, as being defined through their context. The general conception is that AI technologies are just code, that they are “just technologies,” neutral things that are instrumental to human conduct. This idea is not tenable in a sociological understanding of the world. Rather, such technologies would be seen as entities that are imbued with meaning through their contexts of social interaction—be it the context of their production or of their circulation.
It would be unfair to say that all AI research revolved around this isolated and instrumental view of intelligence. There were critics within the AI community who argued for an embedded and relational understanding of intelligence to replace the dominant perspective. Their arguments stemmed from the observation that these machinic entities exist in the world, in everyday contexts, with multiple—and not necessarily foreseeable—social capacities. These critics challenged AI practitioners to take such processes into account when designing their technologies. Yet despite this insistence on sociality within the AI community, the arguments never translated into engagement with the further complexities of social and cultural concepts, certainly not in any consistent manner.
So sociology surely had—and has—much to contribute to the debates on what kind of a world is being produced through AI technologies. After all, the discipline deals with large-scale social connections, with a history rooted in notions of common sense, collective conscience, and a sensitivity toward dynamism—concepts that have been central to debates about how to make better AI systems. In this vein, it is possible to see AI as a social agent, one that is relationally constituted and creates change through social action. This suggests that there can be a sociology of AI.
Some sociologists took up this question directly around the turn of the 1990s. In 1991, Alan Wolfe published an article in the American Journal of Sociology called “Mind, Self, Society, and Computer,” in which he asks, “Can a machine premised on parts that are as dumb as possible in any way replicate the way real human agents operate in the world?” He argued that machines lack the capacity to act in ambiguous contexts, and concluded that it is the interpretive capacity of humans to act amid such unknowability that gives us agency—and that this agency is the central feature of “intelligence.”
In Wolfe’s writing there is a clear fear of a loss of humanhood—or rather a confidence that these “dumb machines” can never be like humans. Employing the social psychologist George Herbert Mead’s distinction between the brain as processor and the mind as meaning-maker, Wolfe claims that machines will never be like humans, nor will they have agency, because they lack meaning-making capabilities. While revealing differences between humans and machines, this argument denies machines the capacity to be social.
Excluding machines from the social realm results from a vision of the world that takes humans as social by their very nature—indeed, this was the idea on which the reality of the modern world was established. In this picture humans are social whereas machines are not; therefore, machines cannot be like humans. The concept of the social is hooked to the human condition, so that social and human become mutually defining. Whenever another kind of entity acts socially, it is automatically excluded from this dimension on the grounds that it is not human: the social becomes an exclusive dynamic, a prerequisite of the human club. Against this, the sociology of AI raises the necessity of a vision of the social that does not enclose and totalize these relations within the “human condition.”
Perhaps another way to think about what is social is to focus on change: dynamism that creates transformation. This shift highlights how social relations are engendered by newness; without it, existing relations would only be reproduced, and the status quo would remain protected. Defining the notion of the social through change places the emphasis on the moment of action. From this perspective, it becomes possible to see how actions carry the capacity to create new relations by deviating from the social order.
The capacity of machines to engender novelty has been debated throughout the history of the AI sciences. Alan Turing, responding to Lady Lovelace’s assertion that a machine can only do “whatever we know how to order it to perform,” writes that machines take him by surprise with great frequency, especially when the conditions under which a machine is performing are not clearly defined. Turing is pointing to something that Andrew Pickering terms the “temporal emergence” of material agency: one can never know in advance what a new machine, or an old machine in novel circumstances, will do. We can only wait for it to act in the world in order to find out.
Turing opens the door to this unpredictability and further argues that a fact does not bring all of its consequences into the purview of the mind at once. Its novelty might rest in a potentiality: parts or aspects of a fact stay undisclosed, “temporally emergent.” Therefore even the crunching of numbers, or the undertaking of a pre-given task, can become novel and incite social change. Here the task falls to us to develop a creative conception of machines themselves.
Employing this agential notion of the social, it becomes possible to strip the concept of its inherent humanism. Further, by focusing on the moment of interaction between entities—human or nonhuman—we can indeed conceptualize the machine as social: thinking machines, emerging from the realm of the social, create social relations and are thus actors in the social realm. Uncoupling the notion of the social from the human makes it possible to see agency not as bound to an entity but as a constellation of forces that produce an effect in the world. Agency, then, is not an attribute of human or machine; it can instead be thought of as a collective notion constituted by various entities.
It is in this sense that nonhumans in general, and AI in particular, are relevant to the sociological dimension. Viewed through a sociological lens, machines, with their capacity to enter into social encounters, can be seen as contributing to meaning-making practices. Meaning emerges in a context, through a social encounter, and is not bound to any single actor.
Reformulating AI and recognizing its social agency makes it necessary to rethink the implications that AI has for our everyday lives and emerging futures. Questions around AI are mostly posed in a technologized language, the implication being that the problems of AI can be resolved through engineering. This coincides with the rather neoliberal impulse of solving problems by throwing more technology at them: if there is crime, let’s solve it by implementing surveillance systems; if truck drivers prove inefficient, let’s automate the trucks.
These projects take off without considering the socialities that they produce, without considering the social worlds that they build—because they are “just technologies,” simple instruments of human conduct, right? All the while, already oppressed and marginalized communities are subjected to increasingly rigidified and unjust practices, and entire workforces face disappearance. These technologies do not “simply” enter our social lives; they transform our worlds and relations. We encounter these machines daily and face the social order that arises from their agential practices.
I do not want to paint a wholly dark picture, though. This is a complex issue that does not benefit from a one-sided perspective. Showing that these machines are not exempt from social relations, and can incite novelty, implies that there are potentialities to be explored. Exciting relations emerge in different contexts—for instance, generative algorithms collaborating with humans on art projects, or AI players devising novel strategies in games such as Go or Dota 2. Not to mention developments in medicine, where machines collaborate with doctors toward more precise diagnoses.
A similar situation can be observed in space agencies, where large quantities of data are analyzed by AI systems that then pass results on to human scientists for research purposes. So there are possibilities for different journeys to be taken with machinic entities for creative purposes.
A sociological conception of AI may prompt a more interdisciplinary approach to questioning the implications of these technologies—that it has failed to do so until now is as much a critique of sociology as it is of AI research. Science and technology studies scholars have long pointed out that sociology has a hard time digesting the increasingly technologized nature of reality, and that the discipline should decenter its humanist focus.
Taking up this critique, sociology can consider not just humans but other-than-humans as social as well, with profound consequences for the discipline and for how society is conceived. I think that these kinds of discussions would open the way to bridging the gap between engineering and the social sciences. Perhaps this is a path toward constructing a common language between these “two worlds.”
An inquiry into the sociology of AI thus implies that our current and dominant understandings of the social and of society need to evolve, because the assumptions that define these categories organize our view of the world. As the world increasingly becomes a programmable, manageable, controllable, and closed entity, it becomes all the more important to engage critically with the meaning of the social and to practice some sociological imagination. What happens when we view the world not as a compilation of vast data points with emergent patterns but as a socially connected and historically constituted web of relations? What happens when we think of social life as inhabited by humans, machines, animals, and all sorts of “others”? This was always already the case; now we can expand our conceptions so as to face this reality.
Ceyda Yolgörmez is a Ph.D. student in Social and Cultural Analysis at Concordia University, Montreal. Her research includes histories and sociologies of AI, questions of agency in the context of situated interactions with AI agents, and the commonsense knowledges that emerge from these interactions.