Dismantling the apparatus of domination?

In November 2021, over 140 Artificial Intelligence (AI) researchers signed a letter asking the German government to oppose the development of autonomous weapons systems. With this they attempted to draw distinctions between beneficial and destructive AI: ‘Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons – and do not want others to tarnish their field’. 1 Yet these distinctions are difficult to maintain, as the lines between productive and destructive, human and machine, science and military have long been blurred. The letter also speaks to a public epistemology of fear, which highlights a particular type of AI or a particular weaponised use as dangerous for humanity, peace or democracy. While many public debates about AI propose to draw (impossible) lines, critiques of AI on the left have focused on disrupting those lines. In this intervention, we discuss how power and labour have informed left critiques of AI, before turning to a situation where these critiques format the political practice of working with and upon AI.

Left critiques of AI have shown that public discourses and debates, such as the one above, conceal other lines, which are constitutive of how AI is produced and circulates, and how it deeply infuses our current conjuncture. Clear lines between humans and machines obscure the distinction between what Sylvia Wynter has called ‘this or that genre of being human’. 2 The separation between production and destruction obfuscates the lines between what counts as productive, non-productive and unproductive. Finally, the lines between science and the military distract from how labour is mobilised and circulates between the two worlds. Critiques of AI have tried to make these ‘other’ lines visible and shed light on the many forms of algorithmic injustice and even dehumanisation, uncover labour issues in the production of AI technologies, or reveal the energy consumption of large-scale AI models and their extractive logics. These critiques situate AI at the heart of contemporary capitalism and its violence. But are these critiques a much-needed correction of existing AI, or do they need to resist the ongoing optimisation of AI? Do they amount to an AI abolitionist perspective? Or, conversely, are progressive versions of AI needed or even possible? What aspects need to be considered from a left perspective, when it comes to the politics of AI?

Debating the role of technology as political is a well-known left theoretical problem, of course. While early readings of Marx assumed that the political functioning of technology was a simple question of property rights and that ‘dialectics’ would purge technology of any class structure, the Frankfurt School explicitly questioned this assumption. 3 Herbert Marcuse, who articulated those scruples about the role of modern technology most clearly, wrote that ‘[s]pecific purposes and interests of domination are not foisted upon technology “subsequently” and from the outside; they enter the very construction of the technical apparatus’. He argued against a straightforward appropriation, as ‘[t]echnology is always a historical-social project: in it is projected what a society and its ruling interests intend to do with men and things’. 4 Technology and technological rationality were part of an ‘all-embracing apparatus of domination’. 5 Yet if technological rationality was the contemporary form of domination, technics as instruments or techniques could become part of different political projects, of repression as well as liberation. From the 1941 reflections on ‘Some Social Implications of Modern Technology’ to One-Dimensional Man, Marcuse mobilised the ambivalence of technics as an intervention against the technological rationality of domination. ‘Technics’, he argued, ‘as a universe of instrumentalities, may increase the weakness as well as the power of man’. 6 Contemporary critiques of AI have focused on this apparatus of domination, one which is foremost driven by capitalism and colonialism. The language of AI itself is used to signify technological rationality and market value rather than as a definition of a specific range of technics.

In its current form, AI technology is indeed primarily seen as a profit-making machine: the technologies that advanced contemporary AI – machine learning and deep neural networks – have become new means of production as much as phantasms fuelling speculation. 7 Even though we are a few years into the hype of AI, forecasts still project financial growth of irrational dimensions for this technology, reaching into the ‘trillions’ according to some. Yet the current situation looks a little different. For 2021, the actual economic impact seemed quite low, at least if one wants to believe the McKinsey survey from the same year. 8 Only 27% of the over 1,500 business respondents they approached indicated that AI contributed at least 5% of earnings before interest and taxes – and 5% is not much to begin with.

Despite this, the promise of profit continues to lead to abundant capital for start-ups as well as to a race for AI patents; the corporation currently holding the most is IBM, with 5,538 patents, followed by Microsoft and Samsung. 9 Alongside these companies, the Chinese Academy of Sciences, Tencent and Baidu also rank high in holding machine learning and AI patents. Thus, in the report of the US National Security Commission on Artificial Intelligence, patents are translated into the language of an arms race between the US and China. At over 700 pages, the Commission’s report mentions ‘China’ 604 times. Best known for being chaired by former Google CEO Eric Schmidt, the Commission entangles military and markets under the claim that China’s plans, resources and progress should concern all Americans, as ‘[China] is an AI peer in many areas and an AI leader in some applications. We take seriously China’s ambition to surpass the United States as the world’s AI leader within a decade’. 10 To become a leader in these economic and military markets, however, also means turning a blind eye to the many effects of the production, circulation and consumption of AI technologies, whose most recent hype is based on advances in machine learning.

In the field of machine learning, so-called ‘deep neural network architectures’ made it possible to classify language, images or other symbols more successfully than earlier computational attempts. Older methods struggled with the ambiguity of symbols – what is said in a sentence or what is depicted in an image. Their meaning could not be made calculable until the computation of AI underwent a paradigm change: with deep neural networks, programmers no longer write the rules of an algorithmic model. Instead, they build a computer architecture, a network of nodes based on statistical analysis, through which they run large amounts of data and from which an algorithmic model is then inferred. The statistical correlation of data points proved highly successful: algorithms trained on large amounts of data could make classifications or predictions with a higher success rate than before. The meaning of symbols could now be calculated, but that did not mean that models performed flawlessly. Despite their errors, task-orientated AI programmes have been put into actual use, from assisting typing on our phones by suggesting words to the London Metropolitan Police’s operational use of live or retrospective facial recognition. 11 AI-powered weapon technologies also rely on image recognition of objects and targets in real-time video streaming from drones and other technologies of surveillance. Moreover, these implementations, often premature applications of programmes that did not undergo independent review or testing, amplify and intensify the existing apparatus of domination, as Marcuse would put it. AI may be a new technology, but it emerges from and works upon existing distributions of power. Yet power has only recently come to feature in critiques of AI, even on the left.
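To make this paradigm change concrete, a minimal sketch follows, using the scikit-learn library: no rules about word meaning are written by hand; a small neural network infers a model from a handful of labelled sentences. The example sentences, labels and parameters are invented for illustration and are not drawn from any system discussed here.

```python
# Rules are not written by hand: a model is inferred from labelled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

texts = ["the film was wonderful", "a great and moving story",
         "utterly boring", "a terrible waste of time",
         "wonderful acting", "boring and terrible plot"]
labels = ["positive", "positive", "negative", "negative", "positive", "negative"]

vectorizer = CountVectorizer()                       # turn words into counted features
X = vectorizer.fit_transform(texts)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, labels)                                 # the 'rules' are statistical weights

print(model.predict(vectorizer.transform(["a wonderful story"])))  # expected: ['positive']
```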

In the history of AI, internal critique has always played a substantial role, ever since Alan Turing asked in 1950 the question ‘Can machines think?’ 12 These internal critiques, however, have focused less on power and political economy than on the philosophical question of whether computers, which execute a programme, have a ‘mind’ or ‘consciousness’. Indeed, these considerations were revived in contemporary AI under a new keyword, that of a much debated ‘general AI’. They were also revived through public epistemologies of catastrophe, such as Nick Bostrom’s New York Times bestseller Superintelligence, published in 2014. The overall argument – and effect – of this and other publications along those lines is geared towards imagining the future of AI as a catastrophe to come. AI – not just as a potential general AI but even in its supposedly ‘weaker’ or specific form – propels autonomous weapons and leads to a loss of human control. Set up in a binary and technophobic way, this discourse of AI catastrophe minimises human agency, downplays the redrawing of lines within humanity and the legacy of struggles that have challenged these lines and recast the very understanding of the human. While warning about AI and instilling a much-needed distrust, these discourses of catastrophe always already have the effect of deterring any engagement in the present.

By binding critical capacities to a catastrophe to come, this discourse meant that, in the meantime, the development of real-world AI progressed undisturbed – at least for a while. So-called ‘weak AI’ – AI applications that function as long as they target very specific and narrow areas – became a central part of our informational infrastructure, so much so that AI has been described as a ‘general purpose technology’. Critiques of AI soon started to catch up with this development. Science and technology studies scholar Lucy Suchman has argued that we need to demystify AI and avoid reproducing discourses of AI as a ‘thing’ or as a ‘coherent and novel field of technology development’. 13 Suchman’s point can be seen in the discourse of catastrophe, a discourse profoundly based on AI as a ‘thing’ taking over. While AI is not capable of the generality recognised in human intelligence (some would say ‘yet’), the contemporary critique of AI has become strongest in a different field, that of political economy. Suchman offers a redefinition of AI that emphasises data and data work. For her, AI is ‘a cover term for a range of technologies of data processing and techniques of data analysis based on the iterative adjustment of relevant parameters, according to some combination of internally and externally generated feedback’. 14 The human-machine relation is not a dual one, but one which is formed within capitalist relations of production and reproduction. From the material means of production, such as the often-outsourced preparation of the data for the operations of machine learning algorithms, to the societal effects of its application, with bias being programmed into its functioning, critiques of AI shed light on contemporary capitalism and its violence. But to what extent is this critique of AI effective? Where are its limits? And to what extent are those critiques pointing beyond capitalism and articulating perspectives from the left?
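Suchman’s phrase ‘iterative adjustment of relevant parameters’ can be read quite literally. The toy loop below, with invented numbers and a single parameter, sketches what such adjustment according to externally generated feedback looks like in code; it is an illustration of the general technique, not of any particular system.

```python
# One parameter is nudged repeatedly in the direction that reduces an error signal.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # inputs and observed outputs (invented)
w = 0.0                                        # the 'relevant parameter'
for step in range(200):
    feedback = sum((w * x - y) * x for x, y in data) / len(data)  # error gradient
    w -= 0.1 * feedback                        # iterative adjustment
print(w)                                       # settles at roughly 2, the slope in the data
```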

Power

The critique of power addresses AI as an apparatus of domination and traces the technology through the production of data – data collected by corporations thriving in so-called ‘platform capitalism’ as well as by the state and its repressive agencies. Power emerges in multiple forms, from the historicity of data, its extraction by corporations and the state, its valorisation and the effects of surveillance and oppression it creates. What statisticians and computer scientists refer to as bias is created by training data reflecting historical or social inequities. When gathering training data, specific groups – such as people of colour, minorities or women – are often underrepresented. They might have been overlooked in the process of data sampling or during the testing of the AI technology. This ‘prototypical whiteness’ that renders racialised subjects invisible is entwined with surveillance technologies that render them hypervisible, as Simone Browne has shown. 15 For example, a dataset called ‘Labeled Faces in the Wild’, which had long been considered a benchmark for testing facial recognition software, now comes with the warning that its data is not representative – 70% of the faces are male and 80% white, as digital activist Joy Buolamwini found out. 16 And even if values representing race, gender, sexual orientation or class are removed, AI models, always looking for patterns, turn to proxy discrimination, using statistical correlations of postcodes, education or particular expressions. As Wendy Chun pointed out in her study Discriminating Data: ‘These “errors” often come from “ignoring” race – that is, wrongly assuming that race-free equals racism-free’. 17
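A minimal synthetic sketch of such proxy discrimination follows, using the pandas and scikit-learn libraries. All of the data is invented: a sensitive attribute is deliberately excluded from training, yet a correlated proxy (here, a postcode) together with historically skewed outcomes lets the model reproduce the discriminatory pattern.

```python
# 'Ignoring' the sensitive attribute does not remove the pattern: a correlated proxy
# (postcode) and historically biased decisions let the model reconstruct it.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                               # sensitive attribute
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)  # proxy, 90% aligned with group
income = rng.normal(55, 10, n)                              # independent of group here
# Past decisions reflect historical inequity: group 1 was approved far less often.
approved = (income - 15 * group + rng.normal(0, 5, n)) > 50

X = pd.DataFrame({"postcode": postcode, "income": income})  # 'group' deliberately excluded
model = LogisticRegression(max_iter=1000).fit(X, approved)

rates = pd.Series(model.predict(X)).groupby(group).mean()
print(rates)  # predicted approval rates still differ sharply by group
```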

To ensure that we are not going to be governed like this, a range of organisations and institutions have been set up: critical work regarding bias is being done at the Data Justice Lab in Cardiff; at Joy Buolamwini’s Algorithmic Justice League, devoted to the unmasking of AI bias and harms; at the Ida B. Wells Just Data Lab founded by Ruha Benjamin; at NYU’s AI Now Institute; or at the Distributed AI Research Institute (DAIR). 18 DAIR was founded in 2021 by the computer scientist Timnit Gebru after she was fired from her position as co-lead of Google’s Ethical Artificial Intelligence team for criticising large language models. While these institutions rely on university and funding infrastructures of the Global North, organisations like Coding Rights in Latin America have developed a feminist and decolonial critique of AI. 19 Rather than catastrophist imaginaries of the future, these institutions aim to develop institutional and organisational counter-power to ensure that AI systems are accountable to the communities and contexts in which they are applied.

As power and domination are built into AI technologies through the data that makes algorithmic operations possible, this critique has left activists and theorists puzzling over the systems themselves. For instance, following her firing from Google, Timnit Gebru reflected in a media interview on the dilemma of addressing bias within a system versus critiquing the system itself. 20 Gebru’s dilemma emerges from the entanglement of invisibility and hypervisibility of racialised subjects. It is also a dilemma of the many ways in which power operates, where an AI technology that reduces or erases racial bias remains a technology of power, one which renders marginalised and oppressed communities hypervisible and subject to intensified surveillance, policing or other lethal interventions. Here again, AI seems to blur the lines, this time between critiquing the system and the paradoxical effect of supporting the system through that critique.

This paradox can, for example, be seen when it comes to algorithms misidentifying people of colour. In September 2020, in the midst of the Covid-19 pandemic, the case of a professor whose head kept getting removed every time he tried to use a virtual background on Zoom went viral. 21 The issue: the professor was not white. Video-chat software relies on facial recognition to determine which parts of the screen should show the background image while leaving the head of the user visible. In this case, the head wasn’t detected, because whiteness was used as the software’s default for the recognition of a human. This kind of racism in media recognition has a long history. Photographic media have always been ‘developed with white people in mind and habitual use and instruction continue in the same vein, so much so that photographing non-white people is typically construed as a problem’, as noted by Richard Dyer and others. 22 While many challenged the misrecognition of people of colour, others worried that optimising facial recognition for people of colour also always means optimising a system of whiteness, one that is quite likely to be turned, at some point, against people of colour.

Some critics are therefore opposed to this optimisation that discourses of data and algorithmic bias entail. For instance, Ramon Amaro has made the argument that, while such aims ‘might widen the scope of machine perception, not to mention the participation of excluded bodies in techno-social ecologies, the solution, as proposed, reinforces the presupposition that coherence and detectability are necessary components of human-techno relations.’ 23 Amaro reminds us that an optimisation of algorithms more or less confirms ‘what features represent the categories of human, gender, race, sexuality, and so on’, but it does not change them, thus pointing to a political dilemma well known within left politics: do progressive demands simply patch the cracks of a system prolonging its existence, or does an engagement with those cracks change the system itself? While the ‘ethics washing’ of companies (i.e. faking an exaggerated interest in certain issues as a way of getting around regulation) is a real problem, the question remains of what engagement from the left is necessary.

Critiques of power have shown that contemporary technologies of AI, always looking for and multiplying differences, are haunted by a racist-colonialist and classist past, and not only regarding their functioning. Yet these critics are also struggling with the dilemma of AI politics, as the institutes and organisations they lead depend on research funders, donors and universities. The critique of labour, while entwined with the critique of power, has opened different political interventions and ways of not being governed like that. From the statistical focus on bias, the legal language of discrimination and political mobilisation against entrenched inequalities and distributions of humanity, this other critique of AI has moved to the ‘hidden abodes of production’. 24

Labour

At an expo dedicated to big data and AI in London, IBM argued that AI ‘takes the machine out of the human’. 25 Rather than stoking fears of humans being replaced by machines, tech companies reproduce imaginaries of human creativity and authenticity liberated from machine-like labour. Neda Atanasoski and Kalindi Vora have argued that these imaginaries of machine labour reproduce AI as a ‘surrogate’ technology, a lesser human. 26 These ‘surrogate’ technologies do not in fact replace the repetitive labour that some humans are called to do; rather, they intensify the repetitive and machine-like labour which has come to be known as ‘microwork’. AI relies on the globally dispersed, unpaid or underpaid labour of data cleaning, categorising and featurising. ‘But in reality’, Phil Jones reminds us, ‘the magic of machine learning is the grind of data labelling’. 27 There is no AI without training data, and as training and testing datasets become increasingly massive, they need to be cleaned, curated and improved. This work is done by millions of microworkers, mostly in the Global South. As geographers Mark Graham and Mohammad Amir Anwar have shown, given ‘the geographically untethered nature of digital tasks, workers from different parts of the world can potentially compete, thus creating a planetary market for digital labour’. 28 Rather than a high-tech autonomous weapons system – a killer robot – or an automated facial recognition system, i.e. the coherent ‘thing’ Suchman cautioned against, AI is a distributed socio-technical system that is always already produced, circulated, maintained and repaired through dispersed, intensive and underpaid labour.

These microworkers often disappear from analyses of labour, as resistance to AI developments has focused on the mobilisation and unionisation of tech workers. In 2018, over 4,000 Google employees protested against Google’s involvement in project Maven, a US Department of Defense project that aimed to automate the analysis of video images from drones. 29 In 2019, Microsoft workers asked the company to cancel a contract with the US army to develop augmented reality technology ‘designed to help people kill’. 30 More radically, #NoTechforICE moved beyond resistance to militarisation to expand protest and mobilisation against ‘the detention and deportation machinery but also to policing and military operations, endangering the safety and security of communities already vulnerable to criminalisation, from the Bronx to Compton to the southern border’. 31 While these protests focused on labour mobilisation and organisation at the big tech companies in the US, the labour of microworkers remains invisible. The data ‘cleaner’ becomes the other of the tech worker. Moreover, the dispersed and invisibilised data workers reactivate Marxist fears of the fragmentation of labour. Data cleaning jobs are often part of the gig economy, which has given rise to a new social class forced to live a precarious existence, often slipping through the welfare net or finding itself outside it, and always facing a lack of job security. For Jones, the stakes couldn’t be higher: ‘that the wretched and the precarious, left disorganised, fall under the thrall of reactionary elements, or else are prone to riot intermittently at the system’s edges’. 32 Unlike Jones, Verónica Gago has reclaimed the political potential of the feminist strike to reveal ‘the diverse composition of labour in a feminist register, by recognising historically disregarded tasks, by showing its current imbrication with generalised precarious conditions, and by appropriating a traditional tool of struggle to reinvent what it means to strike’. 33 Gago’s call for reinventing struggle is also a call for the redefinition of labour in ways that attend to the ‘differential of exploitation’.

A recent report by the International Labour Organisation (ILO) highlights these differentials of exploitation when it comes to migrant crowdworkers. 34 While global data is not available, estimates indicate that 17% of workers on online web-based platforms are migrants. For migrant workers who have been excluded from employment or who experience discrimination and limited access to labour markets, digital work becomes ‘simultaneously a site of degradation and one of opportunity for those who have little viable alternatives’. Yet refugees experience different forms of exploitation from other microworkers, due both to citizenship and to global differentials of pay and power. As the ILO report points out, ‘freelancers who label data and train algorithms that power AI technology do so mostly without access to a fair wage or basic benefits’. 35 Therefore, beyond concerns about the lack of collective action given the dispersal and isolation of microworkers, research with refugees on digital work has shown that the precarity of digital labour is reinforced by the precarity of their lack of status and by multiple exclusions. For instance, refugees often cannot be paid because PayPal, a platform regularly used for payments, does not operate in certain countries. Some are blocked ‘due to international sanctions against financial transactions with certain nationalities’. In Bangladesh, official identification and biometric information are required to buy a SIM card, thus excluding the Rohingya refugees from accessing SIM cards for mobile phones, except through informal markets. 36

If the move from AI as a ‘thing’ to data-work sheds light on the differentials of exploitation, another term, now widely used in the industry, alerts us to how tech companies recast questions of labour. XaaS means ‘anything as a service’. XaaS captures the ideology of tech companies, where everything can become a service: platform as a service, software as a service, cloud as a service, and, as Jeff Bezos infamously put it, humans as a service. And now: AI as a service. The language of service is not new, and it belies the claims of unprecedented development and innovation that now circulate with AI across private and public realms. Large tech companies with a tendency to monopolise the development of AI, such as Google, IBM or Nvidia, increasingly tout their technologies as services, advantaged by their massive technical infrastructure and highly skilled workers. Nick Srnicek argues that these companies aim to shape AI as a utility, in the form of a pre-existing, bookable service or of a tool developed to assist other companies to run AI and build their own for a fee. 37 And at the moment it looks as if their dominance will continue – at least until new research breaks the trend for pretrained models developed on very large neural networks, which currently still deliver better accuracy. ‘Mega indexes (are) tracing the outline of capital today’, as Leif Weatherby and Brian Justie put it. 38 The language model GPT-3, developed in 2020 by OpenAI/Microsoft, has 175 billion machine learning parameters and was trained on roughly 500 billion tokens of text. 39 Its estimated carbon emissions during training are massive: 552 metric tons of CO2, a number that has been likened to the annual greenhouse gas emissions of 120 average US cars, to put it into human perspective. 40 In reaction, critical research into Green AI tries to find ways to create AI systems that use fewer resources while being at the same time more inclusive, running again into the same dilemma of AI optimisation.
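The car comparison cited above can be checked with simple arithmetic. The per-car figure of roughly 4.6 metric tons of CO2 per year is the EPA’s estimate for a typical US passenger vehicle and is our assumption here, not a number taken from the cited sources.

```python
training_emissions_t = 552          # estimated CO2 for training GPT-3, in metric tons
per_car_per_year_t = 4.6            # assumed annual emissions of an average US car (EPA estimate)
print(training_emissions_t / per_car_per_year_t)   # 120.0 car-years
```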

However, so far the trend of large companies acting as AI providers and offering AI as a service has shown no signs of abating. This has consequences for the labour linked to AI. Describing AI as XaaS blurs the distinctions between productive, unproductive and reproductive labour. The language of ‘service’ has been rehabilitated in public imaginaries of health and welfare services. Situating AI within the service sector rather than the manufacturing sector not only effaces microworkers and crowdworkers, but also obscures the multiplication of labour statuses and the blurring of boundaries between different forms of labour. As feminist accounts of service work remind us, we need to reflect upon the ways in which the racial division of labour ‘protects white male privilege in institutional settings’. 41 Far from embodying the public value of a service unencumbered by exploitation, AI technologies are produced through the unpaid and underpaid labour of workers whose domination is entrenched along the lines of race and gender.

Another politics

The open-source computer vision project VFRAME was developed to assist human rights research. It currently works with an archive of digital information from conflict zones run by the NGO Mnemonic. 42 Mnemonic is dedicated to the collection and preservation of digital information from conflict zones, so that it can be used in struggles over accountability and justice. Syria is one of the places for which the organisation archives and preserves digital documentation of human rights violations, war atrocities and international crimes. 43 The project with Mnemonic started in 2017 and was initiated by the artist and digital activist (and/or software developer) Adam Harvey. The VFRAME project includes the coder Jules LaPlace and the 3D designer Josh Evans, as well as a group of friends and contributors in Berlin, supported by some funding. Emerging out of discussions between researchers, digital activists and investigative journalists, VFRAME was created to assist human rights researchers, for whom the massive scale of the visual data in those archives is a challenge. Finding or paying experts trained to recognise illegal munitions, who could review thousands of hours of footage, was not possible. Researchers were also aware of the need to avoid the ‘vicarious trauma’ of going through this massive visual data. This is why the group worked on the development of an AI model that detects and flags up the existence of cluster munitions. The group was interested in showing that cluster munitions – more specifically the RBK-250 – were used on civilian populations. The Convention on Cluster Munitions prohibits their use, development, production, acquisition, stockpiling or transfer. 44 Syria, Russia and the US are not signatories to the Convention.

The labour that comes with this is elaborate. It can involve running the AI model through thousands of videos to find example munitions, then going through those manually to find the ones that might be good for testing data and pulling those out. The found images that are of high quality are put onto an annotation platform. Collaborators and friends then do the work of drawing exact boxes around the munitions so that the algorithm can learn what it is supposed to look for – here, data cleaning is a collective effort. The newly cleaned data is then folded back into the project’s benchmarking data to evaluate the models trained on synthetic data. Over many iterations, the AI algorithm learns to detect the munition better and better. Since all of the objects are rare, the use of 3D modelling to create training data has been a gamechanger. The 3D models are placed into environments that simulate conflict zones and then rendered into thousands of photorealistic images for use as training data.
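To give a sense of what this kind of work looks like in code, the sketch below fine-tunes a pretrained object detector on human-annotated bounding boxes, using the PyTorch/torchvision libraries. The file names, the annotations.json format and the dataset class are hypothetical illustrations of the general technique, not VFRAME’s actual code or data.

```python
# A minimal sketch: annotated boxes exported from an annotation platform are used to
# fine-tune a pretrained detector for a single class of object (e.g. a munition).
import json
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

class MunitionDataset(torch.utils.data.Dataset):
    """Images plus human-drawn boxes, read from a hypothetical annotations.json."""
    def __init__(self, annotation_file):
        self.items = json.load(open(annotation_file))  # [{"image": path, "boxes": [[x1,y1,x2,y2], ...]}, ...]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        item = self.items[idx]
        image = to_tensor(Image.open(item["image"]).convert("RGB"))
        boxes = torch.tensor(item["boxes"], dtype=torch.float32)
        target = {"boxes": boxes,
                  "labels": torch.ones((len(boxes),), dtype=torch.int64)}  # one class: munition
        return image, target

# Pretrained detector, its head replaced for two classes (background + munition).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

dataset = MunitionDataset("annotations.json")
loader = torch.utils.data.DataLoader(dataset, batch_size=2, shuffle=True,
                                     collate_fn=lambda batch: tuple(zip(*batch)))
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

model.train()
for epoch in range(10):                      # over many iterations the detector improves
    for images, targets in loader:
        losses = model(list(images), list(targets))
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```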

VFRAME needs technical teamwork such as that described above to create an AI model which can subvert the intention to keep war atrocities and involvements hidden from view. For this, data needs to be gathered as well as labelled. In areas in which there is no training data, computers remain blind. ‘Training datasets are the lifeblood of artificial intelligence’, Adam Harvey wrote in his essay on ‘Computer Vision’. 45 ‘They are so vital to the way computer vision models understand visual input that it may be helpful to reconsider the algorithms as data-driven code …’, he went on to explain. The existence of a dataset can make the difference between what can be detected, seen and interpreted and what cannot. It is here that AI models leave room to subvert the capitalist way of seeing. Projects like VFRAME, and also Forensic Architecture’s project ‘Triple Chaser’ to name another example, 46 intervene by creating data.

For us, another politics of AI is at stake in VFRAME, which entwines the critique of power and the critique of labour. While the discourse of evidence and documentation framed in legal terms is also present, as Martina Tazzioli and Daniele Lorenzini have argued about Forensic Architecture, we are particularly interested in the politics of collective organisation and labour. 47 It can be read as a form of counter-power emerging from the motley collective of international activism. At the same time, it also recasts and renders visible the composition of labour. Interfering in the existing data economy that follows capitalist aims, their AI models detect aspects that ruling interests would have preferred to remain hidden. These projects show that in a world administered by algorithms, it does matter what the algorithms can do. And they also show that ‘another AI’ is possible. Behind the hype about automation through AI models one finds the much more real politics of datasets deciding what can be detected, and what can remain unseen. Or in Adam Harvey’s words: ‘Becoming training data is political’.

Despite Marcuse’s concerns that the ruling interests are projected into the apparatus of technological domination, VFRAME explicitly configures AI as a political intervention. In that sense, Marcuse’s distinction between technology and technics is helpful here to render ‘another politics of AI’ and to differentiate its material-functional aspect (technics) from its ideological framework (technology). As we have seen, unlike technological control and domination, technics are part of technological rationality but they ‘can promote authoritarianism as well as liberty, scarcity as well as abundance, the extension as well as the abolition of toil’. 48 Building on the long history of feminist engagement with technology, Helen Hester has more recently cautioned against the work of foreclosure: it is important to reclaim and reposition technical practice as ‘one potential sphere of activist intervention’. 49 

Interventions such as VFRAME show that it is possible to assemble AI as subversive technics and as a critical technical practice that moves the field towards another politics of AI. Instead of reiterating futures of AI catastrophe, which reify the power of professionals who can guard the lines between human and machine, military and science, production and destruction, another politics of AI emerges at the interstices of political struggles across borders, efforts at organising and developing common infrastructures away from tech corporations, and collective contributions to data. These AI politics intervene in capitalist violence by performing a labour of subversion in the present, dismantling forms of contemporary domination.

This labour of subversion that mobilises the ambivalence of technics does not mean that we should stop debating AI-powered weapons. Rather, returning to the researchers’ letter to the German government with which we started, it means that left analyses of AI need to hold together power, labour and domination. AI-powered weapons materialise the destructive productivity of AI. They thrive on the labour of unpaid, underpaid and displaced populations around the world and intensify hierarchies of humanity. Examples like VFRAME show that another AI, one that cuts through the capitalist ideological framework thriving on misery, unfolds in the here and now.

Notes

  1. ‘AI researchers call upon new German government to back autonomous weapons treaty’ (2021), https://autonomousweapons.org/ai-researchers-call-upon-new-german-government-to-back-autonomous-weapons-treaty/. ^

  2. Sylvia Wynter, ‘Unsettling the Coloniality of Being/Power/Truth/Freedom: Towards the Human, after Man, Its Overrepresentation – an Argument’, CR: The New Centennial Review 3:3 (2003), 272. ^

  3. Monika Reinfelder, ‘Breaking the Spell of Technicism’, in Outlines of a Critique of Technology, ed. Phil Slater (London: Ink Links, 1980): 9–37. ^

  4. Herbert Marcuse, Negations: Essays in Critical Theory (London: May Fly, 1968), 168. ^

  5. Herbert Marcuse and Douglas Kellner, Technology, War and Fascism: Collected Papers of Herbert Marcuse, Volume 1 (London: Routledge, 2004), 77. ^

  6. Herbert Marcuse, One-Dimensional Man (Boston: Beacon Press, 1964), 165. ^

  7. See Justin Joque, Revolutionary Mathematics: Artificial Intelligence, Statistics and the Logic of Capitalism (London: Verso Books, 2022), 200–01. ^

  8. Michael Chui et al., ‘The State of AI in 2021 [report]’ (New York: McKinsey, 2021), 2, available at: https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/global-survey-the-state-of-ai-in-2021. ^

  9. https://www.statista.com/statistics/1032627/worldwide-machine-learning-and-ai-patent-owners-trend/. ^

  10. National Security Commission on Artificial Intelligence (NSCAI), ‘Final Report. National Security Commission on Artificial Intelligence’ (2021), 2, available at: https://www.nscai.gov/ ^

  11. For a discussion of the impossibility of eliminating error in machine learning, see Matteo Pasquinelli, ‘How a Machine Learns and Fails: A Grammar of Error for Artificial Intelligence’, Spheres: Journal for Digital Cultures 5 (2019), 1–17; Claudia Aradau and Tobias Blanke, ‘Algorithmic Surveillance and the Political Life of Error’, Journal of the History of Knowledge 2:1 (2021). ^

  12. Alan M. Turing, ‘Computing Machinery and Intelligence’, Mind: A Quarterly Review of Psychology and Philosophy, vol. LIX/236 (1950), 433. ^

  13. Lucy Suchman, ‘Six Unexamined Premises Regarding Artificial Intelligence and National Security’, AI Now Institute, 2021, https://medium.com/@AINowInstitute/six-unexamined-premises-regarding-artificial-intelligence-and-national-security-eff9f06eea0. ^

  14. Lucy Suchman, ‘AI at the Edgelands: Data Analytics in States of In/Security’, UC San Diego, The Design Lab, YouTube. ^

  15. Simone Browne, Dark Matters: On the Surveillance of Blackness (Durham, NC: Duke University Press, 2015). ^

  16. Priyanka Boghani, ‘Artificial Intelligence Can Be Biased. Here’s What You Should Know’, PBS, 2019, https://www.pbs.org/wgbh/frontline/article/artificial-intelligence-algorithmic-bias-what-you-should-know/ ^

  17. Wendy Hui Kyong Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (Cambridge, MA: The MIT Press, 2021), 2. ^

  18. See: datajusticelab.org; ajl.org; thejustdatalab.com; dair-institute.org. ^

  19. Coding Rights, https://linkme.bio/codingrights/. ^

  20. Amanpour and Company, ‘Fmr. Google Insider on Whistleblowers, Unions and AI Bias’, 16 December 2021, YouTube.com. ^

  21. Colin Madland, ‘A faculty member has been asking how to stop Zoom from removing his head when he uses a virtual background. We suggested the usual plain background, good lighting etc, but it didn’t work. I was in a meeting with him today when I realised why it was happening’, 19 September 2020, Twitter. ^

  22. Richard Dyer, White: Twentieth Anniversary Edition (London: Routledge, [1997] 2017), 89. ^

  23. Ramon Amaro, ‘As If’, e-flux (2019), https://www.e-flux.com/architecture/becoming-digital/248073/as-if/ ^

  24. Karl Marx, Capital, Volume One (London: Lawrence and Wishart, 1986 [1867]). ^

  25. Big Data LDN, ‘To Intelligence … and Beyond’, 13–14 November 2019, accessed 22 July 2021, https://bigdataldn.com/. ^

  26. Neda Atanasoski and Kalindi Vora, Surrogate Humanity: Race, Robots, and the Politics of Technological Futures (Durham, NC: Duke University Press, 2019). ^

  27. Phil Jones, Work without the Worker: Labour in the Age of Platform Capitalism (London: Verso, 2021). ^

  28. Mark Graham and Mohammad Amir Anwar, ‘The Global Gig Economy: Towards a Planetary Labour Market?’ First Monday 24:4 (2019). ^

  29. Brian Menegus, ‘Thousands of Google Employees Protest Company’s Involvement in Pentagon AI Drone Program’, Gizmodo (4 April 2018), https://gizmodo.com/thousands-of-google-employees-protest-companys-involvem-1824988565. ^

  30. Avie Schneider and Laura Sydell, ‘Microsoft Workers Protest Army Contract with Tech “Designed to Help People Kill”’, NPR, 22 February 2019. ^

  31. https://notechforice.com/about/. ^

  32. Jones, Work without the Worker. ^

  33. Verónica Gago, Feminist International: How to Change Everything (London: Verso, 2020), 257. ^

  34. International Labour Organisation, ‘Digital Refugee Livelihoods and Decent Work. Towards Inclusion in a Fairer Digital Economy’ (Geneva: International Labour Organisation, 2021). ^

  35. Ibid. ^

  36. Ibid. ^

  37. Nick Srnicek, ‘Data Compute Labour’ in Digital Work in the Planetary Market, eds. Mark Graham and Fabian Ferrari (Cambridge, MA: MIT Press, 2022), 241–262. ^

  38. Leif Weatherby and Brian Justie, ‘Indexical AI’, Critical Inquiry 48:2 (2022), 381–415. ^

  39. Emma Strubell, Ananya Ganesh and Andrew McCallum, ‘Energy and Policy Considerations for Deep Learning in NLP’, arXiv preprint arXiv:1906.02243 (2019); David Patterson et al., ‘Carbon Emissions and Large Neural Network Training’, arXiv preprint arXiv:2104.10350 (2021). ^

  40. Patterson et al., ‘Carbon Emissions and Large Neural Network Training’. ^

  41. Evelyn Nakano Glenn, ‘From Servitude to Service Work: Historical Continuities in the Racial Division of Paid Reproductive Labor’, Signs: Journal of Women in Culture and Society 18:1 (1992), 1–43. ^

  42. VFRAME: https://vframe.io/ ^

  43. Mnemonic has also built archives from Yemen and Sudan. ^

  44. United Nations, ‘Convention on Cluster Munitions’, https://www.un.org/disarmament/convention-on-cluster-munitions. ^

  45. Adam Harvey, ‘On Computer Vision’, Umbau 1 (2022), https://umbau.hfg-karlsruhe.de/posts/on-computer-vision ^

  46. Forensic Architecture, Triple Chaser (2019), forensic-architecture.org/programme/exhibitions/triple-chaser-at-the-whitney-biennial-2019. ^

  47. Martina Tazzioli and Daniele Lorenzini, ‘Critique without Ontology. Genealogy, Collective Subjects and the Deadlocks of Evidence’, Radical Philosophy 2.07 (2020), 27–39. ^

  48. Marcuse and Kellner, Technology, War and Fascism: Collected Papers of Herbert Marcuse, 41. ^

  49. Helen Hester, Xenofeminism (Cambridge: Polity, 2018). ^