The Fallacies of AI

Mathew Black
8 min read · Sep 22, 2019
Adolph Menzel. The Iron-Rolling Mill (Modern Cyclops) 1872–1875. Berlin. Alte Nationalgalerie.

Despite the inundation of conferences, headlines, TED Talks and controversies surrounding Artificial Intelligence over the past several years, most of what gets published about AI and spreads through mainstream media and social networks does not go beyond the techno-utopian hype promoted by Silicon Valley or the apocalyptic click-bait headlines foretelling mass unemployment and human uselessness. Yet Artificial Intelligence is, in essence, only a form of automation, one of many within the development of machinery under capitalism, reducing costs and increasing productivity.

Few technological developments… have elicited as many contradictory emotions as automation. It has been so since Henry Ford perfected the assembly line in the early 20th century. Automation has progressed steadily since then in factories, back offices and shop fronts, but these emotions are bubbling up again as intelligent, data-driven automation technologies make their presence felt.

These “contradictory emotions” stem from the fear that automation seeks to remove human labor wherever possible or reduce it to a supplementary role when not. For the working class, therefore, automation is a constant, looming threat while for managers, CEOs and owners of capital, automation is simply a means for “adding value to the business”.

In terms of information technologies, these fears “bubbled up” at least as far back as 1964, when the so-called Ad Hoc Committee on the Triple Revolution sent American President Lyndon B. Johnson an appeal to address the “cybernation revolution” which, as a combination of “the computer and the self-regulating machine”, threatened to put unprecedented numbers of people out of work. Mainstream economists generally dismiss these concerns since the conventional wisdom is that “…in the long run, technology is a net creator of jobs.

Automating a particular task, so that it can be done more quickly or cheaply, increases the demand for human workers to do the other tasks around it that have not been automated.” Or put another way, profit “does not arise from the labour power that has been replaced by the machinery, but from the labor-power actually employed in working with the machinery.”

There are, however, certain contradictions regarding AI that are often injected into public discourse by its own creators, revealing both their neoliberal affinity and their awareness of how limited the technology actually is. For example, in 2018, after the Cambridge Analytica scandal demonstrated that social media platforms could have their algorithms easily manipulated by organized disinformation campaigns to impact democratic processes, Mark Zuckerberg was called for questioning before the U.S. Senate. During the session, Zuckerberg insisted (24 times throughout the three-hour session) that AI can solve all of the problems that worried the senators and society at large. Interestingly, he also referenced hiring 20,000 people to work on “security and content review” because “[S]ome problems lend themselves more easily to AI solutions than others.” To put this in perspective, the total Facebook operation employs a little under 40,000 full-time employees worldwide. In other words, it hired the equivalent of half its total workforce for the kind of content moderation that Artificial Intelligence cannot provide, because understanding linguistic nuance is beyond what it can reliably do.

Artificial Intelligence is basically a prediction engine that can autonomously scan datasets in order to calculate possible outcomes. It communicates the results of those calculations in different ways depending on the application at hand: an estimated time of arrival in one case, ranked results for a search query in another. Undeniable advances in computation have led Silicon Valley to claim success in developing something similar to human intelligence, but in fact what it has done is merely hijack the name of a scientific field of study to market the practical application of advanced calculation tools. According to Noam Chomsky, this “new AI […] focuses on using statistical learning techniques to better mine and predict data [and] is unlikely to yield general principles about the nature of intelligent beings or about cognition.” AI tools simply calculate, and calculation does not equal intelligence. The fact that Silicon Valley insists on equating the two in the minds of the public does not make it so. To paraphrase John Searle, insofar as we can create machines that carry out computations, those computations are not thoughts, and the machines cannot execute cognitive processes, because intelligence is a biochemical process.
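The point that prediction is only calculation can be made concrete. The toy classifier below (a deliberately minimal sketch in plain Python, not any production system) “predicts” a label purely by arithmetic, measuring a new point’s distance to the average of previously seen examples, with no grasp of what the numbers mean:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def train(examples):
    """Compute one centroid (the average point) per label from labeled data."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """'Prediction' is nothing more than picking the nearest centroid."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Toy, made-up data: (weight in grams, diameter in cm) for two fruits.
training_data = [
    ([150, 7], "apple"), ([170, 8], "apple"),
    ([10, 2], "grape"), ([12, 2], "grape"),
]
centroids = train(training_data)
print(predict(centroids, [160, 7]))  # prints "apple"
print(predict(centroids, [11, 2]))   # prints "grape"
```

The program “recognizes” an apple only in the sense that 160 is closer to 160 than to 11; swap the numbers and it will confidently mislabel everything, which is the frailty the essay describes.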

Zuckerberg’s adherence to Silicon Valley’s marketing strategy, which promises everything to everyone through an all-knowing AI, is understandable. In the US, while investment in startups in general has been relatively stagnant, investment in AI startups increased “exponentially”, by 113%, between 2015 and 2018. As for venture capital investment in AI, the growth is even more dramatic, with an increase of 350% in just four years, between 2013 and 2017. Silicon Valley has chosen AI as its brand, and considering the money at stake, it is no surprise the wizard does not want the curtain pulled back. If it were, we would see AI in all its algorithmic frailty, unable to comprehend the difference between a joke and a threat and incapable of understanding the dangerous degree of faith and responsibility its human creators have placed in its ability to “mine and predict”. We are therefore faced with a dual fallacy. On the one hand, AI is incapable of dealing with the daily problems that have arisen from the constant digitalization and datafication of human life and, in many cases, only complicates matters further through the biases built into its calculations. On the other hand, when AI is used, it requires human labor to awaken it from the dead, as Marx might say, just like any other tool or piece of machinery.

Spread out across the globe, the exact number of people working behind the curtain is unknown. There are, broadly speaking, two different categories: content moderators and data labellers, although it is unclear how often these two overlap or mean the same thing.

“The worst-kept secret to Silicon Valley’s AI push,” says a recent article in the Washington Post, “has been the tech industry’s army of […] low-wage contract workers [who] spend their days screening posts for offensive or disturbing content, indirectly helping train the artificial intelligence program on what problems and patterns to look for.”

Located in India, the Philippines, China and many cities in the U.S., the content moderators directly or indirectly working for the big social media companies are estimated to number between 30,000 and 50,000 people. The content they view is so disturbing that many do not last. The high turnover rate plays a big part in keeping wages low, limiting benefits and, in many cases, providing insufficient psychological support or other coping mechanisms.

Since the data-related scandals involving Facebook and other companies have brought to light the existence of and need for these moderators, many organizations have published important and devastatingly revealing journalistic accounts of what the work involves. Take a moment to imagine all the evil, heartbreaking and tragic things that happen daily. Sexual abuse, murder, hate, animal abuse, gruesome images of war; inexplicably, much of it is voluntarily shared through social media. In order for you not to see this content, someone else must. A content moderator may start the day by watching a video of a man being stabbed to death. She then has to explain why that video does or does not violate Facebook’s content policy and whether or not it should be removed. Another, employed by a different contractor, has to determine if the grown man touching a child’s exposed genitals is doing so by accident or on purpose, while yet another, employed by a third contractor, will review footage of the Pulse nightclub shooting in Orlando. It is probably true, as the social media companies claim, that the content requiring moderation is not always like this and often does not go beyond bullying, hate speech or conspiracy theories. Perhaps it is also true that moderators earn above the minimum wage in each of the countries where they are employed and that they are provided adequate counseling to cope with the material they work with. But just how many posts containing pedophilia, suicide or murder can be considered manageable? And what exactly is a reasonable wage to filter the global sewer of human depravity, fear and sadness?

The world of “data farms” seems to be much more varied, with contractors offering to create datasets for facial recognition, self-driving vehicles, health care and any other application of artificial intelligence. This labeling is necessary because, as we saw above, software does not have any cognitive ability; it can only compute, so for an algorithm to identify an apple, “it needs thousands to millions of pictures of apples.” AI’s limitations have therefore created the need for a whole new category of worker, thereby fulfilling, albeit somewhat ironically, Silicon Valley’s techno-utopian claim that automation creates new jobs. In “India, China, Nepal, the Philippines, East Africa and the United States, tens of thousands of office workers are punching a clock while they teach the machines.” The exact number of data labellers is difficult to pin down, and there are “many thousands more” employed independently through Amazon’s Mechanical Turk and other crowdsourcing services. There are more to come. In the drive to automate, the adoption of AI by businesses across all industries will allow the market for data labelling services to grow from a $150m market in 2018 (employing “hundreds of thousands”) to more than $1bn by 2023. Unfortunately, not all data labelling is as benign as teaching algorithms to recognize apples. Many of the workers in this new extension of the artificial intelligence industry find the medical videos, pornography or violence they are exposed to disturbing, and though some quickly abandon their jobs, many cannot: “[F]or those of us who cannot afford to not go back to the work, you just do it.”
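Because any single worker’s label can be wrong, crowdsourcing pipelines typically ask several workers to label the same item and then consolidate the answers. A common, simple consolidation rule is majority voting, sketched below in plain Python (an illustration of the general technique, not the pipeline of any particular company or platform):

```python
from collections import Counter

def majority_label(worker_labels):
    """Consolidate one item's labels from several workers by majority vote.

    Ties fall to whichever label was seen first — a simplification; real
    pipelines may weight workers by past accuracy or request more labels.
    """
    counts = Counter(worker_labels)
    return counts.most_common(1)[0][0]

# Three workers label the same image of a fruit; the majority answer wins.
print(majority_label(["apple", "apple", "pear"]))        # prints "apple"
print(majority_label(["cat", "dog", "cat", "cat"]))      # prints "cat"
```

The rule is trivial, but it makes the economics visible: every “AI-labeled” image here costs three units of human attention, which is exactly the hidden labor the essay is describing.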

The image of hundreds of thousands of people working hours on end feeding AI the data it needs and “training” it to correctly differentiate between a cat and a car, between real violence and simulated violence, is reminiscent of the human labor needed in nineteenth-century factories to set their giant machines in motion. The innovative application of science and technology to create new profits will always require human labor. Nor is this the first time capital has used deliberate contradictions in its rhetoric to obscure this fact. As we have seen, automation is presented both as a means to increase productivity and as an opportunity to create new jobs in areas that have not yet been automated. This standard belief among free-market advocates is in itself debatable, and it is straightforwardly contradicted by those same advocates when automation is simultaneously used as a threat against organized labor and its demands. Behind the hype and the code, capital’s discourse remains unchanged: “Do not fear automation, it will improve your lives and create new jobs. But don’t ask for increased wages, better working conditions or a shorter working day, because your job may be automated next.”
