Behind AI, an army of precarious workers from Hollywood

The testimony published by Wired offers an unusually concrete glimpse behind the scenes of generative artificial intelligence. Its author, a Hollywood screenwriter and showrunner, recounts how, after the 2023 strike and the lasting slowdown in the American television industry, she turned to AI model training contracts to pay her rent and everyday expenses. What had been presented to her as easy money quickly turned into a succession of unstable, poorly supervised and often exhausting tasks.

Her story highlights a reality often forgotten in enthusiastic talk about AI: behind chatbots, image generators and recommendation systems, thousands of human workers evaluate, annotate, correct, classify and test the responses produced by models. They must judge an assistant’s tone, describe videos accurately, identify objects in images, write test scenarios or spot security vulnerabilities, all in an environment where speed, constant availability and obedience to instructions seem to count as much as expertise.

The screenwriter says she worked for several platforms specializing in this market, including Mercor, Outlier, Turing, Handshake and Micro1. The initial promises may have seemed attractive, with high hourly rates for qualified profiles. But in her experience, contracts come with no guarantees, projects start late, end without notice, and workers can be kicked out of a Slack channel or a platform overnight. In this system, they are not considered employees but “taskers”, performers of microtasks.

One of the most revealing passages concerns the contradiction at the heart of this model. Platforms tout the freedom to work when you want, according to your availability. But in practice, tasks can appear at any time, be limited in number and disappear very quickly. Workers must therefore stay connected, monitoring their email, Slack and internal dashboards, sometimes late in the evening or in the middle of the night. Those who do not react quickly enough simply risk earning nothing.

The testimony also describes a strongly hierarchical system, but one without real stability. Very young team leaders supervise professionals who are often older and more experienced, coming from film, television, teaching, journalism or other weakened sectors. Instructions change, evaluation criteria remain unclear, ratings drop without explanation, and workers must constantly complete new training, often unpaid. The motivational vocabulary (badges, scores, rankings, enthusiastic messages and calls to “finish strong”) hardly masks the precariousness of the situation.

This story joins a broader issue: AI not only replaces certain workers, it also transforms others into invisible labor serving the machines. In the case of Hollywood, the irony is brutal. Screenwriters who went on strike to prevent studios from replacing them with AI find themselves, a few months later, training the systems likely to further weaken their profession. Their narrative expertise, their sense of language and their ability to judge a scene become resources used to improve automated models.

The article also recalls that this economy is based on a great asymmetry. AI companies and their contractors can quickly mobilize thousands of independent workers, adjust pricing, pause projects, and move tasks from one platform to another. Workers, for their part, endure uncertainty, unpaid waiting periods, tool changes, availability requirements and the absence of protections usually associated with salaried employment. Wired also reports that several lawsuits have been filed against Mercor, alleging misclassification of certain workers as independent contractors.

Beyond the personal case told in Wired, this testimony raises a central question for the future of digital work: to what extent can we automate without recognizing the human labor that makes this automation possible? AI models are often presented as autonomous systems, capable of learning from gigantic volumes of data. But their refinement still largely depends on people who evaluate, correct and supervise their behavior, sometimes in conditions resembling a constant race against the clock.

For the cultural industries, the signal is particularly worrying. AI is not arriving merely as a creative or productivity tool. It is entering an already weakened labor market, where qualified professionals accept unpredictable contracts because their original sector no longer offers them enough security. The risk is therefore not only that machines will produce more content. It is also that creative professions will be broken down into fragmented, timed, evaluated and underpaid tasks.

The testimony published by Wired has the force of a personal story, but it goes far beyond the case of an American screenwriter. It shows an AI economy built on a promise of modernity, while taking up certain very old reflexes of precarious work: permanent availability, absence of security, pressure to perform and unilateral power of the employer or the contractor. Behind the smooth image of digital assistants, there are still humans. And, too often, they are the ones the system makes interchangeable.

Source: Wired
