It takes a certain irony for 1984 to be removed from a school library, in 2026, by a tool presented as rational. At the Lowry Academy in Greater Manchester, an Index on Censorship investigation claims that an audit of the library led to the removal of more than 130 titles, nearly 200 if the individual volumes of certain graphic novel series are counted. Among them: the graphic novel adaptation of Orwell’s 1984, Twilight, Heartstopper, Becoming by Michelle Obama, and Why I’m No Longer Talking to White People About Race.
The most troubling aspect is not just the number. According to documents reviewed by Index, the school acknowledges that the categorizations used to justify the removals were generated by an AI, which it deemed “broadly accurate.” The librarian also paid the price, facing disciplinary proceedings before ultimately resigning. The school’s administration, as reported by the Manchester Mill, denies any blanket ban and maintains that, after the audit, books were merely reclassified by age, with only “a very small number” removed.
A policy entrusted to the machine
This is the heart of the problem. A collection policy is not a compliance spreadsheet. It requires reading, an understanding of audiences, maturity levels and pedagogical contexts, and, above all, a responsibility someone is willing to assume.
When a novel is disqualified for “sexual tension,” an essay on incel culture for “development of misogynistic beliefs,” or an autobiography for its “political themes,” the algorithm does not provide expertise; it industrializes an institutional caution that borders on censorship. It replaces reasoned judgment with an automated taxonomy, debatable by nature yet neutral in appearance.
Widening the lens means looking elsewhere. In 2023, the Mason City school district in Iowa tasked ChatGPT with identifying, among fifty titles, those containing sexually explicit descriptions; 19 books were removed to comply with a new state law. The American Library Association described it as the first known case of a district using AI to decide which books to pull from school libraries.
In 2025, Texas offered a different picture. According to the Houston Chronicle, the Pearland district reported 57 titles flagged by ChatGPT as potentially violating state law SB 13. The Chronicle also mentioned other districts using internal or third-party tools to identify controversial titles.
The shift matters: the AI does not always decide on its own, but it introduces a pre-screening, an automated inspection that turns a book into a potential risk before any professional has a chance to defend it.
Algorithmic neutrality, administrative fiction
Proponents of these methods cite lack of time, legal uncertainty, and political pressure. The reasons are real, but they do not justify everything. In Iowa, officials said they acted in the absence of clear guidance, to comply with the law before the school year began. Such justifications reveal not the relevance of the tools but a transfer of responsibility: faced with vague standards, the administration delegates to the machine the task of manufacturing an appearance of objectivity.
Yet this objectivity is a facade. The Brennan Center for Justice warned as early as 2023 that using generative tools to enforce book bans makes these laws more dangerous, allowing decision-makers to hide behind a veneer of neutrality while administering overly broad lists of supposedly inappropriate titles. The bureaucratic machinery seen in Greater Manchester bears out that warning: the AI prunes, the institution signs off, and everyone retreats behind the procedure.
The trend goes beyond isolated cases. In December 2025, 404 Media reported that a collection management product for school libraries, Class-Shelf Plus, touts “AI-driven automation” and contextual risk analysis, with the stated ambition of cutting more than 80% of the manual checks tied to legal obligations. The market, in short, senses an opportunity: turning the fear of controversy into software solutions that offer institutions a faster way to sort sensitive collections.
A British case that did not come out of nowhere
In August 2024, Index on Censorship noted that 53% of British school librarians surveyed had received requests to withdraw books, often initiated by parents. In other words, the algorithm does not invent the urge to purge; it accelerates it, lending it a pseudo-technical vocabulary and a new scale.
Entrusting selection to these tools does not modernize the library; it installs a low-key censorship within the public book chain, all the more convenient in that it presents itself as a mere sorting operation.
Photo credits: ActuaLitté, CC BY SA 4.0
By Nicolas Gary Contact: ng@actualitte.com