Why Writers Should Pay Attention to Anthropic’s AI Controversy
The rapid advancement of AI is reshaping the creative landscape, often at the expense of writers’ rights and livelihoods. The controversy over Anthropic’s book scanning project—now back in the headlines—illustrates the urgent need for writers to pay attention to how their work is used in AI training.
Last week, The Washington Post and other major news outlets mined newly released legal filings to expose how far tech companies were willing to go in their quest to teach LLMs to write well. In pursuit of building better AI products, Anthropic executives downloaded millions of pirated ebooks before switching to a book scanning operation that destroyed an unknown number of physical books.
Yes, the book destruction looked bad—but that’s not the point
Reactions to this coverage have focused on two things: the irony (and dramatic visuals) of destroying physical books in order to create digital versions of them, and whether it was wrong for Anthropic to scan and destroy books on such a massive scale.
Overlooked in all this discourse is the potential impact on the creative economy. With millions of books affected and similar practices alleged at other tech companies, the cumulative loss of income for authors is substantial and threatens the sustainability of professional writing as a career.
‘Good writing’ that ‘an editor would approve of’
Training LLMs requires billions of words. Last year’s ruling notes that Anthropic could have trained its LLMs without using any books, but doing so would have meant paying in-house writers and engineers to cultivate examples of “good writing” that “an editor would approve of.” Purchasing physical books—which Anthropic ended up doing for its destructive book scanning project—is a legal workaround, albeit an expensive one.
The problem is that before pivoting to book scanning, Anthropic knowingly amassed at least 7 million copies of pirated ebooks as a training source for its LLMs, court documents show. The judge ruled that Anthropic’s acquisition of stolen books displaced demand for those titles.
AI is learning fast. Writers are paying the price.
Without admitting any wrongdoing, Anthropic agreed to pay a $1.5 billion settlement to publishers and authors; the settlement covers nearly 500,000 books, according to several published reports. But other tech companies stand accused of using stolen books as training fodder for LLMs, eliminating the chance for authors to earn income from millions of titles.
I don’t endorse all the AI fearmongering proliferating within publishing circles. But AI innovators are exploiting legal blind spots, underscoring the need for updated copyright laws. Tech companies argue that AI training advances knowledge and innovation, but these benefits often come at the expense of authors’ legal right to control how their work is used, sold, or shared.
The potential economic consequences for writers are profound. It’s already difficult for writers to make a living. The continued devaluation of the written word for the sake of technological advancement reinforces the idea that creative labor isn’t worth paying for. As technology races ahead, it’s vital for policymakers to demand stronger protections for creative work so that writers are fairly compensated in the age of AI.
See also:
Inside an AI start-up’s plan to scan and dispose of millions of books (Washington Post, 🔒)
Unredacted files reveal Anthropic’s ‘secret plan’ to ‘destructively scan all the books in the world’ (The Bookseller, 🔒)
Anthropic to pay authors $1.5 billion to settle lawsuit over pirated books used to train AI chatbots (The Associated Press, Aug. 2025)
Anthropic destroyed millions of print books to build its AI models (Ars Technica, June 2025); Related: Key fair use ruling clarifies when books can be used for AI training