
On Thursday, my publisher emailed me that a “major technology company involved in AI development” wanted to use my book Stories from Montana’s Enduring Frontier “for AI training purposes.” The well-crafted email felt intentionally bland, as if trying to dampen my emotional response. I would earn, it explained, $340 for “this one-time use.” I had less than a week to decide.
Artificial intelligence is creating horrific uncertainty for our society, with authors particularly vulnerable. Given that today's flagship AI products are mostly large language models (LLMs), whose chief skill is generating text, they pose a direct threat to writers' jobs. Yet while most people fulminate abstractly about AI, here I was with a clear, vivid choice.
It’s not a lot of money, $340. But that makes it useful in crystallizing the choice. If I take the offer, and the knowledge I poured into these historical essays is available from an AI, would anyone ever again buy my book? If my royalties thus fall to zero because I have effectively signed a death warrant for a book-that-is-like-a-child-to-me, is the $340 worth it?
There’s a bigger picture, of course. Slogging through most AI-generated text is like reading the legalese in a contract. Replacing a human writer with an AI is like replacing a wild raspberry with a raspberry-chemical-flavored Crystal Light. The error-filled, uncreative, BS-laden output of an AI threatens the joy and usefulness of reading, which is a central facet of my life.
Worse, the act of writing has always been about learning. When we learn to write, we learn to think. In thinking, we discover we need to read. If the read-write-think triangle is outsourced to AI, how will young people develop the skills to evaluate its results?
Worst of all, many AIs have been trained by stealing from authors. A database released in March revealed that four of my books and at least three of my articles were among the pirated works used to educate Meta’s AI. Curiously, works I ghostwrote for large corporations appeared to be absent—as if the thieves knew to target only powerless freelancers and scientists.
So why would I participate? For one thing, $340. It's not much compensation for all the work I put in, but then again neither is a royalty of $1.19 per book sold. If my main goal were adequate market compensation for my writing, I probably shouldn't have published Stories from Montana's Enduring Frontier in the first place.
The book is now 12 years old. At current sales rates, it would take a few years to make $340 in royalties. And at least I’m making something, as opposed to being stolen from. My publisher would also make money on this deal, so it might be worth rewarding them for handling this situation well. I could even donate the $340 to some organization that would save civilization from these threats (and if, as I fear, such an organization doesn’t exist, does declining the $340 really help the situation?).
When I posted my dilemma to Facebook, I got some very smart responses, but not the viral engagement I expected. I guess Facebook’s AI-driven decisions are hard to predict? Or maybe people didn’t engage because they don’t know how to think about this: Maybe, as with previous technologies, making the book available to AI will stimulate sales—or maybe not. Maybe AI will thwart young people’s ability to engage in creative or intellectual careers—or maybe it’s overhyped. Maybe AI will swallow my output without fair compensation—or maybe it already has. People will like or comment on a social-media post when they think they know something, but with AI we’re all in the dark.
My publisher wouldn't say which "major technology company" made the offer or what it intends to do with my book. Would my desire to make $340 be more justified if I knew the company was going to cure cancer? What if it wanted to contribute to the world's knowledge rather than merely help students cheat?
Such big-picture questions, I realized, reflected a distinctly human desire, not an AI one. An LLM consumes a book as data. Its model needs that data to learn to predict what the next word in a sentence should be ("could it be algorithm here? Or blockchain? Pretty please?").
It’s a bit ego-deflating to think of my book as “data.” I’d prefer it to be “knowledge” or even “wisdom” that the AI wants to suck from me. I’d prefer to think that the stories in my book are really well told, that the insights I draw from them are particularly keen, that I stitch them together in ways that make brilliant larger points. But assuming this is an LLM, it doesn’t think in such big-picture terms. It just predicts a word, and then another word, and then another.
Yet isn’t that what nature does? There’s no grand plan. No knowledge. No story with an emotionally satisfying ending. There’s just a predator looking for its next dinner. A cell looking to reproduce. A writer looking for $340. A leaf looking for sunlight. A creature, like me or you, looking for love and purpose.