Following the flood of AI-generated books and the KENP scam I discussed a couple of weeks ago, Amazon has started taking action, just as I predicted. As Good e-Reader reports, Amazon KDP has introduced new content guidelines specifically addressing AI-generated books on the platform, aiming to maintain the transparency and quality of the books available to readers.
Authors must now disclose whether their books contain AI content. Specifically, during the upload process, authors must inform Amazon of any AI-generated content (images, text, or translations) used in their book. This applies both when publishing a new book and when editing and republishing an existing one.
Amazon presented the move as part of an ongoing process:
We are actively monitoring the rapid evolution of generative AI and the impact it is having on reading, writing, and publishing, and we remain committed to providing the best possible shopping, reading, and publishing experience for our authors and customers.
Humans vs. AI vs. AI
As part of the new process, authors must select one of the following options:
- None of the text, images, or translations were generated by AI.
- Some sections were generated by AI with minimal or no editing.
- Some sections were generated by AI with extensive editing.
- The entire work was generated by AI with minimal or no editing.
- The entire work was generated by AI with extensive editing.
Failure to accurately report AI content may result in penalties or account termination in the future.
Crucially, authors who create content themselves and use AI-based tools to “edit, refine, error-check, or otherwise improve that content” don’t need to report that usage. This is great news for anyone using tools such as Grammarly (yes, I know we don’t normally consider Grammarly an AI tool, but that’s exactly what it is, especially now that it offers suggestions on your content).
Of course, authors still have to adhere to all content guidelines regardless of whether they include AI-generated or AI-assisted content.
I see this as just the first step in what is bound to be an interesting battle between scammers and Amazon. On its own, it does little to protect either authors or buyers. It’s likely setting the stage for the next step: automating the identification of AI-generated content (ironically, the best way to do this at scale is with AI tools).
Once that happens, Amazon will deal swiftly with scammers who use AI-generated content to game its KENP algorithms. When the hammer falls, I expect many titles to be forcibly removed, and many innocent authors to suffer, until Amazon fine-tunes its detection algorithms.
At the same time, these disclosures are a necessary step that may help immensely with AI detection, since current tools are woefully inadequate, producing both false positives and false negatives. Perhaps, then, Amazon will manage to solve the problem and identify AI content more accurately.
My guess is that Amazon will go after the low-hanging fruit first: authors who claim they haven’t used any AI when Amazon’s detection algorithms indicate otherwise. These could easily be 80% of the cases, so the company will target them first. It remains to be seen what effect this will have on KENP royalties, or what unintended consequences there may be.
Update Sep 19, 2023: A few days after I published this post, Amazon took the second step against AI.