AI dangers for writers: Getting sued isn’t the worst that can happen


The sudden rise of AI has brought with it many great benefits. However, with those benefits come dangers that writers should be aware of. Surprisingly perhaps, having your work used to train AI is not on this list of dangers, as that is a whole article in itself.

It is no secret that AI can be used in many industries to 10x your work output. Writing is no exception. A few well-crafted AI prompts can produce a first-draft article in the time it takes to make a cup of tea.

With such amazing tools, it seems like nothing is stopping you from producing an avalanche of pretty average articles to sell. Before you try that, it might be wise to learn the hidden dangers of selling AI-generated content.

It is said that AI is a wonderful tool to get writers started. This is simply not true. It is a tool, but one with a myriad of hidden traps. Join me as we explore the dangers of AI-generated and AI-assisted content that all writers need to be aware of.

Problematic copyright and ownership of AI content

The law is unclear

The law on who owns content generated by AI is complex. An article from the University of Portsmouth cites section 178 of the Copyright, Designs and Patents Act 1988, which includes computer-generated content within its definitions. It goes on to say that AI-generated work can be protected by copyright, but there are many complications.

Dr Kampakis reports that in America, AI-generated content cannot be copyrighted.

The U.S. Copyright Office has taken the position that creations made by non-human entities, including machines, are not eligible for copyright protection.

The Legal Implications Of AI-Generated Content In Copyright Law, Dr Stylianos (Stelios) Kampakis, TheDataScientist

This immediately means that, if your work includes AI-generated content, it has weaker protection, if it has any at all.

That is not the end of a writer’s potential copyright woes. AI models like ChatGPT are trained on mind-numbingly large quantities of existing work, work that may be subject to its own copyright protections. It is currently all but impossible to be certain that the generated content you are looking to sell is not accidental plagiarism.

However, the original sources of answers generated by AI chatbots can be difficult to trace – and they might include copyrighted works.

ChatGPT: what the law says about who owns the copyright of AI-generated content, Security and Risk blog, University of Portsmouth

It is a sure bet that editors and publishers are well aware of this hidden danger. Generally, I find, they don’t like exposure to the risk of copyright claims.

Only valid for non-commercial use?

Additionally, text and data mining (TDM) in the UK is only permitted for non-commercial purposes. ChatGPT is a massive example of such data mining. This raises the further question of whether you can even legally sell generated works. There is a reason the University of Portsmouth spends a third of its article on the question of ownership.

Yes, it could be argued that you – the prompt engineer – own the output. However, there are many other competing claims.

It could be argued that the work is owned by the owners of the training text. After all, what you have generated is a highly complex derivative work.

It could be argued that the owners of the AI model are the owners of its output. There is a strong case to be made in defence of this position.

It could also be argued that the AI itself owns the output. I don’t see such a claim doing very well in a court but as technology progresses, we will come to a point where we have to decide if these computer-powered brains are truly alive.

It could also be argued that generated content is immediately in the public domain. However, I would not want to be the one defending that argument in court.

Good luck selling a commercial article that might require a team of solicitors and developers to certify you as the clear and sole owner of the content.

Getting sued over AI content

It follows, then, that if someone else’s work can end up in your work when you use AI, you could breach copyright law without knowing it. If the owner of that work can build a case against you, a lawsuit may come your way.

There is as yet no reliable way to trace where each word or phrase in your AI content came from. This means you can never be entirely sure that a substantial part did not come from existing copyrighted works.

Many creators are angry about their work being used. It is only a matter of time before a viable plagiarism challenge is made. Getty Images reportedly took legal action against Stability AI, claiming that thousands of its images were unlawfully copied without proper permission or licensing. That is not the only legal action being taken over the legitimacy of training data.

Three of the five possible ownership arguments could also come into play here, not just from authors whose work was used as training data, but from programmers and companies claiming to own the work you are trying to sell.

Even one such claim landing could mean your work is no longer your work. Even if you merely started the work with an AI and then improved it yourself, that might not be enough. The legal principle of the “fruit of the poisonous tree” could apply: a first draft written by a computer could make the finished article or book a derivative work.

A publishers’ trade association—which includes the New York Times, the Washington Post, Disney, and NBCUniversal—is reminding members that AI tools built on their archives could break copyright laws.

Publishers’ group warns that generative AI content could violate copyright law, Morning Brew

Getting sued might not even be the biggest danger of AI-generated and AI-assisted content for writers.

AI and author reputation damage

There are three destructive ways that AI could inflict lasting reputation damage on writers. In the best case, such damage might only make you a bit of a laughing stock. At worst, it could cost you readers, work, and income.

AI reputation damage route 1: Accidental plagiarism

If you find yourself on the receiving end of a plagiarism lawsuit, it is not going to do your career any favours. Even if you win, publishers are going to think twice before accepting your work in the future.

Goodness only knows how long that reputation will be a millstone around your neck.

AI reputation damage route 2: Factual inaccuracy

When AI produces content, all it is really doing is working out which word is statistically likely to come next. Despite the realistic way tools like ChatGPT can interact with you, they have no intrinsic understanding of what they are saying.
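To see what that means in practice, here is a minimal sketch in Python of the core idea: a toy bigram model that counts which word most often follows another, then generates by always picking the likeliest next word. The tiny corpus is invented purely for illustration, and real models are vastly larger and more sophisticated, but the principle is the same: statistics, not understanding.

```python
from collections import Counter, defaultdict

# Toy illustration only (not how ChatGPT is built, but the same core idea):
# learn which word most often follows each word, then generate text by
# always choosing the statistically likeliest next word.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

# Generate by repeatedly picking the most frequent next word.
word = "the"
output = [word]
for _ in range(8):
    word = following[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))
```

The toy model produces fluent-looking word sequences without any idea what a cat or a mat actually is. Scale that up by a few billion parameters and you get prose that sounds confident regardless of whether it is true.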

It is common for AI to hallucinate entirely fictional “facts” and present them with the same confidence as it would truthful information.

You need look no further than the BBC’s report on a US legal team that had ChatGPT write their filing, a filing that cited six bogus cases. The law firm in question, Levidow, Levidow & Oberman, not only faces a damaged public reputation but disciplinary hearings to explain themselves to a very angry judge.

AI reputation damage route 3: Tried to get one past the editor

Many publications, publishers, and editors are aware of the complex minefield that is the question of copyright and AI-generated work. In the last few months, I have started to see magazine submission terms gain clauses about not using AI content.

I’m a talented writer, you might think, I can use AI and get away with it.

Not so.

Editors who are aware of the complexity and problems AI-generated content can bring them are almost certainly also aware of the tools that use AI to detect whether AI was used to write the content. AI detector tools are powerful, easy to use, and getting better all the time.
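As a rough illustration of how one common family of detectors works: text that a language model itself finds highly predictable (low “perplexity”) is more likely to be machine-generated. The sketch below, assuming the Hugging Face transformers and torch packages are installed, scores a passage with the small public GPT-2 model. Real detectors combine this signal with others, and none of them are foolproof.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a small public language model to score how predictable a text is.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity = the model finds the text more predictable,
    which is one (imperfect) signal of machine-generated prose."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

print(perplexity("The cat sat on the mat."))           # very predictable
print(perplexity("Moonlight argued with my kettle."))  # far less predictable
```

A detector does not need to prove anything in court. A suspicious score is all it takes for an editor to stop buying from you.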

If the editor discovers that you have been selling them AI content, they are likely to take a dim view of your antics. You can probably expect to get no more work from an editor who caught you breaking the magazine guidelines. And that’s assuming that the magazine’s legal team don’t take a long hard look at you and any potential liability you set them up for.

You don’t get paid but they use it anyway


While unlikely, there is nothing stopping a shady publisher from arguing that as you did not write the work, they don’t need to pay you.

Sure, the law is unclear in the UK on who owns AI-generated work, but the US is pretty clear. In America, AI-generated content is not eligible for copyright protection. American publishers don’t need to pay or even acknowledge you and your AI-generated content.

When it comes to getting paid overseas with your AI-assisted content, you face a die roll on getting accepted and another on getting paid.

You could lose ownership if you even had it to start with

Your ownership of generated content is not a given.

The UK government recently carried out a consultation on AI and copyright. Two conflicting views emerged. The tech sector believes the copyright to AI-generated content should belong to users, whereas the creative sector wants this content to be excluded from ownership completely. The UK government has not acted on the findings and instead recommended further consultation between the interested parties.

ChatGPT: what the law says about who owns the copyright of AI-generated content, Security and Risk blog, University of Portsmouth

AI is such a new technology that the law is having a hard time keeping up. It is yet to be decided who owns any generated content.

After a consultation, the UK government concluded that it was too soon to say who owned generated content and that the subject must be revisited as the technology matures.

While you might be able to claim ownership now, that could be taken away from you at any time. All it takes is a shift in the law down the road. None of us can see the future, and in that future your ownership of computer-generated content may suddenly end.

Existential threats of AI for writers

An often overlooked threat of AI for writers is far more existential: a threat to your writing and to your ability to write.

That sounds outlandish but give me a few minutes to explain.

Large Language Models like ChatGPT are little more than a statistical distillation of all their training text. This lets them work out what, on average, might come next. The output is, at best, merely average, and if the output sits at the middle of the distribution, then roughly half of all written work is better than what you get.

This comes with problems of its own. Merely average writing offers only a dilution of your voice as a writer. When you use generative models, you exchange quality for speed.

Surely, you might say, I’m a strong writer and can fix up the text.

Yes, I reply. You can certainly try. However, the quality of a house depends on the foundation upon which it sits, and your writing deserves a better foundation. You may even find it takes less effort to plan and write the article yourself from scratch than to fight the drag of nearly-okay computer-generated content.

I would also suggest that you risk slower growth as a writer. Instead of crafting one fine article (or book) after another, you spend time dragging an okay text up to some sort of professional standard. While you do that, the mental muscles you use for first drafts atrophy.

To me, the risks of using AI content hardly seem worth the relatively limited upsides. One day, maybe, there will be a computer that can write as well as you do, in the style that you write. Until then, the only one who can write like that is you. Don’t rob the world of your writing for a short-term advantage.

Conclusions and questions

At the end of the day, generative AI like ChatGPT is a tool. Tools are not of themselves good or bad. There are plenty of great ways to use ChatGPT and other tools. Writing articles just is not one of them.

In this post, we looked at some of the legal and business threats of using AI as a writer. We saw that the law is uncertain and that publishers would rather not have writers submit computer-generated content. Using such tools to write content for submission carries a threat to reputation and a threat to getting paid.

For now, at least, writers should not use AI to do the writing.

Have you used generated content as part of your professional writing?

Do you know someone who tried to pass off AI writing as their own?

Does the rise of AI worry you?

I want to hear your thoughts. Use the comments section below.


About Matthew Brown

Matthew is a writer and geek from Kent (UK). He is the founder and current chair of Thanet Creative as well as head geek for Author Buzz. His ambitions include appearing in some future incarnation of TableTop with Wil Wheaton and seeing a film or TV series based on something he wrote. Matt is also responsible for fixing stuff here when it breaks.
