winther blog

Various arguments for and against AI

Sean Goedecke wrote a post titled "Is using AI wrong? A review of six popular anti-AI arguments" that goes through common anti-AI arguments. I am occasionally one of the people who make these sorts of arguments, so I appreciate well-articulated counterarguments that can add nuance to my world view, and his arguments are reasonable and fairly presented. Please read Sean's entire post, as I only have some select quotes and comments.

On the issue of use of copyrighted training material:

The pro-AI counter-argument here is that when GPT-4o learns from a copyrighted book, it’s the same as when a human learns from it. If I read Freakonomics with the intention of learning from it and explaining the things I’ve learned to other people (who have not bought the book themselves), that’s not immoral. Why then would it be worse for GPT-4o to “read” Freakonomics with the same goal?

I don’t think it’s a good argument to say “because OpenAI is trying to make money”. I might read Freakonomics in order to make better financial decisions and make money, and that wouldn’t be wrong. Likewise, I don’t think it’s a good argument to say “because GPT-4o is a derivative work of Freakonomics”. Maybe it’ll eventually be a successful legal argument, but I don’t buy it from a moral perspective.

First of all, we know that the training material included huge amounts of pirated books, so the models aren't just learning secondhand from sources that have read those books. In addition, I think there is an element of scale that makes these comparisons to how regular people learn and process information not quite hold up. Training these models involves trying to process almost every piece of text-based material ever produced and feeding it into a statistical model. That is not in any meaningful way comparable to how humans learn things. I don't think big companies can compare their business model to what individuals do, and we have plenty of laws and regulations that apply to businesses and not to people, because it doesn't make sense to hold them to the same rules and ethics.

He does continue with:

To me, the most compelling anti-AI argument is that a world in which ChatGPT is universally available is a world in which fewer people have the reason to buy and read Freakonomics at all. If you can ask GPT-4o to explain any idea for free, you’re less likely to go and buy books that explain those ideas. And that just seems unfair. Why should OpenAI be able to base their entire business model on a system of experts-writing-books, when their business model will destroy that system and those experts’ livelihoods?

That I completely agree with, and I think it is fundamentally what most critics are responding to; whether the training method technically counts as stealing is just a semantic distraction.

He also goes into whether AI will prevent humans from learning, and ends up on a tangent that I found thought-provoking:

I will admit it does seem pretty inhuman when someone says that they got ChatGPT to write a nice note for their children or their spouse. But I don’t know if that’s any worse than someone buying a nice card with some pre-written text inside it, or even someone quoting a nice piece of poetry that they didn’t write themselves. Isn’t it the thought that counts?

I have previously written about how I feel that AI removes the human element when it is used, and with this example I think some thoughts count more than others. If I want to give my wife flowers, there is more to the gesture than the actual flowers. It can be a surprise where I go to a flower store and they help me pick something out, or I could pay a subscription service to automatically send flowers on a set schedule. The second approach definitely feels very impersonal, and I would say that even just going to a store to choose and buy a pre-written card beats having ChatGPT write one.

On the huge topic of AI and its effects on the art world, Sean acknowledges the major issue:

One reason to think AI is bad for art is that it may be destroying the livelihoods of artists. There are many independent artists who exist on commissions - people paying them to draw specific things - and many of those clients are now asking multimodal models like GPT-4o to generate those images instead. AI-generated images have their own problems, but it’s hard to compete with the fact that they’re free and near-instant.

I think there are good arguments presented here, and while I am generally critical of the whole AI hype, it definitely isn't all trash and some anti-AI arguments miss the point. What I think everyone should acknowledge is that this technology is such a big game changer that the usual comparisons only go so far. The new paradigm demands a rethinking of how copyright law works, for example, and the debate should be about more than what is merely technically legal. The scale at which these models are being applied, and the way they scrape the entire internet, force us to think differently about the ethics and unwritten rules of the free internet and the value of art and knowledge.

#ai #links