Various arguments for and against AI
Sean Goedecke wrote a post titled "Is using AI wrong? A review of six popular anti-AI arguments" that goes through common anti-AI arguments. I am occasionally one of those people who make these sorts of arguments, so I appreciate well-articulated counterarguments that can add nuance to my world view, and his arguments are pretty nuanced and reasonable. Please read Sean's entire post, as I only have some select quotes and comments.
On the issue of use of copyrighted training material:
The pro-AI counter-argument here is that when GPT-4o learns from a copyrighted book, it's the same as when a human learns from it. If I read Freakonomics with the intention of learning from it and explaining the things I've learned to other people (who have not bought the book themselves), that's not immoral. Why then would it be worse for GPT-4o to "read" Freakonomics with the same goal?
I don't think it's a good argument to say "because OpenAI is trying to make money". I might read Freakonomics in order to make better financial decisions and make money, and that wouldn't be wrong. Likewise, I don't think it's a good argument to say "because GPT-4o is a derivative work of Freakonomics". Maybe it'll eventually be a successful legal argument, but I don't buy it from a moral perspective.
First of all, we know that the training material has included huge amounts of pirated books, so the models aren't just learning from other sources that have read those books. In addition, I think there is an element of scale that makes these comparisons to how regular people learn and process information not quite hold up. Training these models involves processing almost everything of text-based material that has ever been produced and feeding it into a statistical model. That is not in any meaningful way comparable to how humans learn. I don't think big companies can compare their business model to what individuals do; we have plenty of laws and regulations that apply to businesses and not to people, because it doesn't make sense to hold them to the same laws and ethics.
He does continue with:
To me, the most compelling anti-AI argument is that a world in which ChatGPT is universally available is a world in which fewer people have the reason to buy and read Freakonomics at all. If you can ask GPT-4o to explain any idea for free, you're less likely to go and buy books that explain those ideas. And that just seems unfair. Why should OpenAI be able to base their entire business model on a system of experts-writing-books, when their business model will destroy that system and those experts' livelihoods?
That I completely agree with, and I think it is fundamentally what most critics are responding to; whether the training method is technically stealing or not is just a semantic distraction.
He also goes into whether AI will prevent humans from learning and ends up on a tangent that I found thought-provoking:
I will admit it does seem pretty inhuman when someone says that they got ChatGPT to write a nice note for their children or their spouse. But I don't know if that's any worse than someone buying a nice card with some pre-written text inside it, or even someone quoting a nice piece of poetry that they didn't write themselves. Isn't it the thought that counts?
I have previously written about how I feel that AI removes the human element when it is used, and with this example I think some thoughts count more than others. If I want to give my wife flowers, there is more to that gesture than the actual flowers. It can be a surprise where I go to a flower store and they help me pick out something, or I could pay for a subscription service that automatically sends flowers on a set schedule. The second approach definitely feels very impersonal, and I would say that even just going to a store to choose and buy a pre-written card beats having ChatGPT write one.
On the huge topic of AI and its effects on the art world, Sean acknowledges the major issue:
One reason to think AI is bad for art is that it may be destroying the livelihoods of artists. There are many independent artists who exist on commissions - people paying them to draw specific things - and many of those clients are now asking multimodal models like GPT-4o to generate those images instead. AI-generated images have their own problems, but it's hard to compete with the fact that they're free and near-instant.
I think there are good arguments presented here, and while I am generally critical of the whole AI hype, it definitely isn't all trash, and some anti-AI arguments miss the point. What I think everyone should acknowledge is that this technology is such a big game changer that the various comparisons only hold to a certain degree. The new paradigm demands a rethinking of how copyright law works, for example, and the debate should be about more than what is merely technically legal. The scale at which these models are applied, and how they scrape the entire internet, forces us to think differently about the ethics and unwritten rules of the free internet and the value of art and knowledge.