Why the arguments against AI are so confusing

Introduction

I was chatting with a normal friend recently and he seemed confused by some of the arguments against AI he had heard. By ‘normal’ I mean someone who hasn’t been adjacent to the AI discussion for the last decade, watching how the arguments have grown and multiplied over that time. When he asked me1 for an explanation I realized the argument was actually two different arguments made by two different groups. The number of arguments, and of groups making them, has grown over time; if you’ve been near the space you’ve had time to acclimate.

This is a brief explanation, for normal people, of the groups and their arguments.

Groups arguing against AI

It’s easiest to understand why there is so much confusion in the anti-AI arguments when you realize there are actually five separate groups making overlapping sets of arguments, but with wildly different concerns, languages and inception dates. Once you can identify which group(s) the arguer is in, it’s much easier to understand and evaluate the argument.

Pre-2020/GPT-2

AI safety

This group was mainly concerned that, in the long term, AI will have agency/intelligence we can’t understand, predict or control, and will harm humanity because it will be aligned with different goals. Associated with rationalists/effective altruists/people in tech. If this sounds like science fiction, be aware that OpenAI was formed by people worried about this. The writing style is very long, well-thought-out posts on LessWrong.

AI ethics

Mainly concerned with the fairness and bias of AI and how it impacts different groups differently. Most were previously working on the fairness and bias of ‘Big Tech’ big data/machine learning, a related precursor technology to AI. Mainly academics, or reporters quoting them. The writing is mostly in an academic social-science style.

Differences

Just looking at what those two groups are concerned about and who they consist of, you can see that they likely don’t have much overlap. The ‘ethics’2 people were primarily concerned about things that were happening right now, while ‘safety’ concerns seemed like science fiction. The other difference was how direct the harm was, with much of the ethics work being focused on less direct harm and safety people being focused on direct, unambiguous harm. Being concerned with fairness vs being concerned with being turned into a paperclip.3

Simplifying, you could graph them as:

[chart]

Post-2020/GPT-2

Pre-2020 it was very easy to know which of the two groups was making an argument; they weren’t really in the same conversations and there was no overlap. Then, post-2020/GPT-2, new ‘groups’ of concerns appeared that didn’t fall into either existing group. When the world saw what actual AI could be used for, it had a different set of concerns than the first two groups. This is not a criticism of the prior groups; it’s hard to predict what the second-order effects of a new technology will be until it ships. As of 2025 standard names haven’t emerged yet, but the concerns fall into three groups.

AI Capabilities

Mainly concerned with current or near-term capabilities and what people will do with them. Worried about addiction/misinformation/enabling bioterror etc. Generally associated with governments, economists and think tanks, and written in that style.

AI Doubters

Mainly concerned that AI is just a “stochastic parrot”, vending information that a human has already created. Initially they thought it wouldn’t be able to do anything useful; the current (2025) position is that it’s not ‘real’ or interesting. Often specialists in some field AI is currently bad at, so the writing style is pretty varied.

AI Grifters

Mainly concerned with either using anti-AI arguments to further another cause, or with gaining status and power by raising concerns about AI. Generally unconcerned with the coherence of their own arguments, much less the truth. Split across the prior four groups, but they write a lot of opinion pieces.

Differences

The Capabilities arguments are distinct from Safety in that they focus on nearer-term/proven abilities and usually involve a human using the AI to accomplish the human’s goal. But they are similar to Safety in that they are concerned with what the extra intelligence AI provides will do. The Capabilities arguments are distinct from Ethics because they usually worry about a very direct issue, as opposed to indirect harm.

Doubters are normally looking at what is produced by the most widely used consumer version of the model, not the SotA or bleeding edge of what models can do. They’re distinct from both Ethics and Capabilities in that the level of tech they are looking at is usually lower. The other difference is that they doubt there will be a large benefit from AI; the other groups believe there will be large benefits but are focused on the costs.

[chart]

I’m going to skip the grifters as much as possible for the rest of this essay, even though they have caused most of the confusion. You might disagree with the facts or arguments presented by the other groups, but they are usually at least good-faith and coherent. Grifters don’t really care about the coherence of their argument, or even about issues related to AI. They usually have some semi-related grievance they wish to advance, want power, or have a cause they want funding for, so they will make whatever arguments they think will convince you. A single piece written by them will use premises that contradict each other in two adjacent paragraphs. On Bullshit covers this.

Arguments

I don’t agree with all of these arguments, but these are the major ones and the groups that normally present them. I’ve tried to keep them under 100 words each and have presented them in increasing order of complexity.

I also have not presented counterarguments, but if you want one, the question you should ask generally depends on the group presenting the argument:

  • Safety - How likely do I think it is that this scenario will happen?
  • Ethics - How bad do I consider this indirect harm?
  • Doubters - Does this match my experience with AI?
  • Capabilities - How much worse will AI make this compared to the current level?
  • Grifters - Does this argument actually apply to AI specifically, or to technology more broadly?

AI is just copying from people

Raised by: Doubters, Ethics

An AI model is created by ‘training’: looking at existing works and ‘learning’ certain values, or strings of numbers, from them. So when you tell it to create an original work, it might produce something that looks very similar to the work it was trained on. AI can also be told to produce work in the style of a particular artist. So the AI appears to be copying a human’s style.
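To make ‘learning strings of numbers from existing works’ concrete, here is a minimal, purely illustrative Python sketch. A toy word-pair counter stands in for real training (real models learn billions of weights, not simple counts, and the two-sentence corpus is invented), but it shows why generating from learned statistics can reproduce the training text almost verbatim:

```python
from collections import defaultdict

# Toy stand-in for "training": count which word tends to follow which
# in a tiny corpus of existing, human-written text.
corpus = [
    "the cat sat on a mat",
    "a mat is soft and warm",
]

follow_counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follow_counts[current_word][next_word] += 1

# "Generating" just replays the statistics learned from the corpus.
word = "the"
generated = [word]
for _ in range(5):
    candidates = follow_counts[word]
    if not candidates:
        break
    word = max(candidates, key=candidates.get)  # most common follower
    generated.append(word)

print(" ".join(generated))  # "the cat sat on a mat" -- an exact copy of a training sentence
```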

AI is doing work that people could do

Raised by: Capabilities, Ethics

AI can produce text, an image or a video clip. This piece of media could be used for advertising copy, making decisions, entertainment, etcetera. In some cases that piece of media would otherwise have been created by a human. So the AI is doing work a human would have been paid to do.

This argument and “AI is just copying from people” often get combined to be “The AI is copying from people and then taking the job they would have been hired for”.

AI is running out of things it can learn

Raised by: Doubters

AI is created by looking at existing works, and there is only so much human-created text/image/video in existence. If AI training requires human-created media, and if training can’t use AI-created media (called synthetic data), then AI will not get much smarter.

AI is biased

Raised by: Ethics

AI is trained on material produced by the humans who made it and published it on the internet. This is a subset of humanity that’s not representative: think how often you post vs. the top 1% of Reddit posters. So the information it is learning from is biased. It’s then further trained by an even smaller subset of humanity that’s definitely not representative. This means that the AI has biases.

For a non-inflammatory example: AIs a few years ago (GPT-2 era) thought that Japan was richer than the US, even though America is on average much richer. This was believed to be caused by the model being trained primarily on American writers’ impressions of the richer parts of Japan.
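A hypothetical toy simulation of that sampling problem (every number here is invented purely for illustration): the population holds a spread of views, but if only an unrepresentative slice of it writes the posts a model is trained on, the corpus the model sees has a noticeably different average view than the population does.

```python
import random

random.seed(0)

# Invented numbers, purely for illustration: give each person an "opinion score".
population = [random.gauss(50, 10) for _ in range(100_000)]  # population average is ~50

# Assume (an illustrative assumption, not a real claim) that the 1% with the
# strongest views write nearly all of the posts the model is trained on.
posters = sorted(population)[-1_000:]

true_mean = sum(population) / len(population)
corpus_mean = sum(posters) / len(posters)

print(f"average view in the population:      {true_mean:.1f}")   # ~50
print(f"average view in the training corpus: {corpus_mean:.1f}")  # ~77, noticeably skewed
```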

AI output cannot be trusted due to hallucination

Raised by: Capabilities, Ethics, Doubters

Simplistically put, LLMs are just predicting the next word based on the prior text and probability. It’s actually much more complicated than that, but it means there is a chance they will say something completely untrue. This means users shouldn’t trust what they say.
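A hand-made toy sketch of ‘predicting the next word from probabilities’ (the prompt, candidate words and probabilities below are all invented for illustration; a real LLM computes such probabilities from billions of learned weights). The point is that the model only samples from probabilities and never checks a fact, so some fraction of its answers will be fluent and wrong:

```python
import random

# A toy "model": for one prompt it knows only a probability for each
# possible next word. The numbers are invented for illustration.
next_word_probs = {
    "The capital of Australia is": {
        "Canberra": 0.6,   # correct
        "Sydney": 0.3,     # plausible-sounding but wrong
        "Melbourne": 0.1,  # plausible-sounding but wrong
    }
}

def predict(prompt: str) -> str:
    """Sample the next word according to the model's probabilities."""
    probs = next_word_probs[prompt]
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Nothing in this process verifies the answer; it only samples.
answers = [predict("The capital of Australia is") for _ in range(10)]
print(answers)  # typically mostly "Canberra", with a few confident wrong answers mixed in
```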

AI will give lone actors too much power

Raised by: Capabilities, Ethics

AIs give individuals access to more knowledge. As AI is hooked up to more and more other computer systems, it can do more and more work on behalf of the person (called Agents) and can make them more productive. This person could be trying to hack banks or make the Black Plague. AI makes it more likely they will succeed.

AI will give AI corporations too much power

Raised by: Capabilities, Doubters

Currently most of the cutting-edge models are created by private corporations, so those corporations have the best models available first. This means the corporations with the cutting-edge models are, for example, likely to be able to automate all of phone/online customer service. More broadly, if the model can do lots of tasks very well, that corporation could dominate many fields, from manufacturing to day trading. This gives those corporations too much control/power.

AI will give governments too much power over their people

Raised by: Capabilities

AI is able to process a much larger amount of media, much more quickly and cheaply, than a human. This means that a national government could use an AI to monitor every citizen’s internet usage, or compare faces in a protest video to DMV records, much more efficiently than it currently can. It also allows for much better targeted propaganda. This gives governments, especially ones without strong norms of personal freedom, too much power.

AI will accidentally kill people

Raised by: Capabilities, Doubters, Safety

As AI is talked to more, people will stop double-checking what it says. At some point the AI will hallucinate and give someone bad advice that they follow and get themselves killed. Similar to how people drank bleach to cure COVID because they read on the internet that they should.

A related but distinct argument: as AI Agents are used to make decisions in more and more systems, even if they are 99.999999% correct they will make some bad decisions. Some of these bad decisions could kill people, for example if an AI reading of an image were used to decide whether earthquake damage means a building is no longer structurally sound. This is usually an argument for a Human in the Loop.

AI will have agency

Raised by: Safety

Current AI does not appear to have wants or goals, although there is some research into this. However, it’s trained on text written by humans, who do have wants and goals. Combine this with the fact that we don’t know how agency develops, and the AI could end up having goals that conflict with humans’. This could lead to it purposely killing people.

AI will be an alien intelligence

Raised by: Safety

While we understand how to train an AI, we do not have a great model of how human intelligence works. This means we don’t know how similar or different the AI’s intelligence is compared to ours. It could think very differently than we do, so the results cannot be trusted.

This is usually combined with the “AI will have agency” argument: the AI will have goals and a different enough intelligence that there is a concern it will be a threat to humanity. This is an argument for ‘alignment’ research or training.

AI will become a superintelligence

Raised by: Safety

AI is getting smarter and smarter, although the speed of improvement may be slowing down. Conceivably, humans could train an AI that is then used to train a smarter AI, which is then used to train an even smarter AI, and so on.

This is almost always combined with the prior two arguments: there will be a superintelligent, alien intelligence with different goals than ours, and it might not care about us.


Thanks to Julius of San Diego Rationalist for discussing


  1. While I don’t work directly in generative AI, I’ve been aware of Yudkowsky since he wrote on Overcoming Bias in 2008-2009, have been working in supervised ML for the last 8 years, and have shipped something useful that actually needed to be powered by an LLM. Most importantly, I have been making Roko’s Basilisk jokes since at least 2016.

  2. I’m using ‘AI ethics’ and ‘AI safety’ because that’s how both groups usually self-identify. What most people consider ethics and safety would be a much broader category than what ‘ethics’ and ‘safety’ are usually concerned with. This is a broader issue with most $FOO ‘ethics’.

  3. I am purposely invoking Yudkowsky’s Law