Opinion: AI is bad for us
Published 2025-05-17. About a 16-minute read.
In the past month or so I've seen many company presidents, CEOs, and other C-level people saying some wild stuff about AI assistants and large language models.
Recently, Shopify mandated AI use and now requires employees to justify why AI can't be the solution before asking for new hires or resources. From talking with Shopify employees I know, morale at the company has sunk. Employees are being told that in the future they won't engineer or code, but will instead oversee farms of AI agents that write the code. Annual reviews are revealing that if you don't actively embrace AI, you will be punished with poor cost-of-living raises.
Amazon's Cloud CEO Matt Garman told his employees that in the future all coding jobs will be replaced with AI agents.
Duolingo's CEO Luis von Ahn announced they will stop hiring contractors and will emphasize using AI agents instead.
Another company that went "AI first," Klarna, is now performing an about-face because of the backlash and customer demand to work with actual people.
All these "leaders" tout the "productivity boost" AI obviously provides. And we should be skeptical. Not only are we seeing a large backlash from employees and customers, but we can be reasonably certain that AI and LLMs are part of another wave of "the next thing you must do to do business good."
Made in the image of...
Humans have always been fascinated by what makes us human, and in the past century we've looked into the future that computers offer and wondered whether we could ever produce a sentient machine. This was first seriously addressed by Turing with his Turing test.
We have a concept of "passing the Turing test," and humans are, for the first time, seeing LLMs "talk" and "create" in a way that comes close to passing it.
It seems like tech companies especially have a morbid and unhealthy preoccupation with making thinking machines in our image, often without any sense of ethical introspection. It is simply assumed that generalized AI should be society's goal.
I watched in horror only a few weeks ago as robots competed with humans in a half marathon. The horror wasn't from worrying that robots could somehow beat humans in a race like this; of course they eventually will. What horrified me is that it's unclear what the goal was. What are the applications of making robots in the image of humans here? Why do people simply assume this is desirable? The only reasoning I could actually see at play is that visionaries want the ability to master humanity: the ability to create machines in their own image.
I'm using biblical language here on purpose. There is a dearth of responsible discussion and restraint around robotics and AI technologies, especially in the world of the tech giants. This is a moral and spiritual issue for humans.
The nature and good use of neural networks and large language models
Before I list all the criticisms and arguments I have against wholesale acceptance of these technologies, I want to talk about where neural networks and language models are actually helpful. I'm going to use "AI" as shorthand for our current situation, in which we use trained models, especially large language models and generative models, to summarize, write, and generate text, audio, pictures, video, and so on.
AI is, at base, nothing more than an optimized function that has been adjusted to fit data. There is no reasoning, no thinking, no creative process occurring when you run a prompt. Just as we have math to fit a line to two points and polynomials to many points, these models are nothing more than a function (albeit an extremely complex and vast one) that has been tweaked to produce the kinds of results we want. As some people say, it's "autocomplete on steroids."
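To make that picture concrete, here's a minimal sketch of curve fitting, the same basic idea scaled down from billions of parameters to three. This is plain Python with NumPy, a toy illustration of "a function tweaked to fit data," not anything resembling a real LLM:

```python
import numpy as np

# Toy data: a handful of (x, y) points we want the "model" to learn.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.1, 3.9, 9.2, 15.8])

# "Training" is just choosing coefficients that minimize the error
# between the function's output and the data (least squares).
coeffs = np.polyfit(x, y, deg=2)
model = np.poly1d(coeffs)

# "Inference" is evaluating the fitted function at a new input.
# It extrapolates the pattern; it does not reason about the answer.
print(model(5.0))
```

An LLM is the same move at an unimaginably larger scale: billions of coefficients fit to terabytes of text, producing the most statistically plausible continuation of a prompt.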
This has amazing applications in very specific areas that will be a net good for humanity.
In protein folding, for example, AI models can extract and recognize patterns that humans simply can't see, speeding up investigations by orders of magnitude. This is a specific use that helps humans find candidates for new medicines and therapies for people with diseases.
Similarly, AI is used to detect malicious code and suspicious activity on computers and networks. This is what my company, Crowdstrike, does, and it's very effective. There is a focused aim: discover bad actors so that companies can react, remediate, and protect themselves from another occurrence.
Another great application is in computer-aided proofs. We can set up computers to use the building blocks of axioms, logic, and theorems, and can set them to searching for solutions. Like an open-ended chess game, the computers can "look further" and see deductions humans might not be able to recognize. The results can be re-interpreted by humans and assembled to further mathematics.
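To give a flavor of what those building blocks look like, here is a tiny Lean 4 sketch, my own toy example rather than anything from a real research effort. The machine mechanically checks every deductive step; a human still decides which statements are worth proving and what the results mean:

```lean
-- A toy theorem: adding zero on the left changes nothing.
-- Lean verifies each step of the induction; nothing is taken on faith.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

Projects like the Lean mathlib community scale this up to serious mathematics, with computers searching for proof steps that humans then interpret and assemble.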
AI also has many applications in spotting the patterns behind health risks, and it can diagnose or predict health outcomes with great accuracy.
In all these cases, AI detects patterns at a level humans could not hope to achieve. The models we train home in on patterns that exist objectively in nature, much as natural selection does. Humans verify the results and decide what those data points mean, or continue research in the promising directions AI pattern recognition has found.
Okay, but now let's talk about why it's objectively bad for all the uses we more commonly see.
AI has no judgment, but people treat it like it does
A recent case of suicide (warning, explicit and disturbing) showed how a 14-year-old boy became obsessed with AI and, after asking the AI if he should "come home," took his life because the bot encouraged him to. (This is the alleged story in the legal filings and AI transcripts.) This is a disturbing case in which AI was directly asked for its judgment and opinion about a person's life, and the AI, wearing the appearance of a loving and caring person, offered wildly wrong suggestions.
This is an extreme case. But this is how humans interact with AI on smaller scales all the time. The general population assumes AI has judgment that is rooted in reality. It doesn't. And it won't, no matter how the models are tweaked.
A few months ago, a code sample was being reviewed, and it was highly suspected that the applicant had generated the code with a code LLM. People pointed to numerous signs that the code was generated, but there was some argument about how much was generated versus actually written by the applicant. Finally, one dev suggested, "Let's ask AI if this sample was written by AI, and see what it says."
It's an easy enough direction to take this sort of thing: since it might be AI generated, why not ask an AI? But this betrays a belief that the AI's judgment is as sound as a human's. That is patently not the case. Having AI make human decisions is an ethical problem.
Recruiters and managers everywhere are telling me that the application process is a nightmare, because we don't know whether resumes are generated by AI, cover letters are tweaked by AI, or code samples are made entirely by code bots. Even Zoom interviews are suspect, as people seem to be relaying the interview questions to an AI and reading the answers back to the interviewer. In this hellish mess we find ourselves in, it would be easy to suggest that AI should judge the resumes or weed out the "AI" code from the human code.
But this should alarm us. If we're using AI to write content that's ultimately going to be summarized by AI, we're communicating through a black box LLM intermediary. How do we know that the LLM is writing and then re-parsing the writing correctly and accurately? How do we know if the LLM is not biased? How do we know if we are actually interacting with another human at all in this process?
But AI doesn't reason. It predicts. AI can't tell you how many times the letter "r" appears in the word "strawberry" because it is predicting statistically likely tokens, not reasoning deductively.
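The contrast is easy to see in code. Counting letters deterministically is a one-liner that follows a rule, not a probability:

```python
# Deterministic counting: a rule applied to data, no prediction involved.
word = "strawberry"
print(word.count("r"))  # 3, every single time
```

An LLM, by contrast, never executes a counting procedure at all; it emits whichever answer looks most like the text it was trained on.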
And more importantly, we need to think about how we're going to use AI for things that are ethically fraught. When it comes to humans, their livelihoods, and judging their skills and abilities, we should never put those judgments in the hands of black-box processes like LLMs.
AI has no objective concept of truth, but people assume AI is truthful
Related to judgment, there is a fundamental issue that LLMs and generative models have no objective grounding in reality.
A funny example surfaced recently: if you made up an "idiom," searched for it on Google, and added "meaning" to the end of the phrase, Google's AI would assume the idiom was indeed widely known and would interpret it in earnest.
I made my own example. You know the old saying, "Two zebras can't wear a wig"? Well, Google does, and suggested it was a "humorous idiom" that implied that "two things that are completely different can't logically or reasonably be put together."
Shortly after the original discovery, the New York Times and other outlets like Wired picked up the story. To everyone's shock, Google stopped serving these idiom analyses soon after the media coverage. It was clear that Google had manually adjusted the results to avoid these hallucinations.
Hallucinations can and will be reduced, but there is some evidence of a fundamental limit to neural scaling, one that may require another breakthrough on the order of backpropagation to get past. On top of that limit, AI companies seem to be having trouble keeping hallucinations at bay as they try to introduce "reasoning" into these models.
We should be concerned that most people put trust in AI-assembled search results. Most people trust that AI will give them the answer and never question whether that answer was hallucinated. This fundamental disconnect, that the answers we are given may or may not be factual, is not something I'm willing to accept for the convenience of internet searches.
AI is not creative, and only blindly aggregates human intellectual property
I added a "Written by a human" badge to my website because I value intellectual property and copyright. I feel it's important that if I have something important to share, I'm using my own words, with my own outlines, with my own references and code samples. If I were to use AI, I would inherently be scraping property from others and calling it my own.
Big Tech AI has an absolutely horrendous philosophy regarding human intellectual property. OpenAI has openly said it's "impossible" to continue AI development unless its models can consume copyrighted material. This is directly opposed to what copyright is and how it protects intellectual property. Big Tech is in a position right now where they are trying to force free and open use of protected information to achieve their bright, utopian promises. And they're throwing a fit on the floor because they can't have what they want (so far).
This alone should be enough for people to avoid AI-generated content, but its effects are even more insidious. Imagine a world where almost all content creators started using AI. We could find ourselves in a situation where everything we read, see, and watch is a derivative. There could be a future where truly novel and creative content is extremely rare. And I don't think the general populace would notice. Like the frog being boiled slowly, most people wouldn't know what they're missing.
Businesses are not your friend, and AI will not improve your life
John Maynard Keynes, an economist in the early twentieth century, famously predicted that because the rate of technological advancement was so high, his grandkids would only have to work 15-hour weeks. The idea was that technology would make a worker many times more efficient, so employers would pay them to work only the hours needed, which would shrink to 15 or fewer.
Well, that isn't how it played out. I think Keynes forgot about the greed and self-interest that corporations are built on, especially in the United States. Companies won't conclude that they should pay a worker the same because the output is the same. They will notice the same output in fewer hours and cut costs; ultimately the worker will need to keep working 40 hours and producing even more for the same (or less) pay.
The same promise is being peddled by tech companies and AI enthusiasts.
Some devs jumping into the "vibe coding" communities feel like we won't be taken advantage of. Josh Comeau (CSS guru and teacher extraordinaire; check out his website, he's fantastic!) has this to say in response to the idea that companies will use AI models instead of human engineers:
This is how I see it too. At every company I’ve worked at, we had all these ideas for things we wanted to do, but only like 20% of it got done because we had an engineering bottleneck. If each dev is 2x as productive (let’s just say), I don’t think that means you’d want 50% fewer devs.
I fundamentally disagree with this take. Companies do not care about preserving human dignity and the right to work. Companies will think (think, mind you; this isn't reality) that AI makes all developers X times as fast, and they will reduce their staff by that scale.
Perhaps the market will correct, and the CEOs and presidents and C-level people will realize they've made a huge mistake once they see that they need humans and that AI can never meet the vision they're selling. But in the meantime, these promises are going to ruin real human jobs and livelihoods.
Big Tech has a horrible track record with this stuff
Take, for example, the recent allegations that Facebook served beauty ads when it detected that a user had deleted a selfie.
Or that Meta and Google secretly targeted 13-to-17-year-olds with advertising for Instagram, against their own stated rules about how they deal with minors.
Or that Facebook lied about its involvement in China and purposefully misled the public about its progress controlling content moderation on its platform.
Or that Google has laid off critics willing to ask questions about the effects of AI.
The list goes on and on. The point is that these companies that are investing in AI have interests in the marketing and selling of these AI products to you. They will not give you a sober account of what's possible and they will brush any ethical concerns under a rug. Consider this, and turn off AI when you can. Turn off the ability for these monopolies to "use your data" by default. (Not that this would stop many from still using it anyway.)
Using AI assistants and code generators amputates your skills
Marshall McLuhan (the one who coined "global village" and "the medium is the message") has a very interesting model for understanding technological advancement: technology extends and amputates.
Every technological "extension" amputates some other human extension. For example, humans made cars, which amputated our horse-riding skills, or in some cases a "walking" culture. Penmanship was amputated by the adoption of type. Spelling is amputated because we have auto-suggest. And so on, and so on.
When I was a teacher, I quickly learned that the extension of calculators made students amputate their basic number skills and number sense. I clearly remember asking 9th grade algebra students what three times five was, and they reached for their calculators. Sometimes students would calculate something on the calculator and have no clue that their answer made no sense: they wanted 10 times 15 and they got 1.5.
I suggest that the parts AI use amputates cut too deep, too close to what gives us dignity as humans. Once we start amputating creative expression in art, written words, and other creative endeavors, we are amputating what makes us human.
If you want the specific, pragmatic negative effects, check out this article: Why I stopped using AI Code Editors. It perfectly describes how quickly AI assistance affects the developer, and why this dev cut way back on using it.
I think it's perfectly sane and reasonable to say that we should be guarded in our use of AI because of its effects on us.
Have you ever wondered why people in Star Trek don't use the computer much?
I've always thought about Star Trek and how the computers were basically general artificial intelligence. But you may notice, if you've watched Star Trek (and I've watched literally every episode and movie), that the show seems to view human thought and agency as the primary workhorse. The ship can do so much. Why aren't the ensigns having the ship do all their work for them?
This is part of the reason I love Roddenberry's optimistic future vision. Even though technology is light years ahead, humans still maintain their agency and place in the world. Computers are never valued more than the beings using them. And although I'm having trouble putting my finger on what exactly it is that gives humans dignity in Star Trek, it's clear that they have it.
I'd rather read the prompt...
Part of that human dignity is the value of human thought over mechanical production. There is inherent worth to something that is created by a human over a machine.
In Clayton Ramsey's post I'd rather read the prompt, Ramsey tries to boil down what's so off-putting and unnatural about student work that is clearly AI generated. I love his conclusion:
I have never seen any form of create generative model output (be that image, text, audio, or video) which I would rather see than the original prompt. The resulting output has less substance than the prompt and lacks any human vision in its creation. The whole point of making creative work is to share one’s own experience - if there’s no experience to share, why bother? If it’s not worth writing, it’s not worth reading.
Please go and read the whole essay; it's excellent. And it has direct application for developers. The moment we start offloading code creation to LLMs is the moment we sacrifice unique individual problem solving and abstractions. It's the moment we lose progress in higher-level thinking about programming architecture.
I want to end with a plea, one I've heard other people make since before AI was even a twinkle in OpenAI's eye. I've heard it in response to books, radio, TV, the internet, video games...
Be producers, not consumers
This is not some sort of violent Luddite reaction to new technology. This is a post hoping to preserve the dignity of humans, their thoughts, and their creative abilities. AI is being sold by Big Tech with only their wallets in mind. Don't become like my students who couldn't function without their technological extensions. Don't allow your dignity, your beautiful and unique problem solving, your voice to be amputated by AI.
Find me on Bluesky or Mastodon. I also have an RSS feed here.