AI: Artificial Idiocy

I’m confused. Genuinely, I’ve been feeling insane about this for the past few months. I have been bombarded with news articles, videos, and even a balloon telling me that generative AI is the future and we’re all gonna get replaced by GPT-10 or something along those lines. Because of these stories, many are inclined to believe that AI will change society. People’s opinions on this technology range from excited to scared, and I honestly get that. In fact, I truly believed the AI hype up until about four months ago, when I decided to look under the hood. What I found was a rather interesting story: one of lies, of overhyped promises, and of an overly enthusiastic media captured by the incentive to produce whatever content is “most clickable”. So join me down this weird robot-filled rabbit hole as we discuss what’s really going on.

What is AI?

I know this is a dumb question, but think about it for a second. What does AI actually mean… OK, time’s up (unless you stopped reading). AI stands for artificial intelligence, but we rarely stop to truly understand what that means. AI, in the way the word is supposed to be used, means a simulation of human intelligence. Now think: how does ChatGPT talk? By algorithmically generating a response, basically using math to produce the “best” response. Now think about how tech executives use the word AI. If you look at some of their interviews, they use, or at least imply, the definition as it is supposed to be used. But they’re using it to describe their products. See the problem?
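
To make that concrete, here’s a toy sketch of what “using math to produce the best response” looks like. The probability table below is made up purely for illustration (it is not real model weights, and real models learn billions of parameters from data), but the core loop is the same idea: score the candidate next words, pick a likely one, repeat.

```python
# Toy sketch of how a language model "talks": it scores possible next
# tokens and picks a likely one, one token at a time. The probabilities
# here are invented for illustration, not learned from data.

def next_token(context, probs):
    """Return the highest-scoring next token for the given context."""
    candidates = probs.get(context, {})
    return max(candidates, key=candidates.get)

# Hypothetical probability table (an assumption for this sketch).
PROBS = {
    "the cat": {"sat": 0.6, "ran": 0.3, "banana": 0.1},
    "cat sat": {"on": 0.8, "quietly": 0.2},
}

sentence = ["the", "cat"]
for _ in range(2):
    context = " ".join(sentence[-2:])  # condition on the last two words
    sentence.append(next_token(context, PROBS))

print(" ".join(sentence))  # "the cat sat on"
```

Notice there is no “understanding” anywhere in that loop: it is lookup and arithmetic, which is exactly the point.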

Contrary to popular belief, words have meaning and can affect the way we perceive things. Starting to sink in? In fact, this language is everywhere. Think ML (machine learning). I’m not the only person who notices this problem; Evgeny Morozov wrote an article for the Guardian about it. These companies are using a term that does not apply to their product, and it’s affecting the way we perceive this technology. You may be thinking, so what? This could be called B.C. (buttered croissants) and it wouldn’t matter; it’s a revolutionary technology that will change everything. At least, that’s the common rebuttal. Except is that true, or have we all been tricked by a hype wizard?

Welcome to Hype Land

I think it’s important to note who is hyping up this technology. There are many faces and many names, but the ringleaders are two people. One is, of course, Sam Altman. If you don’t know who he is, I have two things to say: 1) he is the head of OpenAI, the main company/nonprofit behind the so-called AI revolution, and 2) if such a person exists, I kind of envy you. Over the past year, the news and social media have been flooded with his advertisements. Yes, you heard me right: his so-called interviews are advertisements. Simply put, Sam Altman is a marketer. He’s not an expert in AI.

Sam Altman

First off, he has no degrees to speak of. He studied computer science for two years before dropping out. He might argue back by saying you don’t need a degree to be successful. This, while true, falls apart once you realize how complicated computer science is. I would never be able to publish an article about neuroscience in a scientific publication without a degree in neuroscience. So, to summarize: Altman is no expert in computer science. But make no mistake, he is an expert… in marketing. While he may not have a degree, he has a lot of experience in the field. I wish I could say it was a story of success, but it is not. His first startup was Loopt, a company that allowed smartphone users to share their location with other smartphones. It failed to attract a large enough user base and was absorbed by Green Dot. If you do some poking, you’ll find numerous failed attempts at products and opportunities. Yet somehow he continues on, not because of perseverance or creativity, but because of friends in high places. Like a lot of Wall Street venture capitalists, he is not short of very rich friends. This, combined with his experience in marketing, makes him very good at role-playing ChatGPT, and by that I mean spitting out plausible-sounding info that’s false.

The other major figurehead in the AI “revolution” is Elon Musk. You know, the guy making racist tweets. Unfortunately, most people think of him as an expert in AI. With him posting tweets about how AGI (artificial general intelligence) is coming next year, it is easy to see why people believe this lie. While Elon Musk is more qualified than Sam Altman, boasting a bachelor’s degree in physics and economics, he unfortunately acts about as dumb and entitled as a five-year-old. That’s not an exaggeration. Here are some examples:

The launch of the Cybertruck
A truly staggering amount of OSHA violations
Making Twitter somehow worse
And his Twitter page

And those are just the ones I found the most funny. There are many, many more.

The tweet has been deleted by Elon Musk but I managed to find a screenshot

Why is this important? To put it simply, the so-called experts on AI often aren’t truly qualified to discuss it. While there are a few experts who echo the same points as Elon Musk and Sam Altman, they are usually employed by companies with vested interests in keeping the hype train running, making their opinions unreliable at best and misleading at worst. Interestingly, these experts seem to falter when challenged; see, for example, OpenAI CTO Mira Murati struggling to answer a straightforward question. This raises the question: Is AI actually a world-changing invention, or is it just another tech-fueled delusion like the metaverse or NFTs? If you haven’t guessed, I believe it’s the latter, and I’ll walk you through why.

Limits to Growth

There is no infinite growth on a finite planet, despite what some economists and delusional optimists might have you believe. Just like everything in this world, there are limits to generative AI, particularly its training data and soaring energy costs. The internet, often perceived as infinite, is not, and while this might not matter much on a human scale, it does when applied to AI. When AI “learns” something, it’s not truly learning; it’s setting parameters based on the data it’s fed. This is problematic because, to train an AI to differentiate between an apple and a banana, for instance, you need millions of pictures of apples and bananas. AI cannot infer or understand what an apple or a banana is; it’s merely guessing or, at best, memorizing. Additionally, it can’t use all the pictures of apples or bananas available on the internet; it can only use the “high-quality” ones, though I couldn’t find a precise benchmark for what constitutes high quality, only a vague definition by IBM. These issues present significant challenges for the AI industry. In fact, AI is expected to run out of high-quality data by 2026.

And that’s an optimistic prediction, because there’s a third, ironic, data-related thing that could kill AI: AI outputs. In a paper titled “The Curse of Recursion: Training on Generated Data Makes Models Forget,” a model was fed its own outputs. It slowly got worse and worse until it was unable to answer simple questions. This is bad, really bad, for generative AI. You see, it puts the industry in a Catch-22 of sorts: models need human-made data from the internet, but the internet is currently being filled with AI outputs. This isn’t a problem going away any time soon, and while AI progress isn’t stopping yet, research has shown it may stop soon.
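
You can see a cartoon version of this degradation with a much simpler “model” than a neural network. The sketch below repeatedly fits a Gaussian to samples drawn from the previous generation’s fit, a stand-in for training on your own outputs. This is my toy illustration of the idea, not the actual experiment from the paper, but the mechanism is analogous: each generation’s sampling error compounds, and the learned distribution drifts and degenerates instead of staying faithful to the original “human” data.

```python
import random
import statistics

# Toy illustration of model collapse: fit a Gaussian, sample from the
# fit, refit to those samples, and repeat. Each refit only sees the
# previous generation's outputs, never the original data.
random.seed(0)

mu, sigma = 0.0, 1.0   # the original "human-made" data distribution
n = 50                 # samples available per generation

spreads = [sigma]
for generation in range(200):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)       # refit mean to generated data
    sigma = statistics.pstdev(samples)   # refit spread to generated data
    spreads.append(sigma)

print(f"initial spread: {spreads[0]:.3f}, "
      f"after 200 generations: {spreads[-1]:.3f}")
```

With only finite samples per generation, the fitted spread shrinks on average each round, so over many generations the model’s notion of the data narrows toward a degenerate point, a loose analogue of a chatbot forgetting how to answer simple questions.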

A Million Dollar Grammar Machine

AI is, in my opinion, currently useless. But that statement fails to provide the context needed for a cohesive argument. Because of this lack of context, we end up with truly groundbreaking remarks such as “AI does my emails” and “AI isn’t useless; it helps me code.” Let me explain why this rhetoric misses the point. Generative AI is somewhat useful; it can help with grammar and writer’s block. It can even help map protein structures. But are these trillion-dollar use cases? Because that’s how much these things are going to cost. The problem isn’t so much that AI is a useless technology; it’s that it’s useless in context. In a vacuum it looks good, or at least decent, but when you add in all the context, it suddenly becomes a very overhyped technology. The most evident proof of this is generative AI’s exorbitant resource consumption. Of course we all know that fossil fuels are bad, but many alternatives are still harmful and unsustainable when you use power irresponsibly: solar and wind rely on non-renewable natural resources, nuclear fission uses tons of water, and nuclear fusion is a pipe dream. This isn’t to say that we shouldn’t switch to cleaner alternatives to fossil fuels, we absolutely should, but they aren’t going to magic away these costs.

Men like Sam Altman know this. They see the cliff coming closer. While the private nature of these companies makes it hard to tell whether they are profitable, there have been a few leaks. For example, it appears OpenAI is unprofitable and in chaos, with the company firing and then rehiring Altman. The company Stability (celebrated for its open-source AI technologies) is also in a fix, facing many lawsuits and, you guessed it, also unprofitable. These incidents call into question the so-called “success” of the many other AI companies. I can’t make any definitive statements, and profitability isn’t everything, of course. Unfortunately, lack of profitability isn’t the only skeleton these companies might have in their closets. Again, we don’t know much about many of these companies’ metrics, but it appears that ChatGPT may be losing popularity, dropping from 1.8 billion users in April to just 260.2 million in July. Some may argue that OpenAI isn’t the only, or even the best, AI company out there. But it does have name recognition. Ask a random person about Claude or Llama and they’ll have no clue what you’re talking about. But ChatGPT? Yeah, that’s pretty recognizable. These aren’t bad business decisions; these are fundamental problems. These aren’t bugs; they’re features. Hallucinations won’t go away. They might even get worse. These companies won’t magically become profitable or useful. They’ll go out of business. Of course, I think the bigger question we should be asking is how these companies are allowed to exist in the first place.

Society of Techno Optimism

Our society is obsessed with technology. What started out as a small group of people promoting science and technology has become a church of sorts, with pastors, apologists, and, of course, followers. My problem isn’t so much with this form of religion but rather with how it has spread into the mainstream, infecting gullible news platforms, universities, and people’s minds alike, all under the false belief that it’s all infallible logic and facts. How we got here is a story for another time, but I think the consequences are plain to see. A lot of the reporters interviewing these guys take for granted that this technology is important, or the future, without actually thinking critically about the matter. And, of course, the craze surrounding the tech speaks for itself. I wouldn’t draw too broad a conclusion if this were an outlier, but it appears to be part of a growing trend. Think about crypto, NFTs, the metaverse: all things that were proclaimed “the future” and then, of course, abandoned by their creators.

Even newer tech like augmented reality and space travel appears to be following a similar trend. And it isn’t like these things are just opinions either. The rate of technological progress is slowing down. Yet the hype around Silicon Valley is bigger than ever. Why? This isn’t a rhetorical question either. Why do we keep listening to these charlatans? Why do we just take it as fact that they’re the future? Why do we take their ads as facts? I wish I knew how to move away from this cult of techno optimism but I don’t.

This piece originally appeared on the author’s Substack, Talking Gorillas.