Faults & Failures of A.I.
Wrote a minizine about A.I. because people outside of tech often ask me what the deal is and why I’m so anti-passionate about it. Printable version on Itch!
1. A.I. Revolution
The rise of artificial intelligence, or A.I., has been compared to the invention of the calculator and to the start of the industrial revolution.
These social and technological shifts, A.I. supporters say, had their doubters at the beginning, but have since made our lives easier by letting us spend our time on tasks more worth our while.
That’s the selling point of A.I. that all the major finance and tech companies have been peddling. But... is it true? Over $350 billion has been invested in building new A.I. data centers in 2025, driving a 20% increase in global demand for electric power (50% in the U.S.). The world economy put all its financial eggs in the A.I. basket. Why?
2. LLMs
Before asking why A.I. is being pushed by the world economy, we need to ask what A.I. does and how it works. Many technological processes (old and new) are being described as A.I. to increase companies’ stock value.
But the sort of A.I. on everyone’s mind is the large language model (or LLM): a statistical model designed to predict and generate data, trained by analyzing existing data.
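(For the curious: the whole “statistical model that predicts data” idea fits in a few lines of toy Python. This is a bare-bones sketch, not how real systems are built; an actual LLM swaps the counting table below for a neural network trained on billions of documents.)

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a training text.
training_text = "the cat sat on the mat and the cat slept"
words = training_text.split()

follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat" (follows "the" twice; "mat" only once)
```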
LLMs are a useful technique in science because they enable deeper statistical analyses of data than humans can do by themselves. One team used an LLM to analyze natural protein structures, which will help us better treat diseases by engineering custom proteins!
3. Generative A.I.
LLMs are useful for predicting patterns in complex datasets. But they can also generate new data based on the data originally used to train them, which is how generative A.I. apps like ChatGPT and Midjourney work.
Every bot message, generated image, and so on is an approximation of what a human would say or illustrate based on a description. A.I. doesn’t think the way humans do, and it can’t tell truth from falsehood because it operates on the level of language alone.
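(Again for the curious: generation is just that prediction run in a loop. Here is a sketch continuing the toy model from earlier; sampling by observed frequency instead of always taking the top word is, loosely, what a chatbot’s “temperature” knob controls.)

```python
import random
from collections import Counter, defaultdict

# Rebuild the same toy bigram table as in the earlier sketch.
words = "the cat sat on the mat and the cat slept".split()
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=6):
    """Generate text by repeatedly sampling a likely next word."""
    out = [start]
    for _ in range(length):
        candidates = follows[out[-1]]
        if not candidates:
            break  # dead end: nothing ever followed this word in training
        choices, weights = zip(*candidates.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```

Notice that nothing in the loop checks whether the output is true; the model only knows which words tend to follow which.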
There’s some controversy about how the data used to train generative A.I. is scraped from online media without regard for ownership of that media—but that’s not our business.
4. White-Collar A.I.
What started as a curiosity for some and a helpful statistical framework for others became a business opportunity for tech firms, always chasing the bag. A.I. seemed to promise to automate intellectual labor in the same way that machines automated manual labor.
This has led to annoying situations not just for users, like having to talk to a dumbass chatbot to get ‘customer support’, but also for workers.
For instance, software firms are under the impression that A.I. can speed up coding by 24% or more. But a recent study found that coders believe A.I. makes them about 20% faster, when it actually makes them about 20% slower! The best it can do is write awkward office emails.
5. Financial Costs
Whether or not A.I. in the workplace actually helps anyone, we ought to ask: what are the financial and social costs of A.I.? Manual labor could be automated because machines cut down costs relative to wages.
But A.I. is expensive, costing hundreds of millions of dollars to train on data, plus hundreds of billions of dollars to build infrastructure. This funding was raised from venture capital firms and government subsidies, and it flows into tech and power firms.
The tech business model is to operate at a loss until enough market share is captured to hike up prices. Consumers will soon feel the real cost of A.I.
6. Social Costs
Billions of dollars are on the table for a thing that promises a technological revolution but delivers almost nothing of real value. Yet the tech firms have swallowed up the world economy and are laying off thousands of software developers to demonstrate confidence in their product (as well as to recoup the costs of A.I. development).
That means increasing unemployment as well as exploitation of workers, who are expected to do more with shoddy A.I.-powered tools.
That also means a massive increase in power usage to support A.I. training as well as generation, powered by cheap fossil fuels whose resurgence will in turn offset environmental gains.
7. Artificial Implications
Tech firms have spent the last decade chasing new ways to make a profit that didn’t really pan out, from the blockchain to the so-called Metaverse. Capitalism always needs new markets, new blood to expand itself.
A.I. is an attempt to put a cart before a dead horse: to enjoy the benefits of automation without true automation. What’s disguised as a way to cut costs is really a collusion of tech and power firms to aggressively expand markets and drive up their stock prices.
This is perhaps the stupidest move for the world economy, shooting itself in the foot for baseless short-term profit. The market is certainly going to crash, and it'll take us with it.
I don't know if you're doing it intentionally for outreach purposes, but I've never heard "LLM" used by a specialist to refer to anything but an algorithm doing next word prediction. All of this is with the caveat that while I have written ML algorithms in the past, I am not an expert.
While I wouldn't be surprised if someone is trying to use an LLM for protein analysis, I would be highly skeptical of the results, as an LLM is the wrong tool. I would call what you're describing as LLMs machine learning, or ML, and I believe it was called this widely in the industry until recently. ML is a more general category than LLMs or even generative AI. I think of ML algorithms as optimization machines which rely on training data to map inputs to outputs. An LLM is a series of ML algorithms which collectively map a paragraph to the most likely next word in the paragraph. Then they just throw the whole thing back in the input for the next word.

ML has definitely been used for things like protein analysis, identifying tumors, and the sorts of data analysis tasks you described, but those algorithms aren't LLMs unless they're generating text. It's definitely confusing because a lot of ML is being branded as AI right now to cash in on the bubble. That said, other forms of machine learning aren't totally benign either. Social media algorithms use ML, so it's already being used by surveillance capitalism to identify minorities. Right now it's for the purposes of advertising, but it's not hard to see how that could be utilized by a hostile state.
So what I'm trying to say is that I believe your social analysis is spot on, but you're actually giving LLMs too much credit. CEOs are going all or nothing because they want to automate away white-collar workers, but because they don't understand the technology themselves, they're betting on algorithms which can't do anything but guess the next word (frighteningly well, granted).
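To make that concrete, here's the sort of thing I mean by an optimization machine, as a toy Python sketch (not any real library's API, just the bare idea of fitting inputs to outputs):

```python
# Toy "optimization machine": learn the mapping y = w*x + b from examples
# by repeatedly nudging w and b to shrink the prediction error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # hidden rule: y = 2x + 1

w, b = 0.0, 0.0
learning_rate = 0.01
for _ in range(5000):
    for x, y in zip(xs, ys):
        error = (w * x + b) - y         # how wrong is the current guess?
        w -= learning_rate * error * x  # adjust parameters to reduce error
        b -= learning_rate * error

print(round(w, 2), round(b, 2))  # ~2.0 and ~1.0: inputs now map to outputs
```

An LLM is that same trick scaled up to billions of parameters, where the input is your paragraph so far and the output is a guess at the next word.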
i'm referring to studies over the last couple of years which analogize protein sequences to natural language and construct LLMs (distinguished from ML in general by tokenizing data) to generate new, "linguistically" valid sequences
https://pmc.ncbi.nlm.nih.gov/articles/PMC10701588/
https://arxiv.org/abs/2502.17504
in other words, these studies refer to a particular technique or method of ML, rather than to specific content which an ML analyzes.
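very roughly, the "protein as language" move starts by tokenizing sequences, something like this toy sketch (concept only; the papers linked above use learned tokenizers and richer vocabularies):

```python
# Rough idea: treat a protein as a "sentence" whose alphabet is the 20
# standard amino acid letters, and map each residue to a token id the
# way a language model maps words to vocabulary ids.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
token_id = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def tokenize(protein: str) -> list[int]:
    """Map each residue of a toy sequence to its token id."""
    return [token_id[aa] for aa in protein]

print(tokenize("MKT"))  # -> [10, 8, 16]
```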
Ahh that's fascinating, thanks for sharing! I'd only heard about them being used for garbage till now. Sorry for bloviating at you. I gotta do a better job of thinking before commenting.