What a shallow, negative post. "Hype" is tautologically bad. Being negative and "above the hype" makes you sound smart, but this post adds nothing to the discussion and is just as fuzzy as the hype it criticizes.
> It is a real shame that some of the most beneficial tools ever invented, such as computers, modern databases, data centers, etc. exist in an industry that has become so obsessed with hype and trends that it resembles the fashion industry.
Would not the author have claimed at the time that those technologies were also "hype"? What consistent principle does the author use (a priori) to separate "useful facts" from "hype"?
Or, if the author would have considered those over-hyped at the time, then they should have some humility because in 10 years they may look back at AI as another one of the "most beneficial tools ever invented".
> In technology, AI is currently the new big hype. ... 10% of the AI hype is based on useful facts
The author ascribes malice to people who disagree with them about the use of AI. The author says proponents of AI are "greedy", "careless", unskilled, inexperienced, and unproductive. How does the author know that these people don't believe that AI has great utility and potential?
Don't waste your time on this article. I wish I hadn't. Go build something, or at least make thoughtful, well-defined critiques of the world.
>> Would not the author have claimed at the time that those technologies were also "hype"? What consistent principle does the author use (a priori) to separate "useful facts" from "hype"?
Are you saying someone hyped ... databases? In the same way as AI is hyped today?
This is a tweet from Sam Altman, dated April 18, 2025:
i think this is gonna be more like the renaissance than the industrial revolution
Do you remember someone from the databases industry claiming that databases are going to be "like the renaissance" or like the industrial revolution? Oracle? Microsoft? PostgreSQL?
Here's another one with an excerpt of an interview with Demis Hassabis, dated April 17, 2025:
" I think maybe in the next 10, 15 years we can actually have a real crack at solving all disease."
Nobel Prize Winner and DeepMind CEO Demis Hassabis on how AI can revolutionize drug discovery doing "science at digital speed."
Who, in databases, has claimed that "in the next 10, 15 years we can actually have a real crack at solving all disease"? Data centers? Computers in general? All disease?
The last time I remember the hype being even remotely real was Web 2.0. And most of everything that made that hypeworthy is long gone (interoperability and open standards like RSS or free APIs) or turned out to be genuinely awful ("social media was a mistake") or has become far worse than what it replaced (SaaS).
It is an interesting comparison. Databases are objectively the more important technology: if we somehow lost AI, the world would be equal parts disappointed and relieved. If we somehow lost database technology, we'd be facing a dystopian nightmare.
If we cure all disease in the next 10-15 years, databases will be just as important as AI to that outcome. Databases supported a technology renaissance that reshaped the world on a level that is difficult to comprehend. But because most of the world doesn't interact directly with databases, as a technology it is not the focus of enthusiastic rhetoric.
LLMs are further along the tech chain and they might be an important part of world-changing human achievements; we won't know until we get there. In contrast, we can be certain databases were important. I imagine the people who were influential in their advancement understood how important the tech would be, even if they didn't breathlessly go on about it.
I'm not sure how hyped up databases were during their advent, but what do you mean by "partly because they are old"? The phonograph prototypes that were made by Thomas Alva Edison are old and they were hyped in a way. People called him the "Wizard of Menlo Park" for his work because they were seeing machines that could talk (or at least reproduce sounds in the same way photographs let you reproduce sights.)
Which of those things claimed it would be "like the renaissance" or that we'd cure all diseases?
In the clip I link above Hassabis says he hopes that in 10-15 years' time we'll be looking back on the way we do medicine today like the way they did it in the middle ages. In 10-15 years' time. Modern medicine - you know, antibiotics, vaccines, transplants, radiotherapy, anti-retrovirals, the whole shebang, like medieval times.
Are you saying - what are you saying? Who has said things like that ever before in the tech industry? Azure? Facebook? Ethereum? Who?
The use of semantic web and linked data (a type of distributed database and ontology map) for protein folding (therefore, medical research too) was predicted by many and even used by some.
Databases were of key interest. Namely, the problem of relating different schemas.
So, yes. _It was claimed_ that database tech could help. And it probably did so already. To what extent I really don't know. Those consortiums included many big players.
It was never hyped, of course. It did not stand the test of time either (as far as I know).
Claims, as you can see, don't always fully develop into reality.
LLMs now need to stand a similar test of time. Will they become niche and forgotten like semweb? We will know. Have patience.
You're taking a sliver of truth as though it dismantles their entire argument. The point was, nobody was _claiming_ databases would cure all diseases. That's the argument around the hype of AI here.
The thing is, the dot com hypesters were right about the impact of the Internet. Their timing was wrong, and they didn't pick the right winners mostly, but the one thing they were right about was that the Internet would change the world significantly and drive massive economic transformation.
They did not say databases were hyped. Although I think computers (both enterprise and personal) were hyped, and so were the internet and smartphones, long before they began to deliver value. It takes a decade to say which hype lives up to expectations and which doesn't.
> Who, in databases, has claimed that "in the next 10, 15 years we can actually have a real crack at solving all disease"?
I doubt anyone claimed 10-15 years specifically, but it does actually seem like a pretty reasonable claim that without databases progress will be a snail's pace and with databases it will be more of a horse's trot. I imagine the human body requires a fair amount of data to be organised to analyse and simulate all the parts, and I'd recommend storing all that in some sort of database.
This might count as unsatisfying semantics, but there is a huge leap going from physical ledgers and ad-hoc formats to organised and standardised data storage (i.e., a database - even if it is just Excel sheets, that counts to me). Suddenly scientists can record and theorise on order(s) of magnitude more raw material and the results are interchangeable! That is a big deal and a necessary step to make the sort of progress we can make in modern times.
Regardless, it does seem fair to compare the AI boom to the renaissance or industrial revolution. We appear to be poking at the biggest thing to ever be poked in history.
> but it does actually seem like a pretty reasonable claim that without databases progress will be a snail's pace and with databases it will be more of a horse's trot.
Database hype was relatively muted and databases made a massive impact on our ability to cure diseases. AI hype is wildly higher and there is a reasonable chance it will lead to the curing of all diseases - it is going to have a much bigger impact than databases did.
The 10-15 year timeframe is obviously impossible for logistic reasons if nothing else - but the end goal is plausible and the direction we need to head in next as a society is clear. As unreasonable claims go it is unobjectionable and I'd rather be standing with Hassabis in the push to end disease than with naysayers worried that we won't do it as quickly as an uninformed optimist expects.
> there is a reasonable chance it will lead to the curing of all diseases
This is complete nonsense. AI might help with the _identification_ of diseases, but there is nothing to support the idea that every human ailment is curable.
Perhaps AI can help find cures, but the idea that it can cure every human ailment deserves to be mocked.
> I'd rather be standing with Hassabis in the push to end disease than with naysayers worried that we won't do it as quickly as an uninformed optimist expects.
> but there is nothing to support the idea that every human ailment is curable.
There is; we can conceivably cure everything we know about right now. There isn't a law of nature that says organisms have to live less than centuries and we can start talking seriously about brain-in-jar or consciousness uploading now that we appear to be developing the computing tech to support it.
Everything that exists stops eventually but we're on the cusp of some pretty massive changes here. We're moving from a world with 8 1-in-a-billion people wandering around to one with an arbitrary number of superhuman intelligences. That is going to have a major impact larger than anything we've seen to date. A bunch of science fiction stuff is manifesting in real time.
I think you're only reinforcing the contrast. Yes, databases are massively useful and have been self-evidently so for decades; and yet, none of the current outlandish AI claims were ever made about them. VCs weren't running around 30 or 40 years ago claiming that SQL would cure disease and usher in a utopia.
Yes, LLMs are useful and might become vastly more useful, but the hype:value ratio is currently insane. Technologies that have produced something like 9 orders of magnitude more value to date have never received the hype that LLMs are getting.
- Company hires tens of people to work on an undefined problem. They have a solution (and even that is rather nebulous) and are looking for a problem to solve.
- Company pushes the thing down your throat. The goal is not clear. They make authoritative-sounding statements on how it improves productivity, or throughput, or some other metric, only to retract later when you pull those people into a private meeting.
- People who claim to know all the things that nebulous solution can accomplish when, in fact, nobody really knows because the thing is in a research phase. These are likely the "charlatans" OP is referring to, and s/he's not wrong.
- Learning the "next hot thing" instead of the principles that lead to it and, worse still, applying the "next hot thing" in the wrong context when the trade-offs have reversed. My own example: writing a single-page web application with the "next hot JS framework" when you haven't even understood the trade-off between client-side and server-side rendering (this is just my example, not OP's, but you can probably relate.)
etc. etc. Perhaps the post isn't very well articulated, but it does make several points. If you haven't experienced any of the above, then you're just not in the kind of company that OP probably has worked at. But the things they describe are very real.
I agree there is nothing wrong with "hype" per se, but the author is using the word in a very particular context.
There are issues with our current economic model and it boils down to rent. The services model is allowing the owners and controllers of capital to set up systems that let them extract as much rent as possible; AI is just another approach to this.
And then, if it does turn out to be successful for building, as you say, we'll have yet another overproduction issue, since that building becomes essentially completely automatic. Read about how overproduction has affected society for pretty much ever, and then ask yourself whether it will really be good for the masses.
Additionally all the media is so thoroughly captured that we're in "1984" yet so few people seem to realise it. The elites will start wars, crush people's livelihoods and brainwash everyone into being true believers as they march their sons to war while living in poverty.
Doesn't telling others not to read something just make them curious and want to read it even more? Do HN readers actually obey such commands issued by HN commenters?
It's one of the stupidest concepts on the face of the earth and tons of people subscribe to it unknowingly: hype = bad.
AI is one of the most revolutionary things to ever happen in the last couple of years. But with enough hype tons of people now think it's complete garbage.
I'm literally shocked how we can spend a couple decades fantasizing and writing stories about this level of AI, only to be bored of it in two years when a rudimentary version of it is finally realized.
What especially pisses me off is the know-it-all tone, like they knew all along it's garbage and that they're above it all. These people are tools with no opinions other than hype = bad and logic = nonexistent.
> I'm literally shocked how we can spend a couple decades fantasizing and writing stories about this level of AI
It was never this level of AI. The stories we wrote and fantasised were about AI you could blindly rely on, trust, and reason about. No one ever fantasised about AI which couldn’t accurately count the number of letters in a common word or that would give you provably wrong information in an assertive authoritative tone. No one longed for a level of AI where you have to double check everything.
> No one longed for a level of AI where you have to double check everything.
This has basically been why it's a non-starter in a lot of (most?) business applications.
If your dishwasher failed to clean anything 20% of the time, would you rely on it? No, you'd just wash the dishes by hand, because you'd at least have a consistent result.
That's been the result of AI experimentation I've seen: it works ~80% of the time, which sounds great... except there's surprisingly few tasks where a 20% fail rate is acceptable. Even "prompt engineering" your way to a 5% failure/inaccuracy rate is unacceptable for a fully automated solution.
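To make that concrete, here is a back-of-the-envelope sketch of why per-step failure rates kill full automation. The step counts and rates are made up for illustration, not anyone's real numbers; the only assumption is that steps in an unattended workflow fail independently:

```python
# Rough illustration: per-step reliability compounds quickly when no
# human checks the intermediate results of an automated pipeline.
def pipeline_success_rate(per_step_success: float, steps: int) -> float:
    """Probability that every step succeeds, assuming independent steps."""
    return per_step_success ** steps

print(pipeline_success_rate(0.80, 5))  # ~0.33: the "works 80% of the time" model
print(pipeline_success_rate(0.95, 5))  # ~0.77: even the prompt-engineered version
```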
So now we're moving to workflows where AI generates stuff and a human double checks. Or the AI parses human text into a well-defined gRPC method with known behavior. Which can definitely be helpful, but is a far cry from the fantasized AI in sci-fi literature.
It feels a bit like LLMs rely a lot on _us_ to be useful. Which is a big point to the author's article about how companies are trimming off staff for AI.
We've frozen hiring (despite already being under staffed) and our leadership has largely pointed to advances in AI as being accelerative to the point that we shouldn't need more bodies to be more productive. Granted it's just a personal anecdote but it still affects hundreds of people that otherwise would have been hired by us. What reason would they have to lie about that to us?
One type of question that a 20%-failure-rate AI can still be very useful for is ones that are hard to answer but easy to verify.
For example say you have a complex medical problem. It can be difficult to do a direct Internet search that covers the history and symptoms. If you ask AI though, it'll be able to give you some ideas for specific things to search. They might be wrong answers, but now you can easily search specific conditions and check them.
You put too much faith in doctors. Pretty much every woman I know has been waved off for issues that turned serious later, and even as a guy I have to do above-average legwork to get them to care about anything.
All the recent studies I’ve read actually show the opposite - that even models that are no longer considered useful are as good or better at diagnosis than the mean human physician.
"The stories we wrote and fantasised were about AI you could blindly rely on, trust, and reason about."
Stanley Kubrick's 2001: A Space Odyssey - some of the earliest mainstream AI science fiction (1968, before even the Apollo moon landing!) was very much about an AI you couldn't trust.
That's a different kind of distrust, though: that was an AI that was capable of malice. In that case, "trust" had to do with loyalty.
The GP means "trust" in the sense of consistency. I trust that my steering wheel doesn't fly off, because it is well-made. I trust that you won't drive into traffic while I'm in the passenger seat, because I don't think you will be malicious towards us.
Going on a tangent here: not sure 2001's HAL was a case of outright malice. It was probably a malfunction (he incorrectly predicted a failure) and then conflicting mission parameters that placed higher value on the mission than the crew (the crew discussed shutting down HAL because he seemed unreliable, and he reasoned that this would jeopardize the mission and that the right course of action was killing the crew). HAL was capable of deceit in order to ensure his own survival, that much is true.
In the follow-up, 2010, when HAL's mission parameters are clarified and de-conflicted, he doesn't attempt to harm the crew anymore.
I... can actually see 2001's scenario happening with ChatGPT if it were connected to ship peripherals and told mission > crew and that this principle overrides all else.
In modern terms it was about both unreliability (hallucinations?) and a badly specified prompt!
You're completely out of it. We couldn't even get AI to hold a freaking conversation. It was so bad we came up with this thing called the Turing test, and that was the benchmark.
Now people like you are all, like, "well it's obvious the Turing test was garbage".
No. It's not obvious. It's that the hype got to your head. If we found a way to travel at light speed for 3 dollars, the hype would be insane, and in about a year we'd get people like you writing blog posts about how light speed travel is the dumbest thing ever. Oh man, too much hype.
You think LLMs are stupid? Sometimes we all just need to look in the mirror and realize that humans have their own brand of stupidity.
I invite you to reread what I wrote and think about your comment. You’re making a rampant straw man, literally putting in quotes things I have never said or argued for. Please engage with what was written, not the imaginary enemy in your head. There’s no reason for you to be this irrationally angry.
LLMs are glorified, overhyped autocomplete systems that fail, but in different, nondeterministic ways than existing autocomplete systems fail. They are neat but unreliable toys, not "more profound than fire or electricity" as has been breathlessly claimed.
I remember how ~5 years ago I said - here on HN - that AI will pass TT within 2 years. I was downvoted into oblivion. People said I was delusional and that it won’t happen in their lifetime.
Dig that quote up, find anyone who gave you a negative reply, and just randomly reply to them with a link to what you just posted here (along with the link to your old prediction) lol. Be like "told you so"
Please find me someone with any background in technology who thinks AI is complete garbage (zero value or close to it). The author doesn't think so, they assert that "perhaps 10% of the AI hype is based upon useful facts" and "AI functions greatly as a "search engine" replacement". There is a big difference between thinking something is garbage and thinking something is a massive bubble (in the case of AI, this could be the technology is worth hundreds of billions rather than trillions).
Yeah, well, this hype comes with a lot of financial investment, which means I get affected when the market crashes.
If people made cool things with their own money (or just didn't consume as much of our total capital), and it turned out not as effective as they would like, I would be nice to them.
Yeah the effectiveness of the hype on investment is more important than the effectiveness of the technology. AI isn't the product, the promise of the stock going up is. Buy while you can, the Emperor's New Clothes are getting threadbare.
Sounds like you bought the hype about LLMs without understanding anything about LLMs and are now upset that the hype train is crashing because it was based on promises that not only wouldn't but couldn't be kept.
It is more probable that people who have used it more have a more realistic and balanced view of its capabilities, based on experience. Unless their livelihood depends on not having a realistic view of the capabilities.
Hype is good. Hype is the explosion of exploration that comes in the wake of something new and interesting. We wouldn't be on this website if no one was ever hyped about transistors or networking or programming languages. Myriad people tried myriad ideas each time, and most of them didn't pan out, but here we are with the ideas that stuck. An armchair naysayer like the author declaring others fools for getting excited and putting in work to explore new ideas is the only true fool involved.
It's sad to see such a terrible comment at the top of the discussion. You start with an ad hominem against the author, assuming they want to "look smart" by writing negatively about hype; you construct a straw man to try to make your point; and you barely touch on any of the points made by them, and when you do, you pick on the weakest one. Shame.
To me, AI hype seems to be the most tangible/real hype in a decade.
Ever since the mobile & cloud eras hit their peaks around 2012 or 2014, we've had Crypto, AR, VR, and now AI.
I have some pocket-change bitcoin and ethereum, and played around for 2 minutes on my dust-gathering Oculus & Vision Pro; but man, oh man! Am I hooked on ChatGPT or what!
It’s truly remarkably useful!
You just couldn't get this type of thing in one click before.
For example, here’s my latest engineering productivity boosting query:
“when using a cfg file on the cmd line what does "@" as a prefix do?”
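For what it's worth, the usual convention, which many CLIs follow, is that an @-prefixed argument is a "response file": the tool reads additional command-line arguments from that file. Here is a minimal sketch of the same idea using Python's argparse, which supports it natively; this is only an illustration, not necessarily the specific tool the query was about:

```python
import argparse

# With fromfile_prefix_chars set, argparse treats "@something" on the
# command line as "read more arguments from the file 'something'",
# one argument per line by default -- the classic response-file pattern.
parser = argparse.ArgumentParser(fromfile_prefix_chars="@")
parser.add_argument("--verbose", action="store_true")
parser.add_argument("--output")

# So `myprog --verbose @extra.cfg` behaves as if the lines of extra.cfg
# had been typed directly on the command line.
args = parser.parse_args()
```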
It's astonishing how the two camps of LLM believers vs LLM doubters have evolved, even though we as people are largely very similar, doing similar work.
Why is it that e.g. you believe LLMs are truly revolutionary, whereas e.g. I think they are not? What are the things you are doing with LLMs day to day that are life changing, which I am not doing? I'm so curious.
When I think of things that would be revolutionary for my job, I imagine: something that could input a description + a few resources, and write all the code, docs, etc for me - creating an application that is correct, maintainable, efficient, and scalable. That would solve 80% of my job. From my trials of LLMs, they are nowhere near that level, and barely pass the "correct" requirement.
Further, the cynic in me wonders what work we can possibly be doing where text generation is revolutionary. Keeping in mind that most of our jobs are ultimately largely pointless anyway, so that implies a limit on the true usefulness of any tool. Why does it matter if I can make a website in 1/10th the time if the website doesn't contribute meaningfully to society?
> I imagine: something that could input a description + a few resources, and write all the code, docs, etc for me
It could be that you're falling into a complete-solution fallacy. LLMs can already be great at working on each of these problems. It helps to work on a small piece of these problems. It does take practice, and any sufficiently complicated problem will require practice and multiple attempts.
But the more you practice with them, you start getting a feel for it and these things start to eat away at this 80% you’re describing.
It is not self-driving. If anything, software engineering automation is only accessible to those who nerd out at it, the same way using a PC to send email or program once was.
A lot of the attention is on being able to run increasingly capable models on machines with fewer resources. But there's not much use fussing over Gemini 2.5 Pro if you don't already have a pretty good feel for deep interaction with Sonnet or GPT-4o.
It is already impressive and can seriously accelerate software engineering.
But the complete solution fallacy is what the believers are claiming will occur, isn't it? I'm 100% with you that LLMs will make subsets of problems easier. Similar to how great progress in image recognition has been made with other ML techniques. That seems like a very reasonable take. However, that wouldn't be "revolutionary", I don't think. That's not "fire all your developers because most jobs will be replaced by AI in a few years" (a legitimate sentiment shared to me from an AI-hyped colleague).
I think the difference is between people who accept nondeterministic behavior from their computers and those who don’t. If you accept your computer being confidently wrong some unknowable percentage of the time, then LLMs are miraculous and game changing software. If you don’t, then the same LLMs are defective and unreliable toys, not suitable as serious tools.
People have different expectations out of computers, and that accounts for the wildly different views on current AI capabilities.
Perhaps. Then how do you handle the computer being confidently wrong a large proportion of the time? From my experience it's inaccurate in proportion to the significance of the task. So by the time it's writing real code it's more wrong than right. How can you turn that into something useful? I don't think the system around us is configured to handle such an unreliable agent. I don't want things in my life to be less reliable, I want them to be more reliable.
(Also if you exist in an ecosystem where being confidently wrong 70% of the time is acceptable, that's kinda suspect and I'll return to the argument of "useless jobs")
Filters. If you can come up with a problem where incorrect solutions can be filtered out, and you accept that LLM outputs are closer to a correct answer than a random string, then LLMs are a way to get to a correct answer faster than previously possible, for a whole class of problems we previously didn't have answer generators for.
And that's just the theory; in practice, LLMs are orders of magnitude closer to generating correct answers than anything we previously had.
And then there's the meta aspect of them: they can also act as filters themselves. What is possible if you can come up with filters for almost any problem a human can filter for, even if that filter has a chance of being incorrect? The possibilities are impossible to tell, but to me very exciting/worrying. LLMs really have expanded the realm of what it is possible to do with a computer. And in a much more useful domain than fintech.
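A minimal sketch of that generate-and-filter loop. `generate_candidate` and `is_correct` are hypothetical stand-ins for whatever model call and domain-specific check (a test suite, a type checker, a solution verifier) you actually have:

```python
def generate_and_filter(prompt, generate_candidate, is_correct, max_tries=20):
    """Sample candidate answers until one passes the filter.

    The approach rests on verification (is_correct) being much cheaper
    and more reliable than generation (generate_candidate).
    """
    for _ in range(max_tries):
        candidate = generate_candidate(prompt)
        if is_correct(candidate):
            return candidate
    return None  # nothing passed the filter; hand the problem to a human
```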
As long as it’s right more than random chance, it’s potentially useful - you just have to iterate enough times to reach your desired level of statistical certainty.
If you take the current trend of the cost of inference and assume that’s going to continue for even a few more cycles, then we already have sufficient accuracy in current models to more than satisfy the hype.
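A rough way to quantify that, assuming attempts are independent and you have a reliable check for each answer (a big assumption, as the replies below point out): with per-attempt success probability p, the chance of getting at least one verified-correct answer in n attempts is

P(at least one success) = 1 - (1 - p)^n

so even p = 0.6 gets you to roughly 99% after five attempts. Everything hinges on that check being cheap and trustworthy.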
Firstly, something has to verify the work is correct right? Assuming you have a robust way to do this (even with humans coding it's challenging!), at some point the accuracy is so low that it's faster to create it manually than verify many times - a problem I frequently run into with LLM autocomplete and small scale features.
Second, on certain topics the LLM is biased towards the wrong answer and is further biased by previous wrong reasoning if it's building off itself. It becomes less likely that the LLM will choose the right method. Without strong guidance it will iterate itself to garbage, as we see with vibe coding shenanigans. How would you iterate on an entire application created by LLM, if any individual step it takes is likely to be wrong?
Third, I reckon it's just plain inefficient to iterate many times to get something we humans could've gotten correct in 1 or 2 tries. Many people seem to forget the environmental impact from running AI models. Personally I think we need to be doing less of everything, not producing more stuff at an increasing rate (even if the underlying technology gets incrementally more efficient).
Now maybe these things are solved by future models, in which case I will be more excited then and only then. It does seem like an open question whether this technology will keep scaling to where we hope it will be.
Your example is a better search engine. The AI hype however is the promise that it will be smarter (not just more knowledgeable) than humans and replace all jobs.
And it isn't on the way there. Just today, a leading state-of-the-art model, one that supposedly passed all the most difficult math entry exams and whatever they "benchmark", reasoned with the assumption of "60 days in January". It would simply assume that and draw conclusions, as if that were normal. It also wasn't able to correctly fill out all possible scores in a two-player game with four moves and three rules that I made up. It would get them wrong over and over.
It's not a better search engine; it's qualitatively different to search. An LLM composes its answers based on what you ask it. Search returns pre-existing texts to you.
That's the third type, actually. But for some reason GP decided to qualify each type as either smart or dumb. Here it is, put better: there are 3 types of people: those who resist hype, those who profit off of it, and those who enjoy it.
Exactly, and since it's good to profit, and it's good to have fun, surely it's the smart people who are doing that, and the dumb people who are resisting.
I certainly feel dumb for dismissing crypto as only an effective store of value with some minor improvements over the status quo (and some downsides) without considering the implications.
I imagine that the people who profit off of it enjoy it too, though? So perhaps we have a “hype enjoyers” superset that includes “profiteers” and “suckers”, but then we need “neutrals” or something to describe those who enjoy it without making or losing money. And then from there …
I’ll do whatever shit the industry wants me to do, I don’t particularly care if it’s dumb. I mean, it doesn’t FEEL great to work on dumb things, but at the end of the day, I’m around to help implement whatever the paycheck writer wants to see. I genuinely don’t mean that negatively either, I feel like I’m just describing… employment?
Software just isn’t a core part of my identity. I like building it, I like most of the other people who write it, and I like most/some of the people paying me to build it. When I’m done for the day, I very much stop thinking about it (not counting shower thoughts and whatnot on deeper problems)
So what if I end up fixing slop code from AI hype in a couple years? I have been cleaning up slop code from other people for 15 years. I am painfully aware of slop I left for others to deal with too (sorry).
So yeah anyway, your comment resonated. Hype is annoying, but if it sticks around and becomes dominant, my point is, whatever, okay, new thing to learn.
A winner of the Nobel Prize in Economics, Paul Krugman wrote in 1998, “The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.”
The amount of misuse this quote gets is as absurd as it is tiresome. People were using it to defend NFTs too. Someone’s opinion on one thing says nothing about someone else’s opinion about a different thing.
Yeah, the Krugman quote gets pulled out around literally everything. Bitcoin/Blockchain/NFTs, Metaverse, AR/VR, and I'm sure some other things I'm forgetting.
For every Krugman "the internet will be as useful as the fax machine" quote, there's a corresponding quote like Gartner in 2022 - "Gartner Predicts 25% of People Will Spend At Least One Hour Per Day in the Metaverse by 2026."[0]
The point is that the upside of investing in a technology before everyone else does is huge.
The downside is whatever you end up investing in it.
Hype is then technologies fighting for that slice of the pie.
The upside to current AI is that we've solved natural language interaction with computers. We have the Star Trek computer you can talk to. You probably don't want to, because natural language is naturally terrible at exactness, which is why humans have endless meetings to discuss the next meeting about next quarter's targets.
This was science fiction in 2020. Today I can do it on a top end consumer grade GPU.
By comparison, blockchain is bullshit - to find out why, just ask what the clearance rate of the Bitcoin network is. You won't change the world at 7 transactions a second. If someone manages to build a blockchain that can clear 10 million transactions a second, you'd have something on par with the current AI hype train, because it would be a legitimately useful finance tool and it would be worth investing billions in.
We certainly do not. I haven’t watched all of Star Trek, but from what I did the computer always understood the question and either executed it perfectly or was unable to comply for some reason outside of its control and broke down exactly what it was so the characters could fix it or find an alternative. Characters didn’t regularly have to verify the computer’s work, they just knew it was correct.
You're almost describing the lightning network, which depending on where you get your numbers can handle around 10-40x the transactions per second of Visa or MasterCard, neither of which are above 70k in anything I found with a quick search.
Granted it sounds like the lightning network has other issues.
He's 100 percent correct, and if you disagree you underestimate the effects of the fax machine on the economy and overestimate the effects of the internet of 2005.
> In technology, AI is currently the new big hype. Before AI, it was "The Cloud", which unfortunately has still not settled, but are now also being interwoven with AI.
Cloud computing is a multi-billion dollar industry and it underpins many of the largest internet companies out there. I fail to see how that's hype.
The hype is that "cloud" makes everything magically better / easier / more secure / more efficient, etc. Many companies jumped head first into large-scale cloud migrations and buildouts without any thought about where and how "cloud" makes sense, what the risks / downsides / true costs are, etc. Just like they are doing now with AI.
Server management, data centers, and related businesses existed ages ago. What's hype about this cloud? People are made to believe that the cloud is not a server or servers.
It's an enhancement sure, but not something completely new or different.
> it underpins many of the largest internet companies out there
And most of them would be fine without it. Rent some racks. Run some servers. That's exactly why it's hype.
Sure, there's plenty of hype. But it's justified to some extent at least. LLMs are one of the biggest advances in technology in human history. In computing, the big ones are:
* creation of computers
* personal computers
* the Internet and world wide web
* LLMs
So the hype is at some level entirely warranted - it's a revolutionary technology with real impact. As opposed to, for example, the hype around crypto or NFTs or blockchain or garbage like that.
Speaking as someone who just rebuked another commenter for daring to suggest Dijkstra’s algorithm should be on that list… yeah you may have a point. Mobile phones probably belong in the S-Tier with the Internet.
In 2012-2014 I lived in Thailand right as the Smartphone Wave was cresting. I saw people who had never owned a computer leapfrog laptops and get online for the first time (on their own terms) with a cheap smartphone. And sure, many of them just went straight to instagram, but also a generation of working class Thai nerds got access to Wikipedia at their leisure, and that has no doubt led to uncountable good things.
> I would like to hear some uplifting stories about creative things people do with their phones, rather than consume media
Ever since the iPhone X introduced infrared face tracking I’ve been using them in live theatre shows to drive digital characters. Here’s me and my friend Jordan playing 3 “AI Gods” during the Baltimore Rock Opera Society’s 2019 show Space Kumite using an iPhone X for facial motion capture: https://youtu.be/wSYWC1GCZA8?si=RtWDODFwHxEnaB9u
I almost think deep learning and mobile phones had an analogous and surprising applicability
Deep learning failed to solve self-driving. In 2012 people said self driving would probably be ubiquitous by 2018, and it definitely would be by 2020
Instead we got chat bots in 2022 - that turned out to be the killer app, certainly the most widely adopted one
Likewise, instead of mobile phones being used as an aid in the physical world, they became the world
For example, the media they transmit seems to be how elections are won and lost now. It’s the device by which people form impressions of huge parts of the world
Pretty arbitrary list, no? You could replace "LLMs" with various technologies that seem important (particularly at the beginning of their existence before their true value is determined). Why not: C, cloud computing, neural networks, Dijkstra's algorithm, WiFi, FFT, etc?
5 years after Dijkstra’s algorithm was invented, could non-tech people look back and see a huge impact on their everyday lives? No.
Of the things in your list, WiFi is the closest to rising to this level, but even WiFi can’t claim to be as big of a deal as the Internet (unless you’re being extremely cute).
Your list is all A- and B-Tier stuff. Definitely important, but not on the level of like, the personal computer. LLMs are likely on the S-Tier with the other things GP mentioned. Within a month of its release ChatGPT had 100 million users. Everyone in my life can tell me a way they’ve been affected by LLMs.
We can argue about whether LLMs are/will be good for humanity. We cannot argue about whether they’re a big deal, they are undeniable.
You raise good points and I want to dig into it more.
One counterpoint is that LLMs are still young. I think it's premature to proclaim now that LLMs are world-changing when we really don't know how they will affect us in the future. Take the internet, for example: it's undeniable that it changed the world, because now everything is so much more interconnected than it was 30 years ago. The internet has become a foundational element of technology. Will LLMs do the same? Surely we don't know.
Second counterpoint: "everyone in my life can tell me a way they've been affected by LLMs". Can they? How are they affected, for real? Everyone is certainly talking about them, but does that necessarily mean the impact is large? Honestly, for me life is basically the same and I'm a developer! Still go to the same job, the SDLC is largely the same, my hobbies are the same, eat the same food, etc. The important day-to-day is the same. Except that I code a little faster, have a fancier search / problem-solving tool, and every now and then I see a crappy AI-gen image. Compare to e.g. the internet, which has undoubtedly changed day-to-day life by drastically reducing physical interaction between people and systems.
Smartphones deserve to be on that list as a technology which fundamentally affected the way we interact, and once you realise that you also understand how important WiFi is.
It's a matter of opinion, and who cares what I think - if your list includes those things as fundamental changes that redefined society and technology, then that's valid.
All I am saying is that LLMs are amongst the most revolutionary changes that break new ground and completely change the world - in my personal opinion.
LLMs could be one of the most revolutionary changes. The problem is that LLMs as a phenomenon are mostly wishful thinking and an acceptance of downgraded quality and reliability.
Sometimes they do useful things. But the gap between "sometimes" and "reliably" is not trivial.
It's honestly rather culty. I can see the hope that mistakes in today's vibe coding will be fixed by future vibe coding, which will be better in every way.
But it's not a given that's going to happen. Not on the evidence so far.
Yeah I guess my point was there's a lot of opinion involved, especially at the moment. Which can play into the hype cycle. Full disclaimer I am an LLM semi-doubter so of course my opinion skews in that direction.
We should see what the future evidence shows for the value of LLMs. Part of what the article touched on is historically the evidence often doesn't back up the hype, hence their disappointment.
LLMs are at this point almost 8 years old (dating from the Attention is All You Need paper). If it's truly a revolutionary technology, you should be able to point me to a company leveraging LLMs to make absolute bank, leaving all of its non-LLM-based competitors in the dust.
But instead, what I see in this thread in defense of the revolutionary ideals of LLMs, is how good future LLMs are going to be. That's not a sign of a revolutionary technology, that's a sign of hype. If you want to convince me otherwise, point to how good they are right now.
Large Language Models are not 8 years old. GPT-3 or arguably GPT-2 is the first LLM.
Moreover, I can think of 0 revolutionary technologies that did what you've said in such timelines. The Internet, Smartphones, The Steam Engine - the idea that revolutionary technology is created and everything changes in an instant is bizarre fiction.
And how well would you know really? Not everyone using LLMs internally is screaming about it from the rooftops.
Steam locomotives: the first practical steam locomotive was Stephenson's Rocket in 1829, although maybe you want to count from the earlier locomotive developed for the Stockton and Darlington in 1825. In either case, by 1830 there was already a successful company producing steam locomotives, and everyone pursuing the building of railroads was using or imminently planning to convert to using steam locomotives for tractive power.
Naval technology can be even more stark. HMS Dreadnought was a revolution in terms of battleship design; all the major naval countries were building dreadnoughts before she was completed (actually, a few started before she was even laid down!).
That's a feature you see a lot of revolutionary technologies experience: even the earliest incarnations have enough "wow" factor to push people to using them and improving from the get-go.
>That's a feature you see a lot of revolutionary technologies experience: even the earliest incarnations have enough "wow" factor to push people to using them and improving from the get-go.
ChatGPT alone had over 400 Million weekly active users in February (and seem to have hit 500M) and was the 6th most visited site in the world last month.
This is software that was introduced to the public in Nov of 2022. To put it frankly, it's the fastest growing software service ever.
Google, YouTube, Facebook, Instagram and Twitter were the only sites with more visits. It's close enough to Twitter that it may well overtake it this month.
It's not up for debate. LLMs absolutely have that "wow" factor.
They've been around for years now. Obviously not as long as the others, but years nonetheless. What makes you think the years so far may not continue into the future?
The only pattern I can see is it being potentially unsustainable, but I find that hard to believe, considering I can run an LLM that is more capable than the SOTA from 2 years ago on a single box in my living room.
It's those with your attitude, in a state of constant smug denial, who look like tools right now. We'll see how it goes in a few years, but I doubt that will change. The constant goalpost-shifting as LLMs keep getting better and better is becoming embarrassing, and even if they stopped getting better from now on, which is certainly not the case, their impact on communications is on the same order of magnitude as that of the WWW. The curmudgeons I work with are too stubborn to realize that LLMs have moved past GPT-3, and most of the haters on HN sound the same.
That is the goal of a hype trend. To make even healthy skeptics feel like they are missing a great thing.
I am actually an early adopter though. I just don't like bragging around.
Researchers know that most "goal posts moved" are actually great challenges. It would be sad if they saw it as pessimism.
Think of it as a more creative way to engage in the hype.
Skeptics are more valuable than blind believers. You want a blind believer testing your airbag or a skeptic that will move some goal posts to try and sniff out any bugs?
There are so many items I'd put on the list before LLMs tbh. It's not even worth the time having the conversation. My quality of life went up much more in the dial up to dsl transition. I wouldn't say DSL to fibre was as much of a jump, e.g.
Anywho, that quality-of-life jump to DSL ultimately led to the enshittification of the internet I knew and loved, and now I'm stuck talking about LLMs I wish didn't exist - I would trade the conveniences of today for the internet of the 2000s any day. I genuinely view us as on a downwards QOL trajectory re: technology, even if it feels more novel and useful in the moment. Every new novelty in tech that makes life easier for us seems to actually further degrade the human experience of the internet. I don't know how to reconcile this trend with LLMs being "great".
It really feels like the more we do "good" to "progress" information technology breakthroughs, the worse the entire field feels. Much shallower, less personal, and dumber.
A "gaming PC" defines how GPUs are important. General people know something there ticks different, specialists know, industries know. It matters. It matters for a long time now.
"Broadband" was a hype, temporary. Once people understood that the speed is the thing, not any name, it stopped mattering. Now every couple of years there's a mini "new broadband tech" but it's all the same. Of course the tech is important. But revolutionary? I don't know. It boils down to being what is defined as generally "the internet".
When dial-up was initially the new thing, you could also take that away and the world would keep chugging; same for phone lines, same for electricity. But take away something the world has had time to become dependent on, and suddenly the world has a harder time.
The world's dependency on LLM tools just hasn't had time to develop, and that doesn't mean it won't. Most people here are likely on the bleeding edge of utilizing them. Most people not paying attention will just use it like they use Google, or not at all, until tools are built and they don't even realize they are using an LLM, or using a service that is dependent on an LLM.
Most people here, unless researchers actually browse HN (unlikely, that would be a dumb move), are the equivalent of bleeding-edge Notepad users in regard to LLMs.
If you don't train LLMs, you are just a user. I am sorry, that is the reality and it is cruel (for now).
If "prompt engineering" becomes a thing, it means the tech is less impressive than it declares itself to be.
It should be a natural language based interface. I will trust it and learn natural language instead, worst thing that can happen is I learn to communicate better (actually a good thing).
Take it away, leave it. Does it really matter in the context I presented?
If nobody was paying attention to what the foundational companies have been doing for the past few years, I'm pretty sure I'd be a wild advocate singing their praises on these and other forums.
But since everyone is extremely into it, I just kind of watch and try to measure my expectations.
What makes LLMs especially impressive is how skillfully they read text. That’s more interesting than text generation, since a lot of writing is formulaic and since they do not do the non-formulaic writing well at all. But I don’t think anyone truly knows why they are so adept at reading text.
> LLMs are one of the biggest advances in technology in human history.
See, that’s what the article is about.
Saying LLMs are on the same level as steam power, the computer, the internet, airplanes, etc. when the technology hasn't even been around long enough to have real impact, and all I read is extrapolation about how everything will be based on it "in the future" - that's the definition of hype.
LLMs do about half of my work for me. Today I spent most of my time interacting with o3 and 2.5-pro and I have accomplished what would take me 3 days to accomplish in the past. That’s real impact.
> In technology, AI is currently the new big hype. Before AI, it was "The Cloud", which unfortunately has still not settled, but are now also being interwoven with AI.
I envy being able to write a statement like this without mentioning The Blockchain.
The Cloud has joined The Information Superhighway as boring, foundational technology. Blockchain started out as hype and is still hype. AI/LLMs already provide infinitely more value than blockchain (which, to be fair, remains close to zero).
I typically disagree with the practice of hating things just because they are trending, this is the stuff hipsters are made of.
But, there is a such thing as negative hype. A seller of AI models telling the world he's not hiring engineers anymore (and they too can cut their workforce) because his models are that good would be negative hype.
Remember all those articles about some minor advance in surface chemistry which was then hyped into Trillion Dollar Industry Real Soon Now? They usually appeared in one of Nature's off-brand journals, or just arxiv, not in Chemical Engineering News or IEEE Trans. on Power Engineering. Such articles usually lacked the usual performance numbers (Wh/L, Wh/Kg, and Wh/$).
Then there's Javascript framework hype, which makes everyone run very hard to stay in the same place.
AI is at least making rapid progress. It's been less than three years since ChatGPT came out. Having lived and worked through the "AI Winter" (1984-2005), this is an improvement. The main problem now remains "hallucinations", or worse, "agentic" systems which act on hallucinations.
>Nobody wants to talk to an AI when they need support. We all HATE that! It is bad enough that when you need service and support you end up talking to someone on the other side of the planet who's using some kind of answer sheet with absolutely no clue on how to really help you.
This is true from my personal experience. I switched my fiber provider because, with my previous provider, I was never able to talk to a human.
I've learned to treat AI hype the same way I think about sports. Just ignore it. Sure I can name the popular models of the day in the same way I can name the Dallas Cowboys, but none of it matters and none of it affects me.
Grammarly is among the best applications of LLMs I have seen. I'm glad to be paying for it. It even detects and modifies "typical AI phrases," which is ironic.
“AI is misleading, there’s no actual intelligence.”
Oh wow, you figured out “Artificial Intelligence” isn’t literal. You should tell the rest of the planet. Maybe we should rename it “Statistical Pattern Prediction Machines That Are Better at Your Job Than You Are.” It’s a mouthful, but more honest.
Kind of a bad article (IMO) when it sideswipes cloud computing.
The "cloud" was actually useful, and helped to scale so many companies that could bring their products to millions of people quickly and without too many issues, and with good reliability.
Blockchain, hyper enabled gambling, cryptocurrency, and jamming "AI" into everything are bad though.
Hype is short for hyperbole, and if there is any hyperbole more excessive than claiming that LLMs are vaguely equivalent to people by calling them "AI", it would have to be a claim of godhood or similar.
I tend to agree with the author, but I think the real problem with hype is the opportunity cost. Somewhere along the way we bought into this idea that "the promise of X" is worth more than "the reality of X".
The longer the hype goes on - which is to say the longer it takes to demonstrate the hype is actually reality - the more people become more heavily invested in it.
If the hype never materialises, you basically build a larger and larger black swan when the crash arrives. The people who win are the few who got out early enough, and everyone else who ignored the hype.
If the hype does eventuate, the number of people you're now competing with in a new market is proportional to the length of time it took to eventuate, because for all those people that were onboard, some of them would enter the space in competition, not just as consumers. Again, the only outsized winners are those who got out early enough. The rest are now just working BAU. Like everyone else who ignored the hype.
In both cases, you're just as good or better off ignoring the hype because the chances of you winning big are tiny (unless you're a billionaire, but in that case you were already winning anyway).
If the hype materializes, it can still change course.
Learned some classical ASP to join the web dev hype? Well, tough luck. The hype was good but materialized in another vessel. If you learned enough fundamentals, you could still jump ship.
But what are the fundamentals for LLM? Hype rarely reveals them right away.
For LLMs, tough luck. You can't learn fundamentals with your toy PC yet. You need to be rich and have server grade hardware. Also, you cannot buy server grade hardware because it is out of stock.
But that's ok, few people will learn it and I will look on from the sidelines until an accessible opportunity reveals itself.
If it doesn't get accessible, well, it will probably be some niche tech like COBOL. Pays awesomely, but not my jam. That is winning for some people.
So many possible scenarios. You or someone else will try to expand and deflect on a single one, won't you? I know you will. No crystal ball, just intuition.
So, I will fucking learn what I can learn, what I like to learn, and I will hope I can survive on what it pays. That is winning in my books (at least on such narrow economic definition of it).
I like LLMs, but the good part is in training. Just using them is kind of... being a user. I can do that without pretending it counts as truly learning the tech.
You are presenting an economic angle. I won't ignore it, but there is more to it for me.
I agree that situations are much more complex than an economic analysis. And I'm completely onboard with doing the things that take your interest. I do agree with most of the points you're making, and for some I can clarify the direction of my thoughts:
> But what are the fundamentals for LLM? Hype rarely reveals them right away.
I think this is true, and that it is actually a core component of hype: it is not interested in fundamentals. This is true for hype of unproven things, but is also true for hype of proven things (e.g. the iPhone). When I compare the hype around AI to Crypto, I see AI as having a much stronger case for "there are fundamentals here". However, when I compare them BOTH to the iPhone, it's a different story.
The iPhone already existed when its hype intensified. In this way the hype doesn't really need to "materialise", the thing is already proven, and the hype is the world exploring a thing that exists, not a thing that is promised (e.g. Crypto) or partially-proven (e.g. AI).
> If it doesn't get accessible, well, it will probably be some niche tech like COBOL. Pays awesomely, but not my jam. That is winning for some people.
I agree, and this is not dissimilar from:
> So, I will fucking learn what I can learn, what I like to learn, and I will hope I can survive on what it pays. That is winning in my books (at least on such narrow economic definition of it).
This sounds to me a lot like the acting profession. People do it because they love it, even though getting gigs is hard, stable income is hard, callbacks are rare, there are skill and relationship and economic ceilings everywhere, but breaking through implies large wins in all of those domains and more.
I don't at all begrudge people for making low-odds bets - I have a number of them myself, just in other places - but I think assessing your bets is important.
> Premise, reality, etc. So malleable.
This is where I pretty strongly disagree. [Assuming you mean "promise"] promises are extremely malleable, but that doesn't make them valuable. In fact I would say it reduces their value, and not realising this is exactly what grifters and dodgy salespeople (etc) hope for.
Reality is not malleable. This is the reason many people who have not bought into the hype keep asking for examples of where AI is useful. They're not necessarily trying to suggest that it won't ever reach the heights the hype claims (though some certainly are suggesting that). But hype is presented as value NOW, and while reality may change over time, I think a lot of people RIGHT NOW see AI as either not helpful to them, or detrimental to industries broadly (visual art in particular).
It's worth noting though that these two views are somewhat contradictory: we say that it is detrimental right now but also that it is not useful right now. It can be both but the argument is naturally weak.
For my part: I'm starting to find AI enormously useful for search and self-instruction purposes. It's essentially replaced StackOverflow and related sites for me.
But that reality for me doesn't match what the hype says is currently happening. So I consider it a high risk area until the situation changes. The fact that it IS hyped so much is a counter-signal to me that suggests I shouldn't believe everything that is being said until I start seeing strong evidence.
Hmm I think I see what you mean - and that's fair.
I guess I'm a bit too conservative to trust I'd get out soon enough (in financial investment, or work time spent) if the hype doesn't materialise. And if it does, there's always time to learn a new domain, etc.
Fancy me pushing into the Late Majority of the bell curve. Must be getting old.
I find the "only 10% is real" part interesting. I'm starting to notice two distinct camps emerging, I think? One says transformer architectures are just search (/bad search); the other says they are the first commercial-grade compression of language into real vectors. The trap, I think, is that it's somewhat subtle in how they discuss things?
I worked for a very large industrial some years ago. Our leaders were so ate up with the VR hype that they greenlit a VR training experience for field technicians wherein the technician would wear a VR headset and then use a virtual iPad to diagnose some of our heavy equipment. Someone asked why they couldn’t just use an iPad IRL to learn. There was no rational justification.
Shortly after that web 3.0 took off and I started to hear that we were going to use blockchain to track the maintenance of our heavy equipment.
Ah, yes, the classic “this thing I don’t like is medieval alchemy” comparison. Because if you can’t refute the tools, just compare it to snake oil. It’s cute how this guy thinks he’s the only one who’s noticed not every AI startup is curing cancer. The tech world has always had grifters. That doesn’t mean the tools themselves are fake—it means some people are dumb and others are greedy, which is kind of how humanity works.
Congratulations on identifying marketing exists. What’s next? You gonna blow the whistle on toothpaste commercials? Of course hype is driven by profit. That’s literally the point of business. But you know what else was hyped once? The internet. And electricity. And antibiotics. Being annoyed that people are excited about tech is not a worldview—it's a vibe. A really boring one.
The reason for this AI hype is how bullshit Western economies have become. I write an AI email which you summarise and we can both be happy that we contributed to the GDP through spending tokens.
In the meantime, nothing tangible has been produced. We’re just hyping ourselves over selling snake oil faster.
This strikes me as curmudgeonly and unnecessarily contrarian.
While it's true that investors, entrepreneurs, corporations, etc. have a vested interest in AI to the tune of trillions of dollars, the impulse to dismiss this as 90% hype (as the author does) is insane.
We're only three years into this, and we have:
- LLMs with grad student-level competency in everything
- Multimodality with complex understanding of photography and technical documents
- Image generators that can generate high-quality photos, in any style, with just a text description
- Song generators that make pretty decent music and video generators that aren't half bad
- Excellent developer tooling & autocomplete; very competent code generation
This is still early and the foundations are still being laid. Imagine where we'll be in 10 years, assuming even a linear growth rate in capabilities.
Think of what the internet is today, and its permanence in everything, and where it was just 30 years ago.
By all means, resist the hype - but don't go so far in the other direction that your head is in the sand.
Why would we assume linear growth in capabilities and not a logarithmic growth rate? It seems time and time again, it gets harder and harder to make progress.
I think back to using Dragon Natural Dictation in 1998, there seemed to be exponential promise and a ton of excitement in my young mind. But the reality was more logarithmic improvements so it is finally pretty good 25 years later.
Combine an exponential growth in investments (that is inherent to our economy) with a logarithmic return in capabilities, and you get a linear increase in capabilities.
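As a toy model of that composition (the growth constant and scale factor below are arbitrary placeholders, just to show the shape):

    import math

    # Toy model: investment grows exponentially with time, while capability
    # grows only with the log of investment; composed, capability is linear.
    def investment(t, k=0.5):
        return math.exp(k * t)                    # exponential in t

    def capability(t, a=1.0, k=0.5):
        return a * math.log(investment(t, k))     # = a * k * t, linear in t

    print([round(capability(t), 2) for t in range(5)])  # [0.0, 0.5, 1.0, 1.5, 2.0]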
Sorry, these claims are just not true. AI generations in these categories are impressive on release, but blatantly generic, recognizable, predictable and boring after I have seen about 100 or so. Also, if you want to put them to use to replace "real work" outside of the ordinary/pretrained, you hit limitations quickly.
The scaling laws of the Internet were clear from the start.
Probably these things exist, but if they are truly unhypeable, only a select few will ever know of them, and their economic and societal impact will remain negligible.
You can measure hype by how many people are talking about an idea space. Write an article about "X hype is bad" and yup, you're talking about the idea space i.e. you're participating in the hype.
As a Linux user, I do think there's a bit of an echo chamber which leads to groupthink such as this article. And it's ironic, because a lot of AI has Linux as its underpinning.
The amount of AI boosters and defenders wading in here to bash anyone with a less-than-glowing perspective of the technology is...disappointing. If the technology is truly as revolutionary as you proclaim it to be, then why are you here defending it instead of changing the world with it, as you claim it can? Stop trying to convince detractors with empty arguments, and show us the actual results you claim to have.
> Hype is always bad.
As someone who actually has to implement stuff and support it thereafter, rather than just bolt it onto my resume and ride the hypetrain to equityville, I wholeheartedly agree with OP's core message here. After web2.0 devolved into walled gardens, closed APIs, and gargantuan surveillance apparatuses designed to serve advertisements in a more precise way than actual cruise missiles, I became soured on the very field I am also fiercely passionate about (IT).
The modern technology field is exclusively hype-oriented. There's something new every year you simply must adopt and become an expert in, or you'll lose your job. The ROI is irrelevant (namely because it rarely exists for most organizations in the early adopter phase, if ever), the functionality is irrelevant, the use case is especially irrelevant. It's "new", it's shiny, and you simply must have it to be a "modern" business.
Hype is meant to be a direct replacement for objectivity. It warps math and statistics to justify its necessity (like those cloud migration calculators every vendor likes to point to, in order to justify a wholesale migration off your present estate), the salesfolk strong-arm your leadership into adoption regardless of the advice of their internal architects and engineers, and C-Suites can recite dozens of brands in the Gartner "Leader" quadrant for any given technology while simultaneously having no clue what that technology actually does or is used for. It's all hype.
And an economy built on the hype-cycle has very real, immediate consequences for the average person. It raises energy rates (https://www.usatoday.com/story/business/energy-resource/2025...) and takes water from communities not provisioned for such large scale industrial use (https://www.11alive.com/article/news/investigations/11alive-...), both of which harm locals while the companies skirt by without paying for their fair share of consumption (https://eelp.law.harvard.edu/extracting-profits-from-the-pub...). That doesn't even get into the grifts involved with IP, copyright, or elimination of well-paying jobs (https://green.spacedino.net/the-final-grift/), or the boom-bust cycles that often saddle consumers and taxpayers with the steep losses (both monetary and jobs) incurred once the early investors have sailed off into the sunset with their cartoonishly-large sacks of money and a new superyacht to show for their efforts.
The most immediate and pressing concern for the AI hype is the squandering of finite resources (fresh water, land, and energy chief among them) to train and operate these models on a highly speculative assumption that this revolution in AI will finally be the one that brings humanity into the future, cures all disease, and enables us all to live a life of leisure (rather than just the offensively rich) while waiting for immortality to arrive so we can explore the cosmos. To make this po-faced argument in the face of the present climate disaster demonstrates a complete lack of basic situational awareness, undermining any sort of credibility they may have with anyone who can consider the interactions of two separate systems; we can literally only pursue AI or ameliorate climate change right now with our current capacities in energy, water, and rare earths, and of those two only the latter is actually, demonstrably solved and merely requires global implementation. It demonstrates a selfish mindset of self-preservation (your career in AI) over the protection and support of the whole, and it's all just hype.
I'm not opposed to new technologies. Containers are a game changer, Kubernetes is improving its usability (in baby steps), composable infrastructure through code is a godsend, and the consumption-based models of public cloud providers have made hosting your own place on the internet cheaper than ever before. We've had some great innovations in these past ten years, almost all of which have been completely overshadowed by the perpetual hype train around get-rich investment schemes (crypto, blockchain, NFTs, AI). The hype is the problem, which is what the author was getting at.
I hate obnoxious assholes who insist on taking a contrarian position on technology to such an extreme they become the same puritans they revolted against early in their career.
What a shallow, negative post. "Hype" is tautologically bad. Being negative and "above the hype" makes you sound smart, but this post adds nothing to the discussion and is just as fuzzy as the hype it criticizes.
> It is a real shame that some of the most beneficial tools ever invented, such as computers, modern databases, data centers, etc. exist in an industry that has become so obsessed with hype and trends that it resembles the fashion industry.
Would not the author have claimed at the time that those technologies were also "hype"? What consistent principle does the author use (a priori) to separate "useful facts" from "hype"?
Or, if the author would have considered those over-hyped at the time, then they should have some humility because in 10 years they may look back at AI as another one of the "most beneficial tools ever invented".
> In technology, AI is currently the new big hype. ... 10% of the AI hype is based on useful facts
The author ascribes malice to people who disagree with them about the use of AI. The author says proponents of AI are "greedy", "careless", unskilled, inexperienced, and unproductive. How does the author know that these people don't believe that AI has great utility and potential?
Don't waste your time on this article. I wish I hadn't. Go build something, or at least make thoughtful, well defined critiques of the world.
>> Would not the author have claimed at the time that those technologies were also "hype"? What consistent principle does the author use (a priori) to separate "useful facts" from "hype"?
Are you saying someone hyped ... databases? In the same way as AI is hyped today?
This is a tweet from Sam Altman, dated April 18 2025:
https://x.com/sama/status/1913320105804730518
Whence I quote:
Do you remember someone from the databases industry claiming that databases are going to be "like the renaissance" or like the industrial revolution? Oracle? Microsoft? PostgreSQL?
Here's another one with an excerpt of an interview with Demis Hassabis, dated April 17, 2025:
https://x.com/reidhoffman/status/1912929020905206233
Whence I quote:
Who, in databases, has claimed that "in the next 10, 15 years we can actually have a real crack at solving all disease"? Data centers? Computers in general? All disease?
The last time I remember the hype being even remotely real was Web 2.0. And most of everything that made that hypeworthy is long gone (interoperability and open standards like RSS or free APIs) or turned out to be genuinely awful ("social media was a mistake") or has become far worse than what it replaced (SaaS).
It is an interesting comparison. Databases are objectively the more important technology, if we somehow lost AI the world would be equal parts disappointed and relieved. If we somehow lost database technology we'd be facing a dystopian nightmare.
If we cure all disease in the next 10-15 years, databases will be just as important as AI to that outcome. Databases supported a technology renaissance that reshaped the world on a level that is difficult to comprehend. But because most of the world doesn't interact directly with databases, as a technology it is not the focus of enthusiastic rhetoric.
LLMs are further along tech-chain and they might be an important part of world-changing human achievements, we won't know until we get there. In contrast, we can be certain databases were important. I imagine the people who were influential in their advancement understood how important the tech would be, even if they didn't breathlessly go on about it.
My favorite that I’ve heard a couple times is “solve math” and/or “solve physics”
Altman’s claimed LLMs will figure out climate change. Solid stuff.
Sure, databases didn't get as much hype but that's partly because they are old.
Look at something more recent: "cloud", "social networking", "blockchain", "mobile".
Plenty of hype! Some delivered, some didn't.
I’m not sure how hyped up databases were during their advent, but what do you mean by “partly because they are old”? The phonograph prototypes that were made by Thomas Alva Edison are old and they were hyped in a way. People called him the “Wizard of Menlo Park” for his work because they were seeing machines that could talk (or at least reproduce sounds in the same way photographs let you reproduce sights.)
Which of those things claimed it would be "like the renaissance" or that we'd cure all diseases?
In the clip I link above Hassabis says he hopes that in 10-15 years' time we'll be looking back on the way we do medicine today like the way they did it in the middle ages. In 10-15 years time. Modern medicine - you know, antibiotics, vaccines, transplants, radiotherapy, anti-retrovirals, the whole shebang, like medieval times.
Are you saying - what are you saying? Who has said things like that ever before in the tech industry? Azure? Facebook? Ethereum? Who?
Ray Kurzweil?
AI is old too.
“In from three to eight years we will have a machine with the general intelligence of an average human being.” (Minsky, 1970)
https://aiws.net/the-history-of-ai/this-week-in-the-history-...
The use of semantic web and linked data (a type of distributed database and ontology map) for protein folding (therefore, medical research too) was predicted by many and even used by some.
Databases were of key interest. Namely, the problem of relating different schemas.
So, yes. _It was claimed_ that database tech could help. And it probably did so already. To what extent I really don't know. Those consortiums included many big players.
It was never hyped, of course. It did not stand the test of time either (as far as I know).
Claims, as you can see, don't always fully develop into reality.
LLMs now need to stand a similar test of time. Will they become niche and forgotten like semweb? We will know. Have patience.
You're taking a sliver of truth as though it dismantles their entire argument. The point was, nobody was _claiming_ databases would cure all diseases. That's the argument around the hype of AI here.
Maybe it will cure all diseases, I don't know. Hard to put an honest "I don't know" in a box, isn't it?
I am actually having a blast seeing the hooks for many kinds of arguments and counter-arguments.
It will not
I guess OP hated it when Bill Gates said "personal computers have become the most empowering tool we've ever created."
Or Vint Cerf, "The Internet is the most powerful tool we have for creating a more open and connected world."
Yea, and the internet never went through a hype bubble that ultimately burst ¯\_(ツ)_/¯
The thing is, the dot com hypesters were right about the impact of the Internet. Their timing was wrong, and they didn't pick the right winners mostly, but the one thing they were right about was that the Internet would change the world significantly and drive massive economic transformation.
it doesn't really compare, but the "paperless office" was hyped for decades
They did not say databases were hyped. Although I think computers (both enterprise and personal) were hyped, and so were the internet and smartphones, long before they began to deliver value. It takes a decade to say which hype lives up to expectations and which doesn't.
> Are you saying someone hyped ... databases? In the same way as AI is hyped today?
Nah, but they hyped Clippy (Office Assistant). Oh wait... maybe that's "AI" back in the days...
> Who, in databases, has claimed that "in the next 10, 15 years we can actually have a real crack at solving all disease"?
I doubt anyone claimed 10-15 years specifically, but it does actually seem like a pretty reasonable claim that without databases progress will be a snail's pace and with databases it will be more of a horse's trot. I imagine the human body requires a fair amount of data to be organised to analyse and simulate all the parts, and I'd recommend storing all that in some sort of database.
This might count as unsatisfying semantics, but there is a huge leap going from physical ledgers and ad-hoc formats to organised and standardised data storage (ie, a database - even if it is just excel sheets that counts to me). Suddenly scientists can record and theorise on order(s) of magnitude more raw material and the results are interchangeable! That is a big deal and a necessary step to make the sort of progress we can make in modern times.
Regardless, it does seem fair to compare the AI boom to the renaissance or industrial revolution. We appear to be poking at the biggest thing to ever be poked in history.
> but it does actually seem like a pretty reasonable claim that without databases progress will be a snail's pace and with databases it will be more of a horse's trot.
This isn't what anyone is saying
Fair point; let me put it this way:
Database hype was relatively muted and databases made a massive impact on our ability to cure diseases. AI hype is wildly higher and there is a reasonable chance it will lead to the curing of all diseases - it is going to have a much bigger impact than databases did.
The 10-15 year timeframe is obviously impossible for logistic reasons if nothing else - but the end goal is plausible and the direction we need to head in next as a society is clear. As unreasonable claims go it is unobjectionable and I'd rather be standing with Hassabis in the push to end disease than with naysayers worried that we won't do it as quickly as an uninformed optimist expects.
> there is a reasonable chance it will lead to the curing of all diseases
This is complete nonsense. AI might help with the _identification_ of diseases, but there is nothing to support the idea that every human ailment is curable.
Perhaps AI can help find cures, but the idea that it can cure every human ailment deserves to be mocked.
> I'd rather be standing with Hassabis in the push to end disease than with naysayers worried that we won't do it as quickly as an uninformed optimist expects.
It's a good thing those aren't our only options!
> but there is nothing to support the idea that every human ailment is curable.
There is; we can conceivably cure everything we know about right now. There isn't a law of nature that says organisms have to live less than centuries and we can start talking seriously about brain-in-jar or consciousness uploading now that we appear to be developing the computing tech to support it.
Everything that exists stops eventually but we're on the cusp of some pretty massive changes here. We're moving from a world with 8 1-in-a-billion people wandering around to one with an arbitrary number of superhuman intelligences. That is going to have a major impact larger than anything we've seen to date. A bunch of science fiction stuff is manifesting in real time.
I think you're only reinforcing the contrast. Yes, databases are massively useful and have been self-evidently so for decades; and yet, none of the current outlandish AI claims were ever made about them. VCs weren't running around 30 or 40 years ago claiming that SQL would cure disease and usher in a utopia.
Yes, LLMs are useful and might become vastly more useful, but the hype:value ratio is currently insane. Technologies that have produced like 9 orders of magnitude more value to date have never received the hype that LLMs are getting.
Some issues with this "hype":
- Company hires tens of people to build a solution to an undefined problem. They have a solution (and even that is rather nebulous) and are looking for a problem to solve.
- Company pushes the thing down your throat. The goal is not clear. They make authoritative-sounding statements on how it improves productivity, or throughput, or some other metric, only to retract later when you pull those people into a private meeting.
- People who claim all the things that nebulous solution can accomplish when, in fact, nobody really knows because the thing is in a research phase. These are likely the "charlatans" OP is referring to, and s/he's not wrong.
- Learning the "next hot thing" instead of the principles that lead to it and, worse still, applying the "next hot thing" in the wrong context when the trade-offs have reversed. My own example: writing a single-page web application with the "next hot JS framework" when you haven't even understood the trade-off between client-side and server-side rendering (this is just my example, not OP's, but you can probably relate.)
etc. etc. Perhaps the post isn't very well articulated, but it does make several points. If you haven't experienced any of the above, then you're just not in the kind of company that OP probably has worked at. But the things they describe are very real.
I agree there is nothing wrong with "hype" per se, but the author is using the word in a very particular context.
There are issues with our current economic model and it boils down to rent. The service model is allowing the owners and controllers of capital to set up systems that allow them to extract as much rent as possible; AI is just another approach to this.
And then, if it is successful for building things as you say, we'll have yet another overproduction issue, as that building becomes essentially completely automatic. Read about how overproduction has affected society for pretty much ever, and then ask yourself whether it will really be good for the masses.
Additionally all the media is so thoroughly captured that we're in "1984" yet so few people seem to realise it. The elites will start wars, crush people's livelihoods and brainwash everyone into being true believers as they march their sons to war while living in poverty.
"Don't waste your time on this article."
Doesn't telling others not to read something just make them curious and want to read it even more? Do HN readers actually obey such commands issued by HN commenters?
[flagged]
It's one of the stupidest concepts on the face of the earth and tons of people subscribe to it unknowingly: hype = bad.
AI is one of the most revolutionary things to ever happen in the last couple of years. But with enough hype tons of people now think it's complete garbage.
I'm literally shocked how we can spend a couple decades fantasizing and writing stories about this level of AI, only to be bored of it in two years when a rudimentary version of it is finally realized.
What especially pisses me off is the know-it-all tone, like they knew all along it's garbage and that they're above it all. These people are tools with no opinions other than hype = bad and logic = nonexistent.
> I'm literally shocked how we can spend a couple decades fantasizing and writing stories about this level of AI
It was never this level of AI. The stories we wrote and fantasised were about AI you could blindly rely on, trust, and reason about. No one ever fantasised about AI which couldn’t accurately count the number of letters in a common word or that would give you provably wrong information in an assertive authoritative tone. No one longed for a level of AI where you have to double check everything.
> No one longed for a level of AI where you have to double check everything.
This has basically been why it's a non-starter in a lot of (most?) business applications.
If your dishwasher failed to clean anything 20% of the time, would you rely on it? No, you'd just wash the dishes by hand, because you'd at least have a consistent result.
That's been the result of AI experimentation I've seen: it works ~80% of the time, which sounds great... except there's surprisingly few tasks where a 20% fail rate is acceptable. Even "prompt engineering" your way to a 5% failure/inaccuracy rate is unacceptable for a fully automated solution.
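As a rough back-of-the-envelope, assuming a fully automated chain where every step has to be right and the error rates are independent (both generous assumptions):

    # End-to-end success of an automated chain where every step must be right.
    def pipeline_success(per_step_accuracy: float, steps: int) -> float:
        return per_step_accuracy ** steps

    for acc in (0.80, 0.95, 0.99):
        print(acc, [round(pipeline_success(acc, n), 3) for n in (1, 5, 10)])
    # 0.8  [0.8, 0.328, 0.107]
    # 0.95 [0.95, 0.774, 0.599]
    # 0.99 [0.99, 0.951, 0.904]

Even a 95%-accurate step compounds into roughly a coin flip over ten steps.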
So now we're moving to workflows where AI generates stuff and a human double checks. Or the AI parses human text into a well-defined gRPC method with known behavior. Which can definitely be helpful, but is a far cry from the fantasized AI in sci-fi literature.
It feels a bit like LLMs rely a lot on _us_ to be useful. Which is a big point to the author's article about how companies are trimming off staff for AI.
> how companies are trimming off staff for AI
But they're not. That's just the excuse. The real truth is somewhere along the lines of pandemic over-hiring and a bad / unstable economy.
Also attempts to influence investors/stock-price.
https://newrepublic.com/article/178812/big-tech-loves-lay-of...
We've frozen hiring (despite already being under staffed) and our leadership has largely pointed to advances in AI as being accelerative to the point that we shouldn't need more bodies to be more productive. Granted it's just a personal anecdote but it still affects hundreds of people that otherwise would have been hired by us. What reason would they have to lie about that to us?
One type of question that a 20%-failure-rate AI can still be very useful for is ones that are hard to answer but easy to verify.
For example say you have a complex medical problem. It can be difficult to do a direct Internet search that covers the history and symptoms. If you ask AI though, it'll be able to give you some ideas for specific things to search. They might be wrong answers, but now you can easily search specific conditions and check them.
Sort of P vs. NP for questions.
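A sketch of that shape of problem; generate_candidates and verify below are hypothetical placeholders for an LLM call and whatever cheap, independent check your domain allows:

    # Hard to answer, easy to verify: let an unreliable generator propose
    # candidates and keep only the ones that pass an independent check.
    def generate_candidates(question: str) -> list[str]:
        # Placeholder: in practice, ask an LLM for a few specific hypotheses.
        return ["hypothesis A", "hypothesis B", "hypothesis C"]

    def verify(candidate: str) -> bool:
        # Placeholder: in practice, a targeted search, a test suite, a lookup.
        return candidate.endswith("B")

    def answer(question: str) -> list[str]:
        return [c for c in generate_candidates(question) if verify(c)]

    print(answer("what could explain these symptoms?"))  # ['hypothesis B']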
> For example say you have a complex medical problem.
Or you go to a doctor instead of imagining answers.
You put too much faith in doctors. Pretty much every woman I know has been waved off for issues that turned serious later, and even as a guy I have to do above-average legwork to get them to care about anything.
Doctors are still better than LLMs, by a lot.
All the recent studies I’ve read actually show the opposite - that even models that are no longer considered useful are as good or better at diagnosis than the mean human physician.
literally the LAST place I would go (I am American)
"The stories we wrote and fantasised were about AI you could blindly rely on, trust, and reason about."
Stanley Kubrick's 2001: A Space Odyssey - some of the earliest mainstream AI science fiction (1968, before even the Apollo moon landing!) was very much about an AI you couldn't trust.
that's a different kind of distrust, though, that was an AI that was capable of malice. In that case, "trust" had to do with loyalty.
The GP means "trust" in the sense of consistency. I trust that my steering wheel doesn't fly off, because it is well-made. I trust that you won't drive into traffic while I'm in the passenger seat, because I don't think you will be malicious towards us.
These are not the same.
Going on a tangent here: not sure 2001's HAL was a case of outright malice. It was probably a malfunction (he incorrectly predicted a failure) and then conflicting mission parameters that placed higher value on the mission than the crew (the crew discussed shutting down HAL because it seemed unreliable, and he reasoned it would jeopardize the mission and the right course of action was killing the crew). HAL was capable of deceit in order to ensure his own survival, that much is true.
In the followup 2010, when HAL's mission parameters are clarified and de-conflicted, he doesn't attempt to harm the crew anymore.
I... actually can see the 2001's scenario happening with ChatGPT if it was connected to ship peripherals and told mission > crew and that this principle overrides all else.
In modern terms it was about both unreliability (hallucinations?) and a badly specified prompt!
>It was never this level of AI.
You're completely out of it. We couldn't even get AI to hold a freaking conversation. It was so bad we came up with this thing called the turing test and that was the benchmark.
Now people like you are all, like "well it's obvious the turing test was garbage".
No. It's not obvious. It's the hype got to your head. If we found a way to travel at light speed for 3 dollars the hype would be insane and in about a year we get people like you writing blog posts about how light speed travel is the dumbest thing ever. Oh man too much hype.
You think LLMs are stupid? Sometimes we all just need to look in the mirror and realize that humans have their own brand of stupidity.
I invite you to reread what I wrote and think about your comment. You’re making a rampant straw man, literally putting in quotes things I have never said or argued for. Please engage with what was written, not the imaginary enemy in your head. There’s no reason for you to be this irrationally angry.
LLMs are glorified, overhyped autocomplete systems that fail, but in different, nondeterministic ways than existing autocomplete systems fail. They are neat, but unreliable toys, not “more profound than fire or electricity” as has been breathlessly claimed.
I remember how ~5 years ago I said - here on HN - that AI will pass TT within 2 years. I was downvoted into oblivion. People said I was delusional and that it won’t happen in their lifetime.
Dig that quote up, find anyone who gave you a negative reply, and just randomly reply to them with a link to what you just posted here (along with the link to your old prediction) lol. Be like "told you so"
Don't be mad about their opinions, be grateful for the arbitrage opportunity
I like this approach; the challenge is that without a good grasp of finance it is really hard to leverage these opportunities.
Please find me someone with any background in technology who thinks AI is complete garbage (zero value or close to it). The author doesn't think so, they assert that "perhaps 10% of the AI hype is based upon useful facts" and "AI functions greatly as a "search engine" replacement". There is a big difference between thinking something is garbage and thinking something is a massive bubble (in the case of AI, this could be the technology is worth hundreds of billions rather than trillions).
Nobody is talking about a financial bubble. That's orthogonal.
Something can be worth zero and still be fucking amazing.
The blog post is talking about the hype in general and about AI in general. It is not just referring to the financial opportunity.
You can use chatGPT for free. Does that mean it's total shit because openAI allowed you to use it for free? No. It's still freaking revolutionary.
> Something can be worth zero and still be fucking amazing.
Gull-wing doors on cars. Both awesome and flawed.
Yeah well this hype comes with a lot of financial investment, which means I get affected when the market crashes.
If people make cool things with their own money (or just don't consume as much of our total capital), and it turns out not as effective as they would like, I would be nice to them.
Yeah the effectiveness of the hype on investment is more important than the effectiveness of the technology. AI isn't the product, the promise of the stock going up is. Buy while you can, the Emperor's New Clothes are getting threadbare.
Sounds like you bought the hype about LLMs without any understanding anything about LLMs and are now upset that the hype train is crashing because it was based on promises that not only wouldn’t but couldn’t be kept.
> hype train is crashing
According to who? Perhaps the people who aren't paying attention. People who use AI frequently and see the rate of progress are still quite hyped.
It makes sense that people who don't believe the (current wave of generative) AI hype aren't using it and those who do are.
It is more probable that people who have used it more have a more realistic and balanced view of their capabilities, based on the experience. Unless their livelihood depends on not having a realistic view of the capabilities.
Agreed. “Hype is always bad” was where I had to stop.
It could lead to good things. Most startups have hype.
Hype is good. Hype is the explosion of exploration that comes in the wake of something new and interesting. We wouldn't be on this website if no one was ever hyped about transistors or networking or programming languages. Myriad people tried myriad ideas each time, and most of them didn't pan out, but here we are with the ideas that stuck. An armchair-naysayer like the author declaring others fools for getting excited and putting in work to explore new ideas is the only true fool involved.
It's sad to see such a terrible comment at the top of the discussion. You start with an ad hominem against the author, assuming they want to "look smart" by writing negatively about hype; you construct a straw man to try to make your point; and you barely touch on any of the points made by them, and when you do, you pick on the weakest one. Shame.
To me, AI hype seems to be the most tangible/real hype in a decade.
Ever since mobile & cloud era at their peaks in 2012 or 2014, we’ve had Crypto, AR, VR, and now AI.
I have some pocket change bitcoin, ethereum, played around for 2 minutes on my dust-gathering Oculus & Vision Pro; but man, oh man! Am I hooked to ChatGpt or what!
It’s truly remarkably useful!
You just couldn't get this type of thing in one click before.
For example, here’s my latest engineering productivity boosting query: “when using a cfg file on the cmd line what does "@" as a prefix do?”
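For what it's worth, the usual answer in many toolchains is that an @-prefixed argument means "read more arguments from this file" (a so-called response file); whether that's what your particular tool does is tool-specific. Python's argparse supports the same convention, as a small sketch (args.cfg here is just an illustrative file):

    import argparse
    from pathlib import Path

    # One argument per line is the argparse default for response files.
    Path("args.cfg").write_text("--threads\n8\n--verbose\n")

    parser = argparse.ArgumentParser(fromfile_prefix_chars="@")
    parser.add_argument("--threads", type=int)
    parser.add_argument("--verbose", action="store_true")

    print(parser.parse_args(["@args.cfg"]))  # Namespace(threads=8, verbose=True)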
It's astonishing how the two camps of LLM believers vs LLM doubters have evolved even though we as people are largely very similar, doing similar work.
Why is it that e.g. you believe LLMs are truly revolutionary, whereas e.g. I think they are not? What are the things you are doing with LLMs day to day that are life changing, which I am not doing? I'm so curious.
When I think of things that would be revolutionary for my job, I imagine: something that could input a description + a few resources, and write all the code, docs, etc for me - creating an application that is correct, maintainable, efficient, and scalable. That would solve 80% of my job. From my trials of LLMs, they are nowhere near that level, and barely pass the "correct" requirement.
Further, the cynic in me wonders what work we can possibly be doing where text generation is revolutionary. Keeping in mind that most of our jobs are ultimately largely pointless anyway, so that implies a limit on the true usefulness of any tool. Why does it matter if I can make a website in 1/10th the time if the website doesn't contribute meaningfully to society?
> I imagine: something that could input a description + a few resources, and write all the code, docs, etc for me
It could be that you’re falling into a complete solution fallacy. LLMs can already be great at working each of these problems. It helps to work on a small piece of these problems. It does take practice and any sufficiently complicated problem will require practice and multiple attempts.
But the more you practice with them, you start getting a feel for it and these things start to eat away at this 80% you’re describing.
It is not self-driving. If anything, software engineering automation is only accessible to those who nerd out at it, the same way using a PC for sending email or programming once was.
A lot of the attention is on being able to run increasingly capable models on machines with fewer resources. But there's not much use in fussing over Gemini 2.5 Pro if you don't already have a pretty good feel for deep interaction with Sonnet or GPT-4o.
It is already impressive and can seriously accelerate software engineering.
But the complete solution fallacy is what the believers are claiming will occur, isn't it? I'm 100% with you that LLMs will make subsets of problems easier. Similar to how great progress in image recognition has been made with other ML techniques. That seems like a very reasonable take. However, that wouldn't be "revolutionary", I don't think. That's not "fire all your developers because most jobs will be replaced by AI in a few years" (a legitimate sentiment shared to me from an AI-hyped colleague).
I think the difference is between people who accept nondeterministic behavior from their computers and those who don’t. If you accept your computer being confidently wrong some unknowable percentage of the time, then LLMs are miraculous and game changing software. If you don’t, then the same LLMs are defective and unreliable toys, not suitable as serious tools.
People have different expectations out of computers, and that accounts for the wildly different views on current AI capabilities.
Perhaps. Then how do you handle the computer being confidently wrong a large proportion of the time? From my experience it's inaccurate in proportion to the significance of the task. So by the time it's writing real code it's more wrong than right. How can you turn that into something useful? I don't think the system around us is configured to handle such an unreliable agent. I don't want things in my life to be less reliable, I want them to be more reliable.
(Also if you exist in an ecosystem where being confidently wrong 70% of the time is acceptable, that's kinda suspect and I'll return to the argument of "useless jobs")
Filters. If you can come up with a problem where incorrect solutions can be filtered out, and you accept that LLM outputs are closer to a correct answer than a random string, then LLMs are a way to get to a correct answer faster than previously possible for a whole class of problems we previously didn't have answer generators for.
And that's just the theory; in practice LLMs are orders of magnitude closer to generating correct answers than anything we previously had.
And then there's the meta aspect of them: they can also act as filters themselves. What is possible if you can come up with filters for almost any problem a human can filter for, even if that filter has a chance of being incorrect? The possibilities are impossible to tell, but to me very exciting/worrying. LLMs really have expanded the realm of what it is possible to do with a computer. And in a much more useful domain than fintech.
As long as it’s right more than random chance, it’s potentially useful - you just have to iterate enough times to reach your desired level of statistical certainty.
If you take the current trend of the cost of inference and assume that’s going to continue for even a few more cycles, then we already have sufficient accuracy in current models to more than satisfy the hype.
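To put a rough number on the iterate-enough-times point, a minimal sketch assuming independent attempts and a reliable way to recognise a correct answer (verification being its own hard problem):

    import math

    # Attempts needed so the chance of at least one correct answer hits a
    # target, given per-attempt accuracy p and a trustworthy verifier.
    def attempts_needed(p: float, target: float = 0.99) -> int:
        return math.ceil(math.log(1 - target) / math.log(1 - p))

    for p in (0.8, 0.5, 0.1):
        print(p, attempts_needed(p))
    # 0.8 3
    # 0.5 7
    # 0.1 44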
I'm not following the statistical argument.
Firstly, something has to verify the work is correct right? Assuming you have a robust way to do this (even with humans coding it's challenging!), at some point the accuracy is so low that it's faster to create it manually than verify many times - a problem I frequently run into with LLM autocomplete and small scale features.
Second, on certain topics the LLM is biased towards the wrong answer and is further biased by previous wrong reasoning if it's building off itself. It becomes less likely that the LLM will choose the right method. Without strong guidance it will iterate itself to garbage, as we see with vibe coding shenanigans. How would you iterate on an entire application created by LLM, if any individual step it takes is likely to be wrong?
Third, I reckon it's just plain inefficient to iterate many times to get something we humans could've gotten correct in 1 or 2 tries. Many people seem to forget the environmental impact from running AI models. Personally I think we need to be doing less of everything, not producing more stuff at an increasing rate (even if the underlying technology gets incrementally more efficient).
Now maybe these things are solved by future models, in which case I will be more excited then and only then. It does seem like an open question whether this technology will keep scaling to where we hope it will be.
Your example is a better search engine. The AI hype however is the promise that it will be smarter (not just more knowledgeable) than humans and replace all jobs.
And it isn't on the way there. Just today, a leading state-of-the-art model, that supposedly passed all the most difficult math entry exams and whatever they "benchmark", reasoned with the assumption of "60 days in January". It would simply assume that and draw conclusions, as if that were normal. It also wasn't able to correctly fill out all possible scores in a two-player game with four moves and three rules that I made up. It would get them wrong over and over.
It's not a better search engine, it's qualitatively different to search. An LLM composes its answers based on what you ask it. Search returns pre-existing texts to you.
There's three types of people w.r.t hype: smart people who resist hype, smart people who want to profit off of it, and dumb people who like it.
The first type of person already agrees with you. The second type knows but doesn't care. The third isn't going to read this article.
Fourth type: smart people who like hype because hype is fun and having fun is good.
That's the third type actually. But for some reason GP decided to qualify each type as either smart or dumb. Here it is, put better: there are three types of people: those who resist hype, those who profit off of it, and those who enjoy it.
Exactly, and since it's good to profit, and it's good to have fun, surely it's the smart people who are doing that, and the dumb people who are resisting.
I certainly feel dumb for dismissing crypto as only an effective store of value with some minor improvements over the status quo (and some downsides) without considering the implications.
I imagine that the people who profit off of it enjoy it too, though? So perhaps we have a “hype enjoyers” superset that includes “profiteers” and “suckers”, but then we need “neutrals” or something to describe those who enjoy it without making or losing money. And then from there …
Could also be the second type in denial about themselves too
The really smart people are those who figure out which part of the hype is real and which is exaggerated.
No, the really smart people, and I mean really really smart, are the ones who figured out how to teach rocks to think.
I’ll do whatever shit the industry wants me to do, I don’t particularly care if it’s dumb. I mean, it doesn’t FEEL great to work on dumb things, but at the end of the day, I’m around to help implement whatever the paycheck writer wants to see. I genuinely don’t mean that negatively either, I feel like I’m just describing… employment?
Software just isn’t a core part of my identity. I like building it, I like most of the other people who write it, and I like most/some of the people paying me to build it. When I’m done for the day, I very much stop thinking about it (not counting shower thoughts and whatnot on deeper problems)
So what if I end up fixing slop code from AI hype in a couple years? I have been cleaning up slop code from other people for 15 years. I am painfully aware of slop I left for others to deal with too (sorry).
So yeah anyway, your comment resonated. Hype is annoying, but if it sticks around and becomes dominant, my point is, whatever, okay, new thing to learn.
Yeah, that's the mature perspective. Having a job is always going to involve some level of bullshit and you have to make peace with it.
A winner of the Nobel Prize in Economics, Paul Krugman wrote in 1998, “The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.”
https://www.laphamsquarterly.org/revolutions/miscellany/paul...
The amount of misuse this quote gets is as absurd as it is tiresome. People were using it to defend NFTs too. Someone’s opinion on one thing says nothing about someone else’s opinion about a different thing.
Yeah, the Krugman quote gets pulled out around literally everything. Bitcoin/Blockchain/NFTs, Metaverse, AR/VR, I'm sure some other things I was forgetting.
For every Krugman "the internet will be as useful as the fax machine" quote, there's a corresponding quote like Gartner in 2022 - "Gartner Predicts 25% of People Will Spend At Least One Hour Per Day in the Metaverse by 2026."[0]
[0] https://www.gartner.com/en/newsroom/press-releases/2022-02-0...
The point is that the upside of investing in a technology before everyone else does is huge.
The downside is whatever you end up investing it in.
Hype is then technologies fighting for that slice of the pie.
The upside to current AI is that we've solved natural language interaction with computers. We have the Star Trek computer you can talk to. You probably don't want to because natural language is naturally terrible at exactness which is why humans have endless meetings to discuss the next meeting about next quarters targets.
This was science fiction in 2020. Today I can do it on a top end consumer grade GPU.
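As a sketch of what that looks like in practice, assuming the Hugging Face transformers library; the model name below is a placeholder for whatever instruct model fits in your VRAM, not a recommendation:

    from transformers import pipeline

    # Minimal local-inference sketch on a single consumer GPU.
    chat = pipeline(
        "text-generation",
        model="some-org/some-8b-instruct-model",  # placeholder name
        device_map="auto",                        # place weights on the GPU
    )
    out = chat("Summarise next quarter's targets in one sentence.", max_new_tokens=64)
    print(out[0]["generated_text"])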
By comparison blockchain is bullshit - to find out why just ask what the clearance rate is of the bitcoin network. You won't change the world at 7 transactions a second. If someone manages to build a block chain that can clear 10 million transactions a second you'd have something on par with the current AI hype train because it would be a legitimately useful finance tool and it will be worth investing billions in.
> We have the Star Trek computer you can talk to.
We certainly do not. I haven’t watched all of Star Trek, but from what I did the computer always understood the question and either executed it perfectly or was unable to comply for some reason outside of its control and broke down exactly what it was so the characters could fix it or find an alternative. Characters didn’t regularly have to verify the computer’s work, they just knew it was correct.
You're almost describing the lightning network, which depending on where you get your numbers can handle around 10-40x the transactions per second of Visa or MasterCard, neither of which are above 70k in anything I found with a quick search.
Granted it sounds like the lightning network has other issues.
He’s 100 percent correct and if you disagree you underestimate the effects of the Fax Machine on the economy and over-estimate the effects of the internet of 2005.
https://www.bloomberg.com/news/articles/2021-05-24/paul-krug...
> most people have nothing to say to each other!
Now to be fair. He wasn't wrong in all his claims.
I guess it works if you replace the internet with email and if people don't include spammers.
Even though the Internet has displaced fax machines, fax machines enabled instant, worldwide, legal (signing contracts and such) communication.
That's not insignificant. They're stupid now because we have the Internet but they were a major leap in conducting business.
> In technology, AI is currently the new big hype. Before AI, it was "The Cloud", which unfortunately has still not settled, but are now also being interwoven with AI.
Cloud computing is a multi-billion dollar industry and it underpins many of the largest internet companies out there. I fail to see how that's hype.
The hype is that “cloud” is and makes everything magically better / easier / more secure / more efficient etc. Many companies jumped head first into large-scale cloud migrations and buildouts without any thought about where and how “cloud” makes sense, what the risks / downsides / true costs are, etc. Just like they are doing now with AI.
> I fail to see how that's hype.
Server management, data centers, and related businesses existed ages ago. What's hyped is this "cloud": people are made to believe that the cloud is something other than a server or servers.
It's an enhancement sure, but not something completely new or different.
> it underpins many of the largest internet companies out there
And most of them would be fine without it. Rent some racks. Run some servers. That's exactly why it's hype.
It is a service layer that enabled organizational efficiency.
Sure there's plenty of hype. But it's justified to some extent at least. LLMs are one of the biggest advances in technology in human history. In computing the big ones are:
* creation of computers
* personal computers
* the Internet and world wide web
* LLMs
So the hype is at some level entirely warranted - it's a revolutionary technology with real impact. As opposed to, for example, the hype around crypto or NFTs or blockchain or garbage like that.
If the personal computer counts, do mobile phones count?
I would certainly think so, except all those high quality sensors have been hindered by app stores and subpar apps, imo :-(
I would like to hear some uplifting stories about creative things people do with their phones, rather than consume media
Speaking as someone who just rebuked another commenter for daring to suggest Dijkstra’s algorithm should be on that list… yeah you may have a point. Mobile phones probably belong in the S-Tier with the Internet.
In 2012-2014 I lived in Thailand right as the Smartphone Wave was cresting. I saw people who had never owned a computer leapfrog laptops and get online for the first time (on their own terms) with a cheap smartphone. And sure, many of them just went straight to instagram, but also a generation of working class Thai nerds got access to Wikipedia at their leisure, and that has no doubt led to uncountable good things.
> I would like to hear some uplifting stories about creative things people do with their phones, rather than consume media
Ever since the iPhone X introduced infrared face tracking I’ve been using them in live theatre shows to drive digital characters. Here’s me and my friend Jordan playing 3 “AI Gods” during the Baltimore Rock Opera Society’s 2019 show Space Kumite using an iPhone X for facial motion capture: https://youtu.be/wSYWC1GCZA8?si=RtWDODFwHxEnaB9u
Thank you, great example of creativity with a phone!
I used to download and play with the music apps, but I didn't like how the music was "stuck" there
Miniaturization is a common trend.
Once modern computers appeared, smartphones became just a matter of time.
One could say that successful miniaturization is one of the tells that something is more likely to stick around.
I almost think deep learning and mobile phones had an analogous and surprising applicability
Deep learning failed to solve self-driving. In 2012 people said self driving would probably be ubiquitous by 2018, and it definitely would be by 2020
Instead we got chat bots in 2022 - that turned out to be the killer app, certainly the most widely adopted one
Likewise, instead of mobile phones being used as an aid in the physical world, they became the world
For example, the media they transmit seems to be how elections are won and lost now. It’s the device by which people form impressions of huge parts of the world
the majority of global internet users access the internet via a phone.
Pretty arbitrary list, no? You could replace "LLMs" with various technologies that seem important (particularly at the beginning of their existence before their true value is determined). Why not: C, cloud computing, neural networks, Dijkstra's algorithm, WiFi, FFT, etc?
5 years after Dijkstra’s algorithm was invented, could non-tech people look back and see a huge impact on their everyday lives? No.
Of the things in your list, WiFi is the closest to rising to this level, but even WiFi can’t claim to be as big of a deal as the Internet (unless you’re being extremely cute).
Your list is all A- and B-Tier stuff. Definitely important, but not on the level of like, the personal computer. LLMs are likely on the S-Tier with the other things GP mentioned. Within a month of its release ChatGPT had 100 million users. Everyone in my life can tell me a way they’ve been affected by LLMs.
We can argue about whether LLMs are/will be good for humanity. We cannot argue about whether they’re a big deal, they are undeniable.
You raise good points and I want to dig into it more.
One counterpoint is that LLMs are still young. I think it's preemptive to proclaim now that LLMs are world-changing when we really don't know how they will affect us in the future. For e.g. the internet, it's undeniable that it changed the world because now everything is so much more interconnected than it was 30 years ago. The internet has become a foundational element of technology. Will LLMs do the same? Surely we don't know.
Second counterpoint: "everyone in my life can tell me a way they've been affected by LLMs". Can they? How are they affected, for real? Everyone is certainly talking about them, but does that necessarily mean the impact is large? Honestly, for me life is basically the same and I'm a developer! Still go to the same job, SDLC is largely the same, my hobbies are the same, eat the same food, etc. The important day to day is the same. Except that I code a little faster, have a fancier search / problem solving tool, and every now and then I see a crappy AI gen image. Compare to e.g. the internet, which has undoubtedly changed day-to-day life by drastically reducing physical interaction between people and systems.
Smartphones deserve to be on that list as a technology which fundamentally affected the way we interact, and once you realise that you also understand how important WiFi is.
I'd say it was closer to 25 years from the invention of the personal computer to the point it made an impact on everyday life.
wifi is radio though
You could replace LLMs with Excel. Shit, even today, people are probably just using LLMs to populate excel sheets for consumption later lol.
It's a matter of opinion, who cares what I think - if your list includes those things as fundamental changes that redefined society and technology, then that's valid.
All I am saying is that LLMs are amongst the most revolutionary changes that break new ground and completely change the world - in my personal opinion.
LLMs could be one of the most revolutionary changes. The problem is that LLMs as a phenomenon are mostly wishful thinking and an acceptance of downgraded quality and reliability.
Sometimes they do useful things. But the gap between "sometimes" and "reliably" is not trivial.
It's honestly rather culty. I can see the hope that mistakes in today's vibe coding will be fixed by future vibe coding, which will be better in every way.
But it's not a given that's going to happen. Not on the evidence so far.
Yeah I guess my point was there's a lot of opinion involved, especially at the moment. Which can play into the hype cycle. Full disclaimer I am an LLM semi-doubter so of course my opinion skews in that direction. We should see what the future evidence shows for the value of LLMs. Part of what the article touched on is historically the evidence often doesn't back up the hype, hence their disappointment.
LLMs are at this point almost 8 years old (dating from the Attention is All You Need paper). If it's truly a revolutionary technology, you should be able to point me to a company leveraging LLMs to make absolute bank, leaving all of its non-LLM-based competitors in the dust.
But instead, what I see in this thread in defense of the revolutionary ideals of LLMs, is how good future LLMs are going to be. That's not a sign of a revolutionary technology, that's a sign of hype. If you want to convince me otherwise, point to how good they are right now.
Large Language Models are not 8 years old. GPT-3 or arguably GPT-2 is the first LLM.
Moreover, I can think of 0 revolutionary technologies that did what you've said in such timelines. The Internet, Smartphones, The Steam Engine - the idea that revolutionary technology is created and everything changes in an instant is bizarre fiction.
And how well would you know really? Not everyone using LLMs internally is screaming about it from the rooftops.
Steam locomotives: the first practical steam locomotive was Stephenson's Rocket in 1829, although maybe you want to count from the earlier locomotive developed for the Stockton and Darlington in 1825. In either case, by 1830 there was already a successful company producing steam locomotives, and everyone pursuing the building of railroads was using, or imminently planning to convert to using, steam locomotives for tractive power.
Naval technology can be even more stark. HMS Dreadnought was a revolution in terms of battleship design; all the major naval countries were building dreadnoughts before she was completed (actually, a few started before she was even laid down!).
That's a feature you see a lot of revolutionary technologies experience: even the earliest incarnations have enough "wow" factor to push people to using them and improving from the get-go.
>That's a feature you see a lot of revolutionary technologies experience: even the earliest incarnations have enough "wow" factor to push people to using them and improving from the get-go.
ChatGPT alone had over 400 million weekly active users in February (and seems to have hit 500M) and was the 6th most visited site in the world last month.
This is software that was introduced to the public in Nov of 2022. To put it frankly, it's the fastest growing software service ever.
Google, Youtube, Facebook, Instagram and Twitter were the only sites with more visits. It's close enough to Twitter that it may well overtake it this month.
It's not up for debate. LLMs absolutely have that "wow" factor.
https://www.google.com/amp/s/www.cnbc.com/amp/2025/02/20/ope...
https://x.com/Similarweb/status/1909544985629721070
https://www.digitaltrends.com/computing/viral-trend-drives-c...
I don't know who's making money with it, but are you using Google search more than you're using an LLM at this point? Most people I talk to aren't.
* creation of computers
* personal computers
* the Internet and world wide web
Full stop.
There, I fixed the list in a way that will stand the test of time.
Just because you don't like LLMs doesn't make them not revolutionary with regards to how people use computers.
I like LLMs, I am rooting for them.
They will be revolutionary and join the list once the technology stands the test of time.
There you go. A simple test that cannot be rushed by more datacenters.
They've been around for years now. Obviously not as long as the others, but years nonetheless. What makes you think the years so far may not continue into the future?
The only pattern I can see is it being potentially unsustainable, but I find that hard to believe, considering I can run an LLM that is more capable than SOTA from two years ago on a single box in my living room.
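For what it's worth, that's not an exotic setup. Here is a minimal sketch of local inference with the Hugging Face transformers library, assuming an open-weights model small enough to fit on a home machine; the model id and prompt are purely illustrative, not a recommendation:

    # Minimal local-inference sketch (assumes transformers + accelerate are installed
    # and an open-weights model fits in local GPU/CPU memory; model id is illustrative).
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-7B-Instruct",  # assumption: any smallish open model you can fit
        device_map="auto",                 # let accelerate place layers on GPU/CPU
    )

    prompt = "In two sentences, why has local LLM inference gotten cheaper?"
    result = generator(prompt, max_new_tokens=100, do_sample=False)
    print(result[0]["generated_text"])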
> What makes you think the years so far may not continue into the future?
The same thing that makes me think that it could: I do not have a crystal ball.
It's a simple test. Have patience. With your tone, even if you are right and it stands the test of time, you will look like a tool.
It's those with your attitude, in a state of constant smug denial, who look like tools right now. We'll see how it goes in a few years, but I doubt that will change. The constant goalpost-shifting as LLMs keep getting better and better is becoming embarrassing, and even if they stopped getting better from now on, which is certainly not the case, their impact on communications is on the same order of magnitude as that of the WWW. The curmudgeons I work with are too stubborn to realize that LLMs have moved past GPT-3, and most of the haters on HN sound the same.
I am ok looking like a tool right now.
That is the goal of a hype trend. To make even healthy skeptics feel like they are missing a great thing.
I am actually an early adopter though. I just don't like bragging around.
Researchers know that most "moved goalposts" are actually great challenges. It would be sad if they saw it as pessimism.
Think of it as a more creative way to engage in the hype.
Skeptics are more valuable than blind believers. You want a blind believer testing your airbag or a skeptic that will move some goal posts to try and sniff out any bugs?
> A simple test that cannot be rushed by more datacenters.
So you're saying we need more datacenters ~ Jensen Huang, probably
GPUs might actually make the list before LLMs.
One could argue that it is the fourth item on that list, since it is a technology that has been standing for a long time now.
There are so many items I'd put on the list before LLMs, tbh. It's not even worth the time having the conversation. My quality of life went up much more in the dial-up to DSL transition; I wouldn't say DSL to fibre was as much of a jump, e.g.
Anywho, that quality-of-life jump to DSL ultimately led to the enshittification of the internet I knew and loved, and now I'm stuck talking about LLMs I wish didn't exist - I would trade the conveniences of today for the internet of the 2000s any day. I genuinely view us as being on a downward QOL trajectory re: technology, even if it feels more novel and useful in the moment. Every new novelty in tech that makes life easier for us seems to actually further degrade the human experience of the internet. I don't know how to reconcile this trend with LLMs being "great".
It really feels like the more "good" we do to "progress" information technology, the worse the entire field feels. Much shallower, less personal, and dumber.
Sorry for the rant.
Defining what a breakthrough is is hard.
A "gaming PC" shows how important GPUs are. Ordinary people know something in there ticks differently, specialists know, industries know. It matters, and it has mattered for a long time now.
"Broadband" was hype, and temporary. Once people understood that the speed is the thing, not the name, it stopped mattering. Now every couple of years there's a mini "new broadband tech", but it's all the same. Of course the tech is important. But revolutionary? I don't know. It boils down to being part of what is generally defined as "the internet".
I think the easiest way to define it would be to take it away for a bit and see how the world changes.
If we sent people back to dial up, the world would probably stop working.
If we took away LLMs, GPUs, the base world will probably keep chugging along outside of financial markets. Just IMO.
When dial-up was initially the new thing, you could also take that away and the world would keep chugging; same for phone lines, same for electricity. But take away something the world has had time to become dependent on, and suddenly the world has a harder time.
The world's dependency on LLM tools just hasn't had time to develop, and that doesn't mean it won't. Most people here are likely on the bleeding edge of utilizing them. Most people not paying attention will just use it like they use Google, or not at all, until tools are built and they don't even realize they are using an LLM, or using a service that is dependent on one.
Most people here, unless researchers actually browse HN (unlikely, that would be a dumb move), are the equivalent of bleeding-edge Notepad users when it comes to LLMs.
If you don't train LLMs, you are just a user. I am sorry, that is the reality and it is cruel (for now).
If "prompt engineering" becomes a thing, it means the tech is less impressive than it declares itself to be.
It should be a natural-language-based interface. I will trust it and learn natural language instead; the worst thing that can happen is I learn to communicate better (actually a good thing).
Take it away, leave it. Does it really matter in the context I presented?
So, the test of time then? Sounds good to me!
Hey, COBOL stood the test of time. In software, it was revolutionary.
Agreed, but it's also so very relative.
If nobody was paying attention to what the foundational companies have been doing for the past few years, I'm pretty sure I'd be a wild advocate singing their praises on these and other forums.
But since everyone is extremely into it, I just kind of watch and try to measure my expectations.
What makes LLMs especially impressive is how skillfully they read text. That’s more interesting than text generation, since a lot of writing is formulaic and since they do not do the non-formulaic writing well at all. But I don’t think anyone truly knows why they are so adept at reading text.
> LLMs are one of the biggest advances in technology in human history.
See, that’s what the article is about.
Saying LLMs are on the same level as steam power, the computer, the internet, airplanes, etc., when the technology hasn't even been around long enough to have real impact, and all I read is extrapolation about how everything will be based on it "in the future" - that's the definition of hype.
LLMs do about half of my work for me. Today I spent most of my time interacting with o3 and 2.5-pro and I have accomplished what would take me 3 days to accomplish in the past. That’s real impact.
LLMs can’t do anything for my work besides being an alternative to Google, and even then I need to double check on another source.
You get a lot of positive reinforcement from positive opinions online, but the people who don’t depend on this to work won’t be vocal about it.
Missing from the list is the daily march of improved silicon performance through 50+ years.
> In technology, AI is currently the new big hype. Before AI, it was "The Cloud", which unfortunately has still not settled, but are now also being interwoven with AI.
I envy being able to write a statement like this without mentioning The Blockchain.
The Cloud has joined The Information Superhighway as boring, foundational technology. Blockchain started out as hype and is still hype. AI/LLMs already provide infinitely more value than blockchain (which, to be fair, remains close to zero).
Blockchain provides more utility to people living in sanctioned countries, countries with unstable currencies and to criminals than AI ever could.
I typically disagree with the practice of hating things just because they are trending; this is the stuff hipsters are made of.
But there is such a thing as negative hype. A seller of AI models telling the world he's not hiring engineers anymore (and that they too can cut their workforce) because his models are that good would be negative hype.
It's not as bad as battery hype.
Remember all those articles about some minor advance in surface chemistry which was then hyped into Trillion Dollar Industry Real Soon Now? They usually appeared in one of Nature's off-brand journals, or just arXiv, not in Chemical Engineering News or IEEE Trans. on Power Engineering. Such articles usually lacked the usual performance numbers (Wh/L, Wh/kg, and Wh/$).
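For reference, those three numbers are trivial to derive once an article actually publishes cell specs. A rough sketch with made-up (but plausible) figures for a hypothetical 18650-style cell - every value below is an assumption for illustration, not measured data:

    # Illustrative only: what Wh/L, Wh/kg and Wh/$ mean for a hypothetical 18650-style cell.
    import math

    voltage_v   = 3.6    # nominal voltage (assumed)
    capacity_ah = 3.4    # rated capacity (assumed)
    mass_kg     = 0.048  # cell mass (assumed)
    price_usd   = 4.00   # unit price (assumed)
    diameter_m, length_m = 0.018, 0.065  # standard 18650 dimensions

    energy_wh = voltage_v * capacity_ah                            # ~12.2 Wh
    volume_l  = math.pi * (diameter_m / 2) ** 2 * length_m * 1000  # m^3 -> litres

    print(f"{energy_wh / volume_l:.0f} Wh/L")   # volumetric energy density (~740)
    print(f"{energy_wh / mass_kg:.0f} Wh/kg")   # gravimetric energy density (~255)
    print(f"{energy_wh / price_usd:.2f} Wh/$")  # energy per dollar (~3.06)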
Then there's Javascript framework hype, which makes everyone run very hard to stay in the same place.
AI is at least making rapid progress. It's been less than three years since ChatGPT came out. Having lived and worked through the "AI Winter" (1984-2005), this is an improvement. The main problem now remains "hallucinations", or worse, "agentic" systems which act on hallucinations.
> Nobody wants to talk to an AI when they need support. We all HATE that! It is bad enough that when you need service and support you end up talking to someone on the other side of the planet who's using some kind of answer sheet with absolutely no clue on how to really help you.
This is true from my personal experience. I switched my fiber provider because, with my previous provider, I was never able to talk to a human.
I've learned to treat AI hype the same way I think about sports. Just ignore it. Sure I can name the popular models of the day in the same way I can name the Dallas Cowboys, but none of it matters and none of it affects me.
They should have let an AI check their spelling and grammar, maybe they wouldn’t have used “loose” instead of “lose” multiple times.
Grammarly is among the best applications of LLMs I have seen. I'm glad to be paying for it. It even detects and modifies "typical AI phrases," which is ironic.
“AI is misleading, there’s no actual intelligence.”
Oh wow, you figured out “Artificial Intelligence” isn’t literal. You should tell the rest of the planet. Maybe we should rename it “Statistical Pattern Prediction Machines That Are Better at Your Job Than You Are.” It’s a mouthful, but more honest.
Kind of a bad article (IMO) when it sideswipes cloud computing.
The "cloud" was actually useful, and helped to scale so many companies that could bring their products to millions of people quickly and without too many issues, and with good reliability.
Blockchain, hyper enabled gambling, cryptocurrency, and jamming "AI" into everything are bad though.
hype is short for hyperbole, and if there is any hyperbole more excessive than claiming that LLMs are vaguely equivalent to people by calling them "AI", it would have to be a claim of godhood or similar.
I tend to agree with the author, but I think the real problem with hype is the opportunity cost. Somewhere along the way we bought into this idea that "the promise of X" is worth more than "the reality of X".
The longer the hype goes on - which is to say the longer it takes to demonstrate the hype is actually reality - the more people become more heavily invested in it.
If the hype never materialises, you basically build a larger and larger black swan when the crash arrives. The people who win are the few who got out early enough, and everyone else who ignored the hype.
If the hype does eventuate, the number of people you're now competing with in a new market is proportional to the length of time it took to eventuate, because for all those people that were onboard, some of them would enter the space in competition, not just as consumers. Again, the only outsized winners are those who got out early enough. The rest are now just working BAU. Like everyone else who ignored the hype.
In both cases, you're just as good or better off ignoring the hype because the chances of you winning big are tiny (unless you're a billionaire, but in that case you were already winning anyway).
If the hype materializes, it can still change course.
Learned some classical ASP to join the web dev hype? Well, tough luck. The hype was good but materialized in another vessel. If you learned enough fundamentals, you could still jump ship.
But what are the fundamentals for LLM? Hype rarely reveals them right away.
For LLMs, tough luck. You can't learn fundamentals with your toy PC yet. You need to be rich and have server grade hardware. Also, you cannot buy server grade hardware because it is out of stock.
But that's ok, few people will learn it and I will watch from the sidelines until an accessible opportunity reveals itself.
If it doesn't get accessible, well, it will probably be some niche tech like COBOL. Pays awesomely, but not my jam. That is winning for some people.
So many possible scenarios. You or someone else will try to expand and deflect on a single one, won't you? I know you will. No crystal ball, just intuition.
So, I will fucking learn what I can learn, what I like to learn, and I will hope I can survive on what it pays. That is winning in my books (at least on such a narrow economic definition of it).
I like LLMs, but the good part is in training them. Just using them is kind of... being a user. I can do that without pretending that I am truly learning the tech.
You are presenting an economic angle. I won't ignore it, but there is more to it for me.
Premise, reality, etc. So malleable.
I agree that situations are much more complex than an economic analysis. And I'm completely onboard with doing the things that take your interest. I do agree with most of the points you're making, and for some I can clarify the direction of my thoughts:
> But what are the fundamentals for LLM? Hype rarely reveals them right away.
I think this is true, and that it is actually a core component of hype: it is not interested in fundamentals. This is true for hype around unproven things, but is also true for hype around proven things (e.g. the iPhone). When I compare the hype around AI to Crypto, I see AI as having a much stronger case for "there are fundamentals here". However, when I compare them BOTH to the iPhone, it's a different story.
The iPhone already existed when its hype intensified. In this way the hype doesn't really need to "materialise", the thing is already proven, and the hype is the world exploring a thing that exists, not a thing that is promised (e.g. Crypto) or partially-proven (e.g. AI).
> If it doesn't get accessible, well, it will probably be some niche tech like COBOL. Pays awesomely, but not my jam. That is winning for some people.
I agree, and this is not dissimilar from:
> So, I will fucking learn what I can learn, what I like to learn, and I will hope I can survive on what it pays. That is winning in my books (at least on such a narrow economic definition of it).
This sounds to me a lot like the acting profession. People do it because they love it, even though getting gigs is hard, stable income is hard, callbacks are rare, there are skill and relationship and economic ceilings everywhere, but breaking through implies large wins in all of those domains and more.
I don't at all begrudge people for making low-odds bets - I have a number of them myself, just in other places - but I think assessing your bets is important.
> Premise, reality, etc. So malleable.
This is where I pretty strongly disagree. [Assuming you mean "promise"] promises are extremely malleable, but that doesn't make them valuable. In fact I would say it reduces their value, and not realising this is exactly what grifters and dodgy salespeople (etc) hope for.
Reality is not malleable. This is the reason many people who have not bought into the hype keep asking for examples of where AI is actually useful. They're not necessarily trying to suggest that it won't ever reach the heights the hype claims (though some certainly are suggesting that). But hype is presented as value NOW, and while reality may change over time, I think a lot of people RIGHT NOW see AI as either not helpful to them, or detrimental to industries broadly (visual art in particular).
It's worth noting though that these two views are somewhat contradictory: we say that it is detrimental right now but also that it is not useful right now. It can be both but the argument is naturally weak.
For my part: I'm starting to find AI enormously useful for search and self-instruction purposes. It's essentially replaced StackOverflow and related sites for me.
But that reality for me doesn't match what the hype says is currently happening. So I consider it a high risk area until the situation changes. The fact that it IS hyped so much is a counter-signal to me that suggests I shouldn't believe everything that is being said until I start seeing strong evidence.
I meant "premise", and reality.
In the economic competition frame you initially used, they are not that strong.
Hmm I think I see what you mean - and that's fair.
I guess I'm a bit too conservative to trust I'd get out soon enough (in financial investment, or work time spent) if the hype doesn't materialise. And if it does, there's always time to learn a new domain, etc.
Fancy me pushing into the Late Majority of the bell curve. Must be getting old.
"Less competent employees are often the ones promoting AI."
I don't understand how this post is HN front page…
I passively hate haters, especially AI haters.
I find the "only 10% is real" part interesting. I'm starting to notice two distinct camps emerging, I think: transformer architectures are just (bad) search, versus they are the first commercial-grade compression of language into real vectors. The trap, I think, is that the way each camp discusses things is somewhat subtle.
I worked for a very large industrial some years ago. Our leaders were so ate up with the VR hype that they greenlit a VR training experience for field technicians wherein the technician would wear a VR headset and then use a virtual iPad to diagnose some of our heavy equipment. Someone asked why they couldn’t just use an iPad IRL to learn. There was no rational justification.
Shortly after that web 3.0 took off and I started to hear that we were going to use blockchain to track the maintenance of our heavy equipment.
Now they won’t shut up about AI.
Post is from August of last year. 2024-08-21
“Modern tech hype mirrors medieval charlatans.”
Ah, yes, the classic “this thing I don’t like is medieval alchemy” comparison. Because if you can’t refute the tools, just compare it to snake oil. It’s cute how this guy thinks he’s the only one who’s noticed not every AI startup is curing cancer. The tech world has always had grifters. That doesn’t mean the tools themselves are fake—it means some people are dumb and others are greedy, which is kind of how humanity works.
Medieval alchemy gave us modern chemistry, just like ancient and medieval astrology gave us modern astronomy.
The Enlightenment age brought about an increased interest in occultism right next to an increased interest in science, often by the same people.
When a new knowledge area is starting out, it can be pretty hard to discriminate between the real and the imaginary.
People who are so reactive and sensitive to "hype" and "clickbait" have something of a mental illness.
“Hype is harmful and deceptive.”
Congratulations on identifying marketing exists. What’s next? You gonna blow the whistle on toothpaste commercials? Of course hype is driven by profit. That’s literally the point of business. But you know what else was hyped once? The internet. And electricity. And antibiotics. Being annoyed that people are excited about tech is not a worldview—it's a vibe. A really boring one.
I believed crypto was a scam and missed a lifetime fortune, and I'm not gonna believe what others say about AI. I'm all in on AI.
How delightfully contrarian.
find the good, avoid the bad
anti-hype hype
The reason for this AI hype is how bullshit Western economies have become. I write an AI email which you summarise and we can both be happy that we contributed to the GDP through spending tokens.
In the meantime, nothing tangible has been produced. We’re just hyping ourselves over selling snake oil faster.
Bookmark this for a good laugh in a few years.
This strikes me as curmudgeonly and unnecessarily contrarian.
While it's true that investors, entrepreneurs, corporations, etc. have a vested interest in AI to the tune of trillions of dollars, the impulse to dismiss this as 90% hype (as the author does) is insane.
We're only three years into this, and we have:
- LLMs with grad student-level competency in everything
- Multimodality with complex understanding of photography and technical documents
- Image generators that can generate high-quality photos, in any style, with just a text description
- Song generators that make pretty decent music and video generators that aren't half bad
- Excellent developer tooling & autocomplete; very competent code generation
This is still early and the foundations are still being laid. Imagine where we'll be in 10 years, assuming even a linear growth rate in capabilities.
Think of what the internet is today, and its permanence in everything, and where it was just 30 years ago.
By all means, resist the hype - but don't go so far in the other direction that your head is in the sand.
Why would we assume linear growth in capabilities and not a logarithmic growth rate? It seems time and time again, it gets harder and harder to make progress.
I think back to using Dragon Natural Dictation in 1998, there seemed to be exponential promise and a ton of excitement in my young mind. But the reality was more logarithmic improvements so it is finally pretty good 25 years later.
Combine an exponential growth in investments (that is inherent to our economy) with a logarithmic return in capabilities, and you get a linear increase in capabilities.
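Taken literally, that's just a logarithm composed with an exponential. A one-line sketch, assuming capability C scales logarithmically with cumulative investment I and investment grows exponentially over time:

    C(t) = a \log I(t), \qquad I(t) = I_0 e^{kt} \;\Rightarrow\; C(t) = a \log I_0 + a k t

i.e., exponentially growing spending pushed through a logarithmic return curve comes out linear in time, which is the claim above (under those two assumptions).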
So if you want a baby in one month, invest in 9 mothers?
Pretty much all your claims are false, in my experience:
- LLMs do not have grad student-level competency in anything; they often make elementary mistakes in very basic reasoning.
- Image generators produce spooky pictures with garbled text
- Song generators produce nothing remarkable
- LLM code generation is still sloppy and prone to many blindspots
Sorry, these claims are just not true. AI generations in these categories are impressive on release, but blatantly generic, recognizable, predictable and boring after I have seen about 100 or so. Also, if you want to put them to use to replace "real work" outside of the ordinary/pretrained, you hit limitations quickly.
The scaling laws of the Internet were clear from the start.
I'll be hyped for an AI that can do my dishes / fold and put away laundry / clean my tub
Rather, let's look for things in tech that were never hyped but are very useful.
If hype is bad, then there must be something that is good and yet never hyped to such proportions. Something unhypeable.
Otherwise it's all crap, isn't it? It can't be like that.
Probably these things exist, but if they are truly unhypeable, only a select few will ever know of it, and their economic and societal impact will remain negligible.
I know of a couple used by millions of people who are unaware they are actually a thing.
You can measure hype by how many people are talking about an idea space. Write an article about "X hype is bad" and yup, you're talking about the idea space i.e. you're participating in the hype.
As a Linux user, I do think there's a bit of an echo chamber here, which leads to groupthink such as this article. And it's ironic, because a lot of AI has Linux as its underpinning.
The amount of AI boosters and defenders wading in here to bash anyone with a less-than-glowing perspective of the technology is...disappointing. If the technology is truly as revolutionary as you proclaim it to be, then why are you here defending it instead of changing the world with it, as you claim it can? Stop trying to convince detractors with empty arguments, and show us the actual results you claim to have.
> Hype is always bad.
As someone who actually has to implement stuff and support it thereafter, rather than just bolt it onto my resume and ride the hypetrain to equityville, I wholeheartedly agree with OP's core message here. After web2.0 devolved into walled gardens, closed APIs, and gargantuan surveillance apparatuses designed to serve advertisements in a more precise way than actual cruise missiles, I became soured on the very field I am also fiercely passionate about (IT).
The modern technology field is exclusively hype-oriented. There's something new every year you simply must adopt and become an expert in, or you'll lose your job. The ROI is irrelevant (namely because it rarely exists for most organizations in the early adopter phase, if ever), the functionality is irrelevant, the use case is especially irrelevant. It's "new", it's shiny, and you simply must have it to be a "modern" business.
Hype is meant to be a direct replacement for objectivity. It warps math and statistics to justify its necessity (like those cloud migration calculators every vendor likes to point to, in order to justify a wholesale migration off your present estate), the salesfolk strong-arm your leadership into adoption regardless of the advice of their internal architects and engineers, and C-Suites can recite dozens of brands in the Gartner "Leader" quadrant for any given technology while simultaneously having no clue what that technology actually does or is used for. It's all hype.
And an economy built on the hype-cycle has very real, immediate consequences for the average person. It raises energy rates (https://www.usatoday.com/story/business/energy-resource/2025...) and takes water from communities not provisioned for such large scale industrial use (https://www.11alive.com/article/news/investigations/11alive-...), both of which harm locals while the companies skirt by without paying for their fair share of consumption (https://eelp.law.harvard.edu/extracting-profits-from-the-pub...). That doesn't even get into the grifts involved with IP, copyright, or elimination of well-paying jobs (https://green.spacedino.net/the-final-grift/), or the boom-bust cycles that often saddle consumers and taxpayers with the steep losses (both monetary and jobs) incurred once the early investors have sailed off into the sunset with their cartoonishly-large sacks of money and a new superyacht to show for their efforts.
The most immediate and pressing concern for the AI hype is the squandering of finite resources (fresh water, land, and energy chief among them) to train and operate these models on a highly speculative assumption that this revolution in AI will finally be the one that brings humanity into the future, cures all disease, and enables us all to live a life of leisure (rather than just the offensively rich) while waiting for immortality to arrive so we can explore the cosmos. To make this po-faced argument in the face of the present climate disaster demonstrates a complete lack of basic situational awareness, undermining any sort of credibility they may have with anyone who can consider the interactions of two separate systems; we can literally only pursue AI or ameliorate climate change right now with our current capacities in energy, water, and rare earths, and of those two only the latter is actually, demonstrably solved and merely requires global implementation. It demonstrates a selfish mindset of self-preservation (your career in AI) over the protection and support of the whole, and it's all just hype.
I'm not opposed to new technologies. Containers are a game changer, Kubernetes is improving its usability (in baby steps), composable infrastructure through code is a godsend, and the consumption-based models of public cloud providers have made hosting your own place on the internet cheaper than ever before. We've had some great innovations in these past ten years, almost all of which have been completely overshadowed by the perpetual hype train around get-rich investment schemes (crypto, blockchain, NFTs, AI). The hype is the problem, which is what the author was getting at.
It just wouldn’t be a solid rant against Big Tech hype cycles like AI and spellcheckers without using the word sheeple.
I hate obnoxious assholes who insist on taking a contrarian position on technology to such an extreme they become the same puritans they revolted against early in their career.
This forum is riddled with people like this.