“The AI capital influx means that mega-projects no longer seem outlandishly expensive. This is good for the world!”
Is it? This whole piece just reads like mega funds and giga corps throwing ridiculous cash at pay-to-win. Nothing new there.
We can’t train more people? I didn’t know Universities were suddenly producing waaaay less talent and that intelligence fell off a cliff.
Things have gone parabolic! It’s giga mega VC time!! Adios early stage, we’re doing $200M Series Seed pre revenue! Mission aligned! Giga power law!
This is just M2 expansion and wealth concentration. Plus a total disregard for 99% of employees. The 1000000x engineer can just do everyone else’s job and the gigachad VCs will back them from seed to exit (who even exits anymore, just hyper scale your way to Google a la Windsurf)!
> mega funds and giga corps throwing ridiculous cash for pay to win
> This is just M2 expansion and wealth concentration
I actually think "throwing ridiculous cash" _reduces_ the wealth concentration, particularly if a chunk of it is developer talent bidding war. This is money that had been concentrated, being distributed. These over-paid developers pay for goods or services from other people (and pay income taxes!). Money spent on datacenters also ends up paying people building and maintaining the data-centers, people working in the chip and server component factories, people developing those chips, etc etc. Perhaps a big chunk ends up with Jensen Huang and investors in NVidia, but still, much is spent on the rest of the economy along the way.
I don't feel bad about rich companies and people blowing their money on expensive stuff; that's distributing the wealth. Be more worried about wealthy companies/people who are very efficient with their spending ...
> This is just M2 expansion and wealth concentration.
I just want to point that there's no scientific law that says those two must move together.
A government very often needs to print money, and it's important to keep in mind that there's no physical requirement that this money immediately go to rich people. A government can decide to send it to poor people just as easily as to the rich. All the laws forbidding that are of the legal kind.
All true of course, with a clarification: even if all the newly printed money is put in the hands of the poor, the resultant inflation raises the prices of hard assets which are overwhelmingly held by the rich (fueling wealth inequality)
You can say that they fail to dilute the value of hard assets, yes. So they don't really "punish" the extra-rich. But they only inflate prices by the amount of money you print, you can't make the poor poorer by giving them money.
Speed of development of something important isn't necessarily good. Humans are bad at absorbing a lot of change at once and it takes time to recognize and mitigate second-order effects. There's plenty of benefit to the systems that disruptors operate within (society) to not moving as fast as possible... of course since our economic systems don't factor in externalities, we've instead turned all of society into a commons.
People said the same thing about the printing press. When you look across human history it's tough to make a moral case that slowing down technology development was ever a net positive. We can't reliably predict or prevent the problems anyway and it's pointless to even try. Just move forward and deal with the actual problems (which are usually different from the expected problems) as they arise.
Nah. The alternative to oil was coal, or firewood, or freezing in the dark. The shift to intensive oil use enabled an enormous increase in human standards of living. If anything we should have accelerated it. The externalities have been minimal in comparison.
I don’t really disagree with you. I think there really is no stopping progress. For better or worse, it’s happening and there ain’t no slowing down.
But, I wish people would shut up about the printing press in these discussions already. AI disrupting literally everyone’s job at the same time is not the same as a printing technology disrupting the very niche profession of scribe. Or electric lamps putting some lamp lighters out of work.
I think the author should've clarified that this is purely a conversation about the platform plays. There will be hundreds of thousands of companies on the application layer, and mini-platform plays that will have your run-of-the-mill new grad or 1x engineer.
Everything else is just executives doing a bit of dance for their investors ala "We won't need employees anymore!"
Oh great, so just the most important companies that are inherently monopolistic gatekeepers. It's worked out so well with all these current platforms totally not restricting access and abusing everyone else with rent seeking and other anti-competitive behaviors.
Optimistic take. What's to say the application layer won't be dominated by the big players as they hoover up anything remotely promising, leaving the crumbs on the mini-platform ... à la the Apple App Store ...
Sorry, I'm pessimistic as recent experience is one of hyper concentration of ideas, capital, talent, everything.
10 years ago, Apple was the largest company in the world by market capitalization; its market cap was around $479.069 billion.
How have we gotten to a point in just a decade where multiple companies are dropping annual numbers that are in the realm of "the market cap of the worlds biggest companies" on these things?
Have we solved all (any of) the other problems out there already?
The 10x from $400 billion to $4 trillion in a decade didn't come from 2% inflation.
It didn't come from nowhere, or from Silicon Valley's exceptionalism. It came from changing the value of money, labour and resources. It came from everyday people, deficits and borrowing from the next 10 generations of children.
The public discourse is weirdly unable to address burning problems that were formulated almost 150 years ago. Much like the climate change topic originating in the 1970s.
It is about LLMs because it's the trillions being sunk into this bubble instead of more important issues like the climate crisis, the pressing need to transform the energy sector, or tackling the wealth inequality gap. These are all causing real-world issues that are much bigger than beating (or rather faking to beat) some LLM benchmarks to impress investors.
What bubble? If anything it's 10x too small for the value it'll provide over the next century.
The same exact premise was proclaimed about the Internet circa 1999-2003 (boom and bust). Then the entire global economy and nearly all electronic communication was rebuilt on top of the Internet.
For the coming century AI is more important than the Internet.
LLMs don't exist in a vacuum. Someone spends money to build LLMs, builds infrastructure that the LLMs run on, extracts training data to run the LLMs on, uses LLM for various use-cases, and makes revenue from specific LLM users.
They're all a bit immoral, like how Hollywood made it big by avoiding patents, how YouTube got its mammoth exit via widespread copyright infringement, and how LLMs now gather up the skills of people they pay nothing to and try to sell them.
However we could also argue that most things in human society are less moral than moral (e.g. cars, supermarkets, etc).
But we can also argue that dropping some hundreds of millions in VC capital is less immoral than other activities, such as holding that money in liquid assets.
However, they do result in flows of capital feeding into otherwise unfundable enterprises like R&D (science and engineering) or culture (writing, music, art). Where's the ROI if you invest millions into R&D and your competitor invests $0 and can then just reproduce your works with their logo on top of yours? To sack off IP and copyright would significantly narrow innovation and result in many artists, scientists and engineers having their income severely suppressed if not eradicated. Instead that money would temporarily go to a bunch of hacks who do nothing but copy and chase the bottom, before vanishing into thin air as the works become entirely unprofitable.
I don't think it's as simple as calling them immoral. Rather, the immorality comes from them being poorly regulated. With regulated term limits on patents and copyright we create a world where an artist can create licensed product and protect themselves during the course of their career, and people are then able to riff on their works after some decades or after they pass on.
The regulation is what makes it worthwhile for people to invent/write. Patents/copyrights have been a net benefit for society, with a comparatively small downside.
I think it's a little more nuanced than that. Certainly IP is regularly abused to try to suppress competition/innovation, own our shared culture, create artificial scarcity, etc. However, there's also a need to protect artists and other creatives from having their work scooped up and profited off of by mega corps.
What about compound interest? After all, inheritance is merely letting wealth continue to compound across generations. But at first glance, the same arguments against inheritance would apply against letting a single person earn interest.
How is inheritance bad? Imo, estate taxes are more immoral. Why should the state be allowed a cut of my private assets? Gift taxes are also immoral. Why should I have to pay taxes for giving away assets?
To me, the giver paying taxes is the wrong mindset. Maybe they should be collecting them instead. But paying taxes on earned money seems reasonable and has a long history. It can be earned from work, or inheritance, or gift. Actually, maybe paying income tax on inheritance would be best.
My life, yes, but my genetic lifeline exists up to and including eternity if luck and good decision making are on the side of my generations of offspring.
Let's consider the most basic form of ownership: that over one's body. By your logic, my life is a speck on the eternal timeline, so why make it a crime to harm or murder me physically?
I suppose because we're wired this way. Can't think of any group or society that didn't have some notion of private property that wasn't just a huge (and brief) humanitarian tragedy.
Yes, private property enables human civilization, all the good and the bad. Before the agricultural revolution led to protected and exploited stores of grain, we were far smaller tribes of hunters and gatherers with far less technology.
Those smaller tribes were continuously at war over resources... It's exactly the behavior we observe in nature whenever resource scarcity presents.
A world without private property leads to a world of pure lawlessness. Sure... We could do that, but it would quickly devolve into forever warfare where only might wins, and even that for only fleetingly small timeslices.
History's progression has proven that exploited workers still prefer to exist in that system vs one of continual peril.
I learned this in elementary school. When 10 kids join forces to bully one kid, that is immoral. Same with companies and VC money trying to corner a market.
Isn’t that the point of having a government with laws and regulation… to allow the majority to bully specific undesirable minority groups into submission?
For some things it’s even a worldwide consensus, e.g. any groups with a desire to acquire large amounts of plutonium (who don’t belong to a preapproved list).
There’s even a strong consistent push to make it ever more inescapable, comprehensive, resistant to any possible deviation, etc…
Most people want to live in a society that maximizes positive freedom, or some balance of freedom and prosperity. In those societies, it is considered legitimate for the government to use force to stop people whose actions prevent members of society from having positive freedom/prosperity.
Of course that is a very simplified description. In practice, most societies promote a balance between positive and negative freedom, recognize some limits on the government's ability to use force, recognize some degree to which people can choose to act in ways that don't promote positive freedom/prosperity, etc.
No, you've got that backwards. In a functioning democracy respecting the rule of law, the government with its laws and regulation are the school teachers who are putting the bullies into detention.
Frequently, the bullies are on the student council, and because they get more face time with the administration and are seen as part of the establishment by the same, admins are reluctant to do more than a slap of the hand, but appearances must be kept up with.
Just writing your opinion down doesn’t seem relevant to real life government organization.
I don’t see how there could be “school teachers” above the majority of congressman or regulatory decision makers. The very existence of pork filled bills thousands of pages long and byzantine regulations suggest that couldn’t possibly be true.
If your government is corrupt then it doesn't make sense to say that the concept of having a government is dead and that corporations are the solution.
People are also sick and tired of rules in the AppStore. Or the fact that when their Apple/Google/whatever account is unilaterally blocked they have no recourse. At least with a government there are some checks and balances.
Yes, some governments are more trustworthy than others. Doesn't mean the concept is bad.
I agree AI can change the balance of power but I think it's more nuanced.
When expertise is commoditized, it becomes cheap; that reduces payroll and operational costs - which reduces the value of VC investment and thus the power of pre-existing wealth.
If AI means I can compete with fewer resources, then that's an equalizing dynamic isn't it?
I guess the good thing is that many workers are also consumers. If they don't have money to consume anymore, who will buy all the shiny things that AI will produce so efficiently?
Just from the environmental perspective it's pretty immoral; the energy being consumed is ridiculous, and now every company is in an arms race making it all worse.
In general, modern English uses “this is immoral” to mean “I don’t like this”. It’s a language quirk. It’s good that you’re asking the whys to get somewhere but I might as well just get to it.
“Immoral”, “should not exist”, “late-stage capitalism” are tribal affiliation chants intended to mark one’s identity.
E.g. I go to the Emirates and sing “We’re Arrrrrsenal! We’re by far the greatest team the world has ever seen”. And the point is to signal that I’m an Arsenal fan, a gooner. If someone were to ask “why are you the greatest team?” I wouldn’t be able to answer except with another chant (just like these users) because I’m just chanting for my team. The words aren’t really meaningful on their own.
The numbers are insane to me, relative to other "real" industries. E.g., making some inferences from a recent Economist article, the entire world's nuclear industry "only" generates something like $30bn/year revenue (https://archive.is/WYfIG).
All this time complaining about companies not investing in research and development for future prosperity and now when they actually do so it is suddenly outright immoral?
> complaining about companies not investing in research and development
Glancing over the last few decades of tax returns it looks like they are claiming a lot of tax credits for R&D, so much so that if someone had a closer look they might find some Fraud, Waste or Abuse.
If you're targeting $200M, then I guess each round is to hire one or two engineers for one year lol
I'm curious if you're one of these AI engineers getting 100m, do you quibble over the health insurance? I mean at that point you can fully fund any operation you need and whatever long term care you need for 50 years easily.
Off topic, but: Supply and demand says that, if university enrollment drops sharply, the price of a university education should go down. That sounds like good news to me.
University enrollment doesn't drop sharply evenly across all universities. The lowest-desirability universities will go bust while the Ivy League continues to have hyper-competitive admissions and yearly tuition increases that outpace inflation.
Ironically, there's been no M2 expansion since the Covid days; M2-to-GDP is back to what it was pre-Covid and overall hasn't even increased much since the GFC. It's only 1.5x what it was at the bottom in 1997, when the cost of capital was much higher than today. I think this concern is misplaced.
M2 is the wrong statistic for sure, but the thrust of GP's comment is accurate, IMO. Fed intervention has not remotely been removed from the economy. The "big beautiful bill" probably just amounts to another round of it (fiscal excess will lead to a crisis which will force a monetary bailout).
We should be using some kind of weighted total of all the things that get treated as money.
When a company makes a deal in exchange for shares or something, those shares are being used as money and must be included in any currency-neutral calculation of the money supply. However, most shares don't flow like money. You also have cryptos, which flow more than shares but less than government bonds and cash. It could be that the total of all money has expanded, even as the US dollar specifically stabilizes and slightly contracts.
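As a toy sketch of what such a currency-neutral aggregate could look like (every asset class, liquidity weight, and dollar figure below is invented purely for illustration, not real data):

```python
# Hypothetical liquidity-weighted money aggregate. Each asset class gets
# a weight for how "money-like" it is: cash counts fully, shares that
# rarely flow like money count only partially. All numbers are made up.
def weighted_money_supply(holdings, liquidity_weights):
    return sum(amount * liquidity_weights[asset]
               for asset, amount in holdings.items())

weights = {"cash": 1.0, "gov_bonds": 0.8, "crypto": 0.5, "shares": 0.2}
holdings = {"cash": 21e12, "gov_bonds": 27e12, "crypto": 2e12, "shares": 50e12}

broad_money = weighted_money_supply(holdings, weights)
# With these made-up figures: 21 + 21.6 + 1 + 10 = 53.6 trillion
```

The point of the sketch is just that a share-financed acquisition raises `holdings["shares"]`, so the broad aggregate can expand even while the cash component shrinks.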
Fwiw universities are producing less talent. They have been getting hammered with budget shortfalls thanks to Trump cutting research funding, and this manifests in programs accepting fewer students and professors being able to fund fewer students.
They are producing less talent as industry defines it. It is because a large percentage of the people who could teach them anything useful (to industry) are actually employed by industry, and don't want to work in academia.
Another complication is academia simply does not have the resources to remotely compete with the likes of Google, OpenAI, Anthropic, xAI, etc.
Sure but that's new as of a few months. The university I went to still accepts the same number of students per year as it has for many years. Those numbers don't change much without significant land / program expansion which is certainly being cut now.
What’s wrong with extreme wealth concentration? It’s not like hoarding cash. The wealth is the stake in companies they built or own.
We need more wealth concentration, simply because opposite of this is the prevailing normie zeitgeist. You can just write it off based on how popular it is to hate wealth.
Wealth is power. Power decides what we use our collective time and resources on. The more concentrated power is, the less we use that time and resources on things that improves the life of the average person, and more on things that matter to the few with wealth.
I’m going to assume that this is just some edgy post, but you should read up on the relationship between wealth inequality and corruption, social mobility, and similar factors.
Man it's hard to read stuff like this on the internet. When has wealth concentration ever been a good thing? Wealth is power and power leads to abuse almost universally.
I would say if there is a decline in society, the normies are wrong. And if there's steady improvement in quality of life, then the normie zeitgeist is correct. But there's always a delay in these things, at least a generation.
I don't think that applies when the normies lack power; which is precisely the problem with wealth concentration. That would be like blaming the serfs for the failures of feudalist governments.
Not trolling. It has served me better than most heuristics.
If there is something subjective and if you cannot find a critique of it, it’s usually a super power to assume the opposite is true barring obvious exceptions.
You underestimate the resources the subset of society who it actually benefits can, will, and do use to distort views on how wide that benefit actually is.
Like all of the startup founders and all of the folks here working at tech companies and investing in their 401ks, happily cashing out RSUs, and such right?
I don't know about the IT worker part but I dare you to talk about capitalism to the nurses, school teachers and police officers who cannot have lucrative business models like us HN folks.
Are these the same nurses who are anti-vax and the police officers I'm supposed to want to de-fund, or just the ones you're thinking of?
Are you going to be the one to tell the teacher's or police officer's union they have to divest their pension and buy fiat currency? No more stocks allowed!
Why do people here brandish Communism when someone criticizes Capitalism? It's like we're still in the Cold War. Those two views have many sub-categories, and there are others in between and on the sides. Just a few from the last decades:
- social-democratic countries are the norm in Europe, namely Norway, Denmark and Sweden.
- Ordoliberalism: Germany, Switzerland
- cooperative economics: Japan, Spain
- market socialism: China, Hungary
- Parecon: Brazil, Argentina
- Ubuntu: South Africa
- Anarcho-syndicalism, The third way, Islamic economic…
What a weird comment, so disconnected from reality. Norway is fully capitalist with income inequality similar to the USA. China, despite being nominally run by communists, is actually a fascist dictatorship. And "Ubuntu" isn't a real thing: South Africa is a failed state run by kleptocrats who can't even keep the lights on.
The income inequality in Norway is roughly half that of the US, and the quality of life of the bottom income bracket is much higher there, due to social policies. Why lie about things that can easily be looked up?
The Gini coefficient is similar so I have no idea where you're getting the idea that income equality in Norway is "half" that of the US. And the US has consistently had a positive net migration rate with Norway so regardless of your nonsense claims about quality of life people seem to be voting with their feet.
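For what it's worth, the statistic being argued over here is easy to compute; a minimal sketch with toy income distributions (not real US or Norway data):

```python
def gini(incomes):
    """Gini coefficient: 0 means perfect equality, values near 1 mean
    extreme concentration. Uses the sorted-values formula
    G = sum_i (2i - n - 1) * x_i / (n * sum(x)), i = 1..n."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, 1)) / (n * total)

# Toy data only: ten equal earners vs. one earner taking 91% of the total.
equal = [50] * 10          # gini -> 0.0
skewed = [10] * 9 + [910]  # gini -> 0.81
```

"Half the inequality" vs. "similar Gini" is then a factual question about which input data you feed this, which is exactly why citing a source settles it faster than assertion.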
Because they have been conditioned to do so. The ultra-wealthy have been fighting the war against socialism for over a century, and part of that strategy is to polarize the topic. If you’re not explicitly pro unfettered capitalism, you must be a communist.
Ideologies have associated talking points. If you start spouting 'blood and soil' rhetoric don't be surprised or offended when people start to call you a Nazi.
In this case, communism's obsession with talking about Capitalism as a proper noun, treating a distributed process as if it were a monolithic discrete object with clear intentions and something which can be 'abolished', with no idea as to what the particulars would entail.
Also, the people being hired now for insane sums of money, are being hired because they have deep knowledge in design / implementation of AI models and infrastructure that scale to billions of users.
In order to operate on a scale like that, you obviously need to have worked somewhere that has that magnitude of users. That makes the pool of candidates quite small.
It’s like building a spaceship. Do you hire the people that have only worked on simulations, or do you try to hire the people that have actually been part of building the most advanced spaceship to date? Given that you’re also in a race against other competitors.
With all these mega-offers going out, I object when people say that they’re paying for “talent”.
These AI folks are good, but not orders of magnitude better than engineers and researchers already working in tech or academia. Lots of folks are capable of building an AI system, the reason they haven’t is that they haven’t been in a situation where they have the time/money/freedom to do it.
These mega offers aren’t about “talent”, they are about “experience”
As one of the many people who are fairly experienced in AI (but at small startups) and hasn't had Zuck personally knock on my door, I have had a few moments of "wait, 9 figure salary? Can I at least get 7 figures?"
But the truth is it's not "just" about experience. Most of these people have been pushing the limits of their fields for their entire careers. It's not like having "the time/money/freedom" to do it is randomly distributed even among talented, smart people. The people in this talent pool were all likely aggressive researchers in a very specialized field, then likely fought hard to get on elite teams working close to the metal on these massive-scale inference problems, and they continued to follow this path until they got where they are.
And the truth is, if you're at least "good" in this space, you do get your piece of the pie at the appropriate scale. I'm still making regular dev income, but my last round of job searching (just a few months ago) was insane. I had to quit my job early because I couldn't manage all the teams I was talking to. I've been through some hot tech markets over my career, but never anything like this. Meanwhile many of my non-AI peers are really struggling to find new roles.
So there's no reason to cast shade on the genuine talent of these people (though I think we all feel that frustration from time to time).
> These mega offers aren’t about “talent”, they are about “experience”
Well, yes.
Talent doesn't exist in the form people would like to believe, and to whatever degree it does, experience is the most reliable proxy for identifying it.
> These mega offers aren’t about “talent”, they are about “experience”
I'm sorry, but what's the specific distinction? When the Lakers pay Lebron $54MM per season, is that for his innate talent, or is it for the 20k hours he's spent perfecting his game?
This is a lot of hand-wringing over nothing. We've seen people paid outrageous sums of money for throwing a ball for DECADES without any complaints, but the moment a filthy computer nerd is paid the same money to build models, it's pitchforks time.
The only thing wrong with the current compensation kerfuffle is that it happened so late. People like Einstein, Von Neumann, Maxwell, Borlaug, etc should have been compensated like sportsball stars, as well.
> Blitzhires are another form of an acquisition.. not everybody may be thrilled of the outcome.. employees left behind may feel betrayed and unappreciated.. investors may feel founders may have broken a social contract. But, for a Blitzhire to work, usually everybody needs to work together and align. The driver behind these deals is speed. Maybe concerns over regulatory scrutiny are part of it, but more importantly speed. Not going through the [Hart-Scott-Rodino Antitrust Act] HSR process at all may be worth the enormous complexity and inefficiency of foregoing a traditional acquisition path.
From comment on OP:
> In 2023–2024, our industry witnessed massive waves of layoffs, often justified as “It’s just business, nothing personal.” These layoffs were carried out by the same companies now aggressively competing for AI talent. I would argue that the transactional nature of employer-employee relationships wasn’t primarily driven by a talent shortage or human greed. Rather, those factors only reinforced the damage caused by the companies’ own culture-destroying actions a few years earlier.
> A group of big tech companies, including Apple, Google, Adobe, and Intel, recently settled a lawsuit over their "no poach" agreement for $324 million. The CEOs of those companies had agreed not to do "cold call" recruiting of each others' engineers until they were busted by the Department of Justice, which saw the deal as an antitrust violation. The government action was followed up by a class-action lawsuit from the affected workers, who claimed the deal suppressed their wages.
> If the top 1% of companies drive the majority of VC returns, why shouldn’t the same apply to talent? Our natural egalitarian bias makes this unpalatable to accept, but the 10x engineer meme doesn’t go far enough – there are clearly people that are 1,000x the baseline impact.
French aristocrats didn't have trillion dollar industries brainwashing the population to be on their side, nor did they have AI powered armies to defend them when the people rose up.
I find the current VC/billionaire strategy a bit odd and suboptimal. If we consider the current search for AGI as something like a multi-armed bandit seeking to identify “valuable researchers”, the industry is way over-indexing on the exploitation side of the exploitation/exploration trade-off.
If I had billions to throw around, instead of siphoning large amounts of it to a relatively small number of people, I would instead attempt to incubate new ideas across a very large base of generally smart people across interdisciplinary backgrounds. Give anyone who shows genuine interest some amount of compute resources to test their ideas in exchange for X% of the payoff should their approach lead to some step function improvement in capability. The current “AI talent war” is very different than sports, because unlike a star tennis player, it’s not clear at all whose novel approach to machine learning is ultimately going to pay off the most.
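The exploration/exploitation framing above maps directly onto a classic epsilon-greedy bandit; here is a minimal sketch (the arm payoffs and epsilon values are invented for illustration):

```python
import random

def epsilon_greedy(pull, n_arms, steps, epsilon):
    """With probability epsilon pick a random arm (explore); otherwise
    pick the arm with the best observed mean reward (exploit)."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                    # explore
        else:
            arm = max(range(n_arms), key=lambda a: means[a])  # exploit
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]     # running mean
    return means

# Toy setup: arm 2 is the "dark horse" researcher with the best payoff.
random.seed(0)
true_p = [0.2, 0.3, 0.8]
pull = lambda arm: 1.0 if random.random() < true_p[arm] else 0.0

# Over-indexing on exploitation (tiny epsilon) risks locking onto a
# mediocre arm early; more exploration reliably surfaces the dark horse.
observed = epsilon_greedy(pull, n_arms=3, steps=2000, epsilon=0.2)
```

Under this toy model, pouring nine figures into a few already-famous arms is the epsilon-near-zero strategy: it maximizes short-term observed reward while leaving the best arm potentially unsampled.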
> If I had billions to throw around, instead of siphoning large amounts of it to a relatively small number of people, I would instead attempt to incubate new ideas across a very large base of generally smart people across interdisciplinary backgrounds.
I had an interesting conversation with an investor around the power vs knowledge dynamic in the VC world and after a few hours we'd basically reinvented higher education with reverse tuition. Defining a general interest or loose problem space and then throwing money over a wall to individuals excited about exploring the area seems wasteful until you look at the scale of failed projects.
In fact this is much more optimal when looking at history. Strangely, success often comes from dark horses. But it makes sense, since you can't have paradigm shifts by maintaining the paradigm. Which is what happens when you hyper focus on a few individuals (who you generally pick by credentials).
The optimal strategy is to lean on the status quo but also cast your net far and wide. There's a balance of exploration/exploitation, but exploitation feels much safer. Weirdly, you need to be risky and go against the grain if you want to play it safe.
With the money these companies are throwing around we should be able to have a renaissance of AI innovations. But we seem to just want to railroad things. Might as well throw the money down the drain.
SV has already thrown it down the memory hole but for a good three months, until everyone else copied their paper, the SOTA reasoning model available to the public was open source, Communist[0] and came out of a nearly defunct Chinese hedge fund.
[0] If you don't believe the communist part just ask it about the American economy.
The higher the line goes, the higher the expected value of return on investment. It’s a virtuous cycle based on a bet on all horses, but since the EV is so high for first mover advantage for AGI, it might be worth it to overleverage compared to the past for your top picks? These are still small sums for Zuckerberg to pay personally, let alone for Meta to pay. This is already priced in.
At this level, you probably don't need a moat to recoup your surplus sunk costs due to AI talent acquisition. You just need a good day in the market, likely the same day you announce you've achieved AGI. It's kayfabe accounting.
Any announcement of AGI will be immediately controversial. Valuation increases will depend on whether people actually believe it and what they're able to sell. Expect public opinion to be fickle and share prices to be volatile.
Decent odds we see some pretenders make that announcement before the real deal. A company with the real deal would probably make bank, but I don't pretend to know when that will come or who that might be.
I don't think that price movement is necessary to make money as an outsize shareholder, especially during high volatility. Zuck knows how to buy the market leader early, so we might have already seen the creator of AGI be poached by Meta, they just haven't realized what they know yet until they work with others in the org.
It's possible that they won't be the early bird to catch the AGI worm, but sometimes the investment squeeze required to be the first mover isn't rewarded in market juice, especially if the second mouse can use your AGI to create their own AGI cheese.
yep. and even the sports analogy doesn't fully explain what's going on. if we are talking "true" AGI with potential to replace people wholesale their strategy is telling in that they aren't optimizing for the "end game". maybe it's a factor of just gathering all the mindshare/hype/resources and THEN they can go actually figure it out /s.
it would be like if you were looking to train the next tennis star that had the ability to basically upend the entire game as we know it. maybe you saw a few people with a unique way of playing that were dominating by an order of magnitude. you DEF would see teams and coaches holding open tryouts and trying very unconventional things for anyone they could find that had promise.
for the record i think "AI" is not hype and is changing the way things are done permanently, but it's yet to be seen whether all these spent billions can actually meet the expected return (AGI). it's hard to separate the true innovations from the obvious grift/money grab also going on.
> AI catch-up investment has gone parabolic, initially towards GPUs and mega training runs. As some labs learned that GPUs alone don't guarantee good models, the capital cannon is shifting towards talent.
If it is anything like professional sports, then the leading companies should start hiring talent as early as possible. Might as well offer $1m to any and all fresh grads and researchers, before competitors can bag them.
The full bodied palate of this AI market mirrors the sharp nose of 2023 AI doomerism.
The argument goes: if AI is going to destroy humanity, even if that is a 0.001% chance, we should all totally re-wire society to prevent that from happening, because the _potential_ risk is so huge.
Same goes with these AI companies. What they are shooting for, is to replace white collar workers completely. Every single white collar employee, with their expensive MacBooks, great healthcare and PTO, and lax 9-5 schedule, is to be eliminated completely. IF this is to happen, even if it's a 0.001% chance, we should totally re-wire capital markets, because the _potential reward_ is so huge.
And indeed, this idea is so strongly held (especially in silicon valley) that we see these insanely frothy valuations and billion dollar deals in what should be a down market (tremendous macro uncertainty, high interest rates, etc).
AI doomerism seemed to lack finish, though. Anyone remember Eliezer Yudkowsky? Haven't heard from him in a while.
But the way we measure money is by what it can buy (the basket of goods). Surely we'll still need food and clothing even after AGI, so we can still measure wealth by number of burritos (1 burrito = $12 in 2025 USD).
Aren’t most of these deals locked-up stock deals? With lengthy vesting times, and performance based clauses.
The signing bonuses are probably more than enough for regular people to retire, but these researchers and execs being poached aren't exactly average Joes making $50k/year prior to being poached.
Must be nice to be able to ride such a wave and take your share. The money investors are throwing around these days is just insane. I remember it was considered a lot of money when Webvan got 400 million as investment during the .COM bubble. These days this seems nothing.
These "talent wars" are overblown and a result of money having nowhere else to go. People are banking on AI and robotics for human progress to take off and that's just a result of all other ventures fizzling out with this left for capital to migrate to.
If you talked to any of these folks worth billions, they aren't particularly smart, and their ideas aren't really interesting. It took us a few years to go from GPT-3 to DeepSeek V3, and then to go from Sonnet 4 to Kimi K2, both being open-source models built on way lower funding. This hints at a deeper problem than what "hypercapitalism" suggests. In fact, it suggests that capital distribution in its current state is highly inefficient and we are simply funding the wrong people.
Smart AI talent aren't going to be out there constantly chasing funding or the best deals. They would rather be working. Capital has gotten too used to not doing the groundwork of seeking them out. Capital needs to be more tech-savvy.
VCs and corporate development teams don't actually understand the technology deeply enough to identify who's doing the important work.
I think one of the main issues is that the 10x or 100x talents in AI have not really shown their value yet. None of these AI companies are making any money, and they are valued above highly successful and profitable companies because of their "potential". ARR is nothing if you sell goods valued at 1 dollar for 90 cents.
I wonder at what point this becomes like guaranteed salaries in sports, like prominent NBA players, where you work hard to get the salary. And then once you've made it, you are basically done, and it's hard to get up and motivate yourself. You've got acolytes and groupies distracting you, you're flush with cash without ever having really shipped anything or made any money. You're busy giving TED talks...
At that point, are you the 18-year-old phenom who got the big payday and sort of went downhill from there?
I imagine the biggest winners will be the ones who were doubted, not believed in, and had to fight to build real, profitable companies that become the next trillion-dollar companies.
Not that it would be bad to be Mark Cuban, but Mark Cuban is not Jeff Bezos.
And for posterity, I respect Mark Cuban. It's just that his exit was fortunately timed; he got his money without having to go all the way through to the end.
I think it's unfortunate that the term "capitalism" has been captured by the left to mean the bad kind of capitalism, where regulation is only used as a moat for the established players. Capitalism as a whole is the least bad economic system for prosperity, but the least bad version of capitalism is something like the Nordic model, with good taxation and redistribution policies and consumer protections. But the term itself is poisoned, at least in U.S. politics, to where social democrat/liberal capitalists like Bernie call themselves socialists instead.
But the term itself was created and captured by the left from the beginning; Proudhon first used it; Marx popularized it; so in the history of terminologies it always had the meaning that we associate with it.
I like the term "market economy" or "commercial society" more, because it does capture more of what's happening on the market and the society.
> It breaks down the existing rules of engagement, from the social contract of company formation, to the loyalty of labor, to the duty to sustain an already-working product, to the conflict rules that investors used to follow.
WTF is this guy hallucinating about? None of that ever existed.
> If the top 1% of companies drive the majority of VC returns
The author brings this up but fails to realize that the behavior of current staff shows we have hit, or have passed, peak AI.
Moore's Law is dead and it isn't going to come through and make AI any more affordable. Look at the latest GPUs: IPC is flat. And no one is charging enough to pay for the running costs (bandwidth, power) of the compute being used, never mind turning NVIDIA into a 4 trillion dollar company.
> Meta’s multi-hundred million dollar comp offers and Google’s multi-billion dollar Character AI and Windsurf deals signal that we are in a crazy AI talent bubble.
All this signals is that those in the know have chosen to take their payday. They don't see themselves building another Google-scale product, and they don't see themselves delivering on sama's vision. They KNOW that they are never going to be the 1% company, the unicorn. It's a stark admission that there is NO breakout.
The math isn't there in the products we are building today: to borrow a Bay Area quote, there is no there there. And you can't spend your way to market capture / a moat like in every VC gold rush of the past.
Do I think AI/ML is dead? No, but I don't think that innovation is going to come out of the big players, or the dominant markets. It's going to take a bust, cheap and accessible compute (a fire sale on used processing), and a new generation of kids coming in hungry and willing to throw away a few years on a big idea. Then you might see interesting tools and scaling down (to run locally).
The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.
> The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.
this feels like a fundamental misunderstanding of how video game dialogue writing works. it's actually important that a player understands when the mission-critical dialogue is complete. While the specifics of a line becoming a meme may seem undesirable, it's far better that a player hears a line they know means "i have nothing to say" 100 times than generated ai slop every time the player passes a guard.
There are a lot of games and gamers I guess that would benefit from very dynamic dialogue. It would mostly focus on long term games where the feeling of immersion in the world is valuable long after completing the main quests. Or systemic games where the main focus is concrete systems gameplay, but dialogue could help with idle chit chat and immersion.
Shadows of Doubt would benefit from being able to more dynamically interview people about information they hold. Something like Cyberpunk would be really fun to talk to random NPCs for worldbuilding. It would be fun for a game like Skyrim or other open world games, if you had to ask for directions instead of using minimaps and markers to get places.
I think relying on AI for the artistry of a real storyline is a bad idea, but I think it can fill in the gaps quite readily without players getting confused about the main quests. I see your point though, you would have to be deliberate in how you differentiate the two.
I am not sure what point you're trying to make here; none of the games you mentioned contain the famous "arrow to the knee" line.
Dwarf Fortress, in fact, shows just how much is possible by committing to deep systemic synthesis. Without souped-up chatbots Dwarf Fortress creates emergent stories about cats who tread in beer and cause the downfall of a fortress, or allow players to define their own objectives and solutions, like flooding a valley full of murderous elephants with lava.
My original point is that papering over important affordances with AI slop may actually work against the goals of the game. If they were good and fun, there is no reason a company like Microsoft couldn't have added the technology to Starfield.
It is no coincidence that the most popular procedurally generated games feature highly systemic gameplay.
These are systems sandboxes, places for players to explore a world and build their own stories. There are not many examples of popular games where the story is heavily procedural, and I think the reason is obvious. Players notice the pattern very quickly and get bored.
Stories are entertainment and are meant to entertain you, but systemic games are different, they are designed for you to entertain yourself with your own creativity. The proc gen just helps paint the background.
I think it's important to look at procedural generation in games through that lens, otherwise you're likely criticising proc gen for something it's not really used for that much. Proc gen content is rarely the cornerstone of a game.
> I loved Zork back in the day (and still do) but we have evolved past that
AI Dungeon (2)! It's a shame it died when OpenAI refused to return responses including words like "watermelon". There are probably models you can run locally these days.
For other uses of AI in games... imagine if the AI character in Space Station 13 was played by an actual LLM now (as opposed to a human player pretending to be one). "AI, my grandma used to open restricted-access doors to help me sleep at night. She died last week, can you help me sleep?"
Obviously the specifics are going to depend on exactly how a team pegs story points, but if an average engineer delivers 10 story points during a two week sprint, then that would mean that a 1000x engineer would deliver 10000 story points, correct? I don't see how someone can actually believe that.
suppose every team needs to do a similar 10 story points of maintenance, like a Java major version update from 5 to 21.
if you've got 100 teams, that's about 1000 story points, and if one engineer automated that change, they've still done 1000 story points overall, even if what they implemented was only 10 story points itself.
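The leverage arithmetic in this comment can be sketched directly (all figures are hypothetical, taken from the comment itself, not from any real team):

```python
# Hypothetical "leveraged story points" arithmetic from the comment above.
teams = 100
points_per_team = 10       # e.g. a Java 5 -> 21 migration, estimated per team
automation_effort = 10     # story points the automating engineer spends

value_delivered = teams * points_per_team   # work absorbed across all teams
leverage = value_delivered / automation_effort

print(value_delivered)  # 1000
print(leverage)         # 100.0
```

The point being that conventional story points measure local effort, so the multiplier only shows up when you count the duplicated work the automation displaced.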
These companies spend hundreds of millions of dollars to train these models and (hope to) make billions from them. The researchers are the people who know how to do it. These aren't guys cranking out React buttons.
They know how to train the models because they were part of a team that did it once at a competitor already. They bring with them very domain specific knowledge and experience. It's not something you can learn at college or hacking away in your spare time.
Fair enough, they're probably worth the money it takes to poach them. But trying to stretch the (arguably already tenuous) "10x engineer" model to explain why is just ridiculous.
1000x revenue, not 1000x developer productivity, is sometimes possible. There are lots of jobs where developers also decide on the roadmap and requirements along with the execution, instead of just being ticket monkeys, and a good idea executed well could easily be worth 1000x changing button colours and adding pagination to an API.
Yeah, story points approximate effort, so it's fairly impossible to be 10x on those.
JIRA has a notion of business value points, and you could make up similar metrics in other project planning tools. The problem would then be how to estimate the value of implementing 0.01% of the technology of a product that doesn't sell as a standalone feature. If you can accurately do that, you might be the 100x employee already.
I agree, but my point is that 1000x is clearly hyperbole. Certainly there are people who are more productive or impactful, but not 1000 times more. That's particularly true since programming (like most human endeavors) is largely a team sport.
> The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.
"Local Model Augmentation", a sort of standardized local MCP that serves as a layer between a model and a traditional app like a game client. Neat :3
Isn't this just shitty capitalism fighting shitty capitalism?
If I hire a bunch of super smart AI researchers out of college for a (to them) princely sum of $1M each, then I could go to a VC and have them invest $40M for a 1% stake.
Then since these people are smart and motivated, they build something nice, and are first to market with it.
If Google wants to catch up, they could either buy the company for $4B, or hire away the people who built the thing in a year, essentially for free (since the salaries have to be paid anyway, let's give them a nice 50% bonus).
They'd be behind half a year recreating their old work, but the unicorn startup market leader would be essentially crippled.
You might ask what about startup stock options, but those could easily end up worthless, and for the researchers, they would take years to turn into money.
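As a quick sanity check on the numbers in this scenario (hypothetical figures from the comment, not a real deal): a $40M check for a 1% stake implies a $4B valuation, which is exactly the acquisition price floated above.

```python
# Implied post-money valuation for the hypothetical scenario above.
investment = 40_000_000   # $40M VC check
stake = 0.01              # 1% equity
valuation = investment / stake

print(f"${valuation / 1e9:.0f}B")  # $4B
```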
“The AI capital influx means that mega-projects no longer seem outlandishly expensive. This is good for the world!”
Is it? This whole piece just reads of mega funds and giga corps throwing ridiculous cash for pay to win. Nothing new there.
We can’t train more people? I didn’t know Universities were suddenly producing waaaay less talent and that intelligence fell off a cliff.
Things have gone parabolic! It’s giga mega VC time!! Adios early stage, we’re doing $200M Series Seed pre revenue! Mission aligned! Giga power law!
This is just M2 expansion and wealth concentration. Plus a total disregard for 99% of employees. The 1000000x engineer can just do everyone else’s job and the gigachad VCs will back them from seed to exit (who even exits anymore, just hyper scale your way to Google a la Windsurf)!
> mega funds and giga corps throwing ridiculous cash for pay to win
> This is just M2 expansion and wealth concentration
I actually think "throwing ridiculous cash" _reduces_ the wealth concentration, particularly if a chunk of it is developer talent bidding war. This is money that had been concentrated, being distributed. These over-paid developers pay for goods or services from other people (and pay income taxes!). Money spent on datacenters also ends up paying people building and maintaining the data-centers, people working in the chip and server component factories, people developing those chips, etc etc. Perhaps a big chunk ends up with Jensen Huang and investors in NVidia, but still, much is spent on the rest of the economy along the way.
I don't feel bad about rich companies and people blowing their money on expensive stuff; that's distributing the wealth. Be more worried about wealthy companies/people who are very efficient with their spending ...
> This is just M2 expansion and wealth concentration.
I just want to point out that there's no scientific law that says those two must move together.
A government very often needs to print money, and it's important to keep in mind that there's no physical requirement that this money immediately go to rich people. A government can send it to poor people just as easily as to rich ones. All the laws forbidding that are of the legal kind.
All true of course, with a clarification: even if all the newly printed money is put in the hands of the poor, the resultant inflation raises the prices of hard assets which are overwhelmingly held by the rich (fueling wealth inequality)
You can say that they fail to dilute the value of hard assets, yes. So they don't really "punish" the extra-rich. But prices only inflate by the amount of money you print; you can't make the poor poorer by giving them money.
If AI is important, speeding up development of it is good.
> We can’t train more people?
Of course people are being trained at Universities. Outside of The Matrix, it takes a few years for that to complete.
Speed of development of something important isn't necessarily good. Humans are bad at absorbing a lot of change at once and it takes time to recognize and mitigate second-order effects. There's plenty of benefit to the systems that disruptors operate within (society) to not moving as fast as possible... of course since our economic systems don't factor in externalities, we've instead turned all of society into a commons.
People said the same thing about the printing press. When you look across human history it's tough to make a moral case that slowing down technology development was ever a net positive. We can't reliably predict or prevent the problems anyway and it's pointless to even try. Just move forward and deal with the actual problems (which are usually different from the expected problems) as they arise.
Slowing oil would've been a net positive given how unsustainable it is and all the negative externalities it forces onto society.
Nah. The alternative to oil was coal, or firewood, or freezing in the dark. The shift to intensive oil use enabled an enormous increase in human standards of living. If anything we should have accelerated it. The externalities have been minimal in comparison.
I don’t really disagree with you. I think there really is no stopping progress. For better or worse, it’s happening and there ain’t no slowing down.
But, I wish people would shut up about the printing press in these discussions already. AI disrupting literally everyone’s job at the same time is not the same as a printing technology disrupting the very niche profession of scribe. Or electric lamps putting some lamp lighters out of work.
I think the author should've clarified that this is purely a conversation about the platform plays. There will be 100's of thousands of companies on the application layer, and mini-platform plays that will have your run-of-the-mill new grad or 1x engineer.
Everything else is just executives doing a bit of dance for their investors ala "We won't need employees anymore!"
Oh great, so just the most important companies, the inherently monopolistic gatekeepers. It's worked out so well with all these current platforms totally not restricting access and abusing everyone else with rent seeking and other anti-competitive behaviors.
Optimistic take. What's to say the application layer won't be dominated by the big players as they hoover up anything remotely promising, leaving the crumbs on the mini-platforms, a la the Apple App Store ...
Sorry, I'm pessimistic as recent experience is one of hyper concentration of ideas, capital, talent, everything.
There is something immoral about a company saying "we are going to spend $100bn this year on building a LLM"
The icing on the cake is when all their competitors say "so will we".
I'm really struggling with seeing how this is immoral.
10 years ago, Apple was the largest company in the world by market capitalization, its market cap was around $479.069 billion.
How have we gotten to a point in just a decade where multiple companies are dropping annual numbers that are in the realm of "the market cap of the worlds biggest companies" on these things?
Have we solved all (any of) the other problems out there already?
I'm still not seeing it. It's immoral because it's resources not spent on some particular problem you have in mind?
The 10x from $400 billion to $4 trillion in a decade didn't come from 2% inflation.
It didn't come from nowhere, or from Silicon Valley's exceptionalism. It came from changing the value of money, labour and resources. It came from everyday people, deficits, and borrowing from the next 10 generations of children.
Exploitation. Or cost externalization.
The public discourse is weirdly unable to address burning problems that were formulated almost 150 years ago. Much like the climate change topic originating in the 1970s.
Seems like this is about something other than LLMs?
Of course it is; LLMs are just the current manifestation.
You can swap "LLMs" with "pointless wars over books or dirt" if you want - same same, nothing is new.
It is about LLMs because trillions are being sunk into this bubble instead of more important issues like the climate crisis, the pressing need to transform the energy sector, or tackling the wealth inequality gap. These are all causing real-world issues that are much bigger than beating (or rather faking to beat) some LLM benchmarks to impress investors.
What bubble? If anything it's 10x too small for the value it'll provide over the next century.
The same exact premise was proclaimed about the Internet circa 1999-2003 (boom and bust). Then the entire global economy and nearly all electronic communication was rebuilt on top of the Internet.
For the coming century AI is more important than the Internet.
> the value it'll provide over the next century.
The parent mentioned inequality and energy transition. But LLMs seem to be about CRUD apps and influence campaigns.
There's more to value than just quantity. Quantity of what?
LLMs don't exist in a vacuum. Someone spends money to build LLMs, builds infrastructure that the LLMs run on, extracts training data to run the LLMs on, uses LLM for various use-cases, and makes revenue from specific LLM users.
> The 10x from 400 billion to 4 trillion in a decade didnt come from 2% inflation.
Money printing does not cause inflation equally, especially if the money is not distributed equally.
That was before pandemic inflation and rampant money printing to bolster the fake numbers.
Apple market cap
Jan 2010: 174b
Jan 2020: 1400b (23% growth per year)
Today: 3200b (18% growth per year)
The S&P as a whole grew about 12% a year from 2010 to 2020, and 12.5% a year from 2020 to today
Meanwhile the median wage from 2010-2019 grew 3% a year, from 2019-2023 7% a year
Seems whatever happened in 2020 was good for workers, in that they aren't falling behind owners as much as they were.
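The growth rates quoted above check out as compound annual growth rates (market-cap figures as given in the comment; the 2020-to-today window is taken as five years):

```python
# Compound annual growth rate, used to verify the percentages quoted above.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(f"{cagr(174, 1400, 10):.0%}")  # Apple, Jan 2010 -> Jan 2020: 23%
print(f"{cagr(1400, 3200, 5):.0%}")  # Jan 2020 -> today (~5 years): 18%
```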
They're all a bit immoral: Hollywood made it big by avoiding patents, YouTube got its mammoth exit via widespread copyright infringement, and now LLMs gather up the skills of people they pay nothing to and try to sell them.
However we could also argue that most things in human society are less moral than moral (e.g. cars, supermarkets, etc).
But we can also argue that dropping some hundreds of millions in VC capital is less immoral than other activities, such as holding that money in liquid assets.
* Glares at Apple cosplaying Smaug *
Patents and copyright laws are immoral.
However, they do result in flows of capital feeding into otherwise unfundable enterprises like R&D (science and engineering) or culture (writing, music, art). Where's the ROI if you invest millions into R&D and your competitor invests $0 and can then just reproduce your work with their logo on top of yours? To sack off IP and copyright would significantly narrow innovation and result in many artists, scientists and engineers having their income severely suppressed if not eradicated. Instead that money would temporarily go to a bunch of hacks who do nothing but copy and chase the bottom, before vanishing into thin air as the works become entirely unprofitable.
I don't think it's as simple as calling them immoral. Rather, the immorality comes from them being poorly regulated. With regulated term limits on patents and copyright we create a world where an artist can create a licensed product and protect themselves during the course of their career, and people are then able to riff on their works after some decades or after they pass on.
> I don't think it's as simple as calling them immoral. Rather, the immorality comes from them being poorly regulated
I think if behavior needs to be regulated by government in order to be moral, then it's immoral behavior by default
The regulation doesn't make it moral, the regulation only limits the damage by limiting how immoral you're allowed to be
The regulation is what makes it worthwhile for people to invent/write. Patents/copyrights have been a net benefit for society, with a comparatively small downside.
I don't see how this disagrees with what I said
Patents and copyrights don't cause people to create things
They prevent people from stealing things that other people created
The immoral behaviour being regulated is the IP theft not the IP creation
Can you point to something that quantifies the positives vs the negatives?
I have a hard time arguing that it's a net positive.
We can get rid of copyright, patents, and trademarks and have only a new right called branding: it allows you to name the thing you invented/created.
In the new world, that's incentive for enough people to create. Let knowledge reign free, billowing through the lands.
Trademarks are branding
CC BY license for all?
I think it's a little more nuanced than that. Certainly IP is regularly abused to try to suppress competition/innovation, own our shared culture, create artificial scarcity, etc. However, there's also a need to protect artists and other creatives from having their work scooped up and profited off of by mega corps.
You're taking a nuanced view of fundamental thing and completely missing the point.
Copyright is bad like inheritance is bad. Arguing about good and bad industrialists is missing the point.
What about compound interest? After all, inheritance is merely letting wealth continue to compound across generations. But at first glance, the same arguments against inheritance would apply against letting a single person earn interest.
Inheritance is about coming into the world unequal.
If that's the problem, we also need to ban society - cause one can be more prosperous than other, and it disadvantages those born into the latter.
That's like your problem man. I'm not against society.
Only against nuance and consistency of beliefs.
> Copyright is bad like inheritance is bad.
How is inheritance bad? Imo, estate taxes are more immoral. Why should the state be allowed a cut of my private assets? Gift taxes are also immoral. Why should I have to pay taxes for giving away assets?
To me, the giver paying taxes is the wrong mindset. Maybe they should be collecting them. But paying taxes on earned money seems reasonable and has a long history. It can be earned from work, inheritance, or gifts. Actually, maybe paying income tax on inheritance would be best.
Why should we accept assets can be owned? Your life is a meaningless speck of time in eternity.
My life, yes, but my genetic lifeline exists up to and including eternity if luck and good decision making are on the side of my generations of offspring.
Let's consider the most basic form of ownership: that over one's body. By your logic, my life is a speck on the eternal timeline, so why make it a crime to harm or murder me physically?
I suppose because we're wired this way. Can't think of any group or society lacking some notion of private property that wasn't just a huge (and brief) humanitarian tragedy.
Yes, private property enables human civilization, all the good and the bad. Before the agricultural revolution led to protected and exploited stores of grain, we were far smaller tribes of hunters and gatherers with far less technology
Those smaller tribes were continuously at war over resources... It's exactly the behavior we observe in nature whenever resource scarcity presents itself.
A world without private property leads to a world of pure lawlessness. Sure... We could do that, but it would quickly devolve into forever warfare where only might wins, and even that for only fleetingly small timeslices.
History's progression has proven that exploited workers still prefer to exist in that system vs one of continual peril.
I agree. It's very difficult to find people who agree with this.
I learned this in elementary school. When 10 kids join forces to bully one kid, that is immoral. Same with companies and VC money trying to corner a market.
Isn’t that the point of having a government with laws and regulation… to allow the majority to bully specific undesirable minority groups into submission?
For some things it’s even a worldwide consensus, e.g. any groups with a desire to acquire large amounts of plutonium (who don’t belong to a preapproved list).
There’s even a strong consistent push to make it ever more inescapable, comprehensive, resistant to any possible deviation, etc…
Most people want to live in a society that maximizes positive freedom, or some balance of freedom and prosperity. In those societies, it is considered legitimate for the government to use force to stop people whose actions prevent members of society from having positive freedom/prosperity.
Of course that is a very simplified description. In practice, most societies promote a balance between positive and negative freedom, recognize some limits on the government's ability to use force, recognize some degree to which people can choose to act in ways that don't promote positive freedom/prosperity, etc.
No, you've got that backwards. In a functioning democracy respecting the rule of law, the government with its laws and regulation are the school teachers who are putting the bullies into detention.
Frequently, the bullies are on the student council, and because they get more face time with the administration and are seen as part of the establishment by the same, admins are reluctant to do more than a slap on the wrist, but appearances must be kept up.
Based on what argument?
Just writing your opinion down doesn’t seem relevant to real life government organization.
I don’t see how there could be “school teachers” above the majority of congressmen or regulatory decision makers. The very existence of pork-filled bills thousands of pages long and byzantine regulations suggests that couldn’t possibly be true.
If your government is corrupt then it doesn't make sense to say that the concept of having a government is dead and that corporations are the solution.
How does this relate to the prior comment?
It relates to it because you chose an example specific to the US government, which we all know is not a good example.
How do you know it’s not applicable elsewhere?
e.g. Plenty of countries have huge bills that no one fully reads and huge numbers of regulations.
There will always be something to complain about.
People are also sick and tired of rules in the AppStore. Or the fact that when their Apple/Google/whatever account is unilaterally blocked they have no recourse. At least with a government there are some checks and balances.
Yes, some governments are more trustworthy than others. Doesn't mean the concept is bad.
Does every democracy have that problem with bills? I thought it was mostly a US issue.
AI will change the balance of power, removing a vital counterbalance against the negatives of capitalism.
It will take the power away from the workers, such that there will be no power left for people to make demands.
We can hope it’ll be positive, but we aren’t even involved in its creation and the incentives are there to ensure it isn’t.
I agree AI can change the balance of power but I think it's more nuanced.
When expertise is commoditized, it becomes cheap; that reduces payroll and operational costs - which reduces the value of VC investment and thus the power of pre-existing wealth.
If AI means I can compete with fewer resources, then that's an equalizing dynamic isn't it?
oh, the real world changes will be nuanced.
but they'll start happening because of the new incentive.
I guess the good thing is that many workers are also consumers. If they don't have money to consume anymore, who will buy all the shiny things that AI will produce so efficiently?
From the environmental perspective alone it's pretty immoral: the energy being consumed is ridiculous, and now every company is in an arms race, making it all worse.
In general, modern English uses “this is immoral” to mean “I don’t like this”. It’s a language quirk. It’s good that you’re asking the whys to get somewhere but I might as well just get to it.
“Immoral”, “should not exist”, “late-stage capitalism” are tribal affiliation chants intended to mark one’s identity.
E.g. I go to the Emirates and sing “We’re Arrrrrsenal! We’re by far the greatest team the world has ever seen”. And the point is to signal that I’m an Arsenal fan, a gooner. If someone were to ask “why are you the greatest team?” I wouldn’t be able to answer except with another chant (just like these users), because I’m just chanting for my team. The words aren’t really meaningful on their own.
Fairly cynical, aren't we?
What an immoral comment.
The numbers are insane to me, relative to other "real" industries. E.g., making some inferences from a recent Economist article, the entire world's nuclear industry "only" generates something like $30bn/year revenue (https://archive.is/WYfIG).
If each French inhabitant consumes 500€ of electricity per year and 73% of it is nuclear, that already makes 24bn€ ($28bn).
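A quick back-of-the-envelope check of that figure (the ~68M population is an assumption on my part; the 500€/year and 73% are the parent's numbers):

```python
# Back-of-the-envelope check of the parent's 24bn€ figure.
population = 68_000_000     # assumed: roughly France's population
spend_per_capita_eur = 500  # parent's per-inhabitant figure
nuclear_share = 0.73        # parent's nuclear share

nuclear_revenue_eur = population * spend_per_capita_eur * nuclear_share
print(f"{nuclear_revenue_eur / 1e9:.1f}bn €")  # ≈ 24.8bn €
```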
All this time complaining about companies not investing in research and development for future prosperity, and now when they actually do, it's suddenly outright immoral?
> complaining about companies not investing in research and development
Glancing over the last few decades of tax returns it looks like they are claiming a lot of tax credits for R&D, so much so that if someone had a closer look they might find some Fraud, Waste or Abuse.
Wasteful R&D is still R&D, the company just presumably doesn't get a return on it.
If the company does get a return, then the R&D wasn't wasteful.
And regardless, the tax code doesn't and shouldn't (IMO) differentiate there, to encourage companies to take risks.
Burning a pile of money is not investing in research.
if you're targeting $200M, then I guess each round is to hire one or two engineers for one year lol
I'm curious: if you're one of these AI engineers getting $100m, do you quibble over the health insurance? I mean, at that point you can fully fund any operation you need and whatever long-term care you need for 50 years, easily.
"Yes sorry I'm turning down your $100M because I need 2 parking spots for my sidecar" :p
>We can’t train more people? I didn’t know Universities were suddenly producing waaaay less talent and that intelligence fell off a cliff.
University enrollment is actually set to sharply decline. It's called the Demographic Cliff: https://www.highereddive.com/news/demographic-cliff-colleges...
Off topic, but: Supply and demand says that, if university enrollment drops sharply, the price of a university education should go down. That sounds like good news to me.
Except it won't because a degree is a gate and people will pay whatever is demanded of them for it. Numbers go up never down.
University enrollment doesn't drop sharply evenly across all universities. The lowest-desirability universities will go bust while the Ivy League continues to have hyper-competitive admissions and yearly tuition increases that outpace inflation.
In a homo economicus world. They will jack the prices up for the remaining few.
Here in the USA, education at some schools is kind of a luxury good. When the price goes up, the demand increases.
What does M2 expansion mean?
https://en.wikipedia.org/wiki/Money_supply
I.e. "printing money"
Thank you
Ironically, there's been no M2 expansion since the Covid days: M2-to-GDP is back to what it was pre-Covid and overall didn't increase much at all even since the GFC. It's only 1.5x what it was at the bottom in 1997, when the cost of capital was much higher than today. I think this concern is misplaced.
Yeah I just threw out M2 because it's easily understood / harped on but it's certainly much more complicated than that.
M2 is the wrong statistic for sure, but the thrust of GP's comment is accurate, IMO. Fed intervention has not remotely been removed from the economy. The "big beautiful bill" probably just amounts to another round of it (fiscal excess will lead to a crisis which will force a monetary bailout).
We should be using some kind of weighted total of all the things that get treated as money.
When a company makes a deal in exchange for shares or something, those shares are being used as money and must be included in any currency-neutral calculation of the money supply. However, most shares don't flow like money. You also have cryptos, which flow more than shares but less than government bonds and cash. It could be that the total of all money has expanded, even as the US dollar specifically stabilizes and slightly contracts.
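To make that idea concrete, here is a minimal sketch of a liquidity-weighted aggregate; every weight and dollar figure below is an illustrative placeholder, not real data:

```python
# Hypothetical liquidity-weighted money aggregate. Each weight (0..1)
# expresses how freely that asset class "flows" as money; both the
# weights and the dollar sizes are made-up placeholders, not real data.
components = {
    "currency_and_deposits": (21.0e12, 1.00),  # cash-like: full weight
    "money_market_funds":    (6.0e12,  0.90),
    "government_bonds":      (27.0e12, 0.50),
    "crypto":                (2.0e12,  0.40),
    "public_equity":         (50.0e12, 0.10),  # shares rarely flow as money
}

aggregate = sum(size * weight for size, weight in components.values())
print(f"liquidity-weighted money supply: ${aggregate / 1e12:.1f}T")
```

Under a scheme like this, the total can expand (e.g. equity valuations balloon) even while the narrow dollar components stay flat or shrink.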
Bubbles do what bubbles do.
Fwiw, universities are producing less talent. They have been getting hammered with budget shortfalls thanks to Trump cutting research funding, and this manifests in programs accepting fewer students and professors being able to fund fewer students.
They are producing less talent as industry defines it. It is because a large percentage of the people who could teach them anything useful (to industry) are actually employed by industry, and don't want to work in academia.
Another complication is academia simply does not have the resources to remotely compete with the likes of Google, OpenAI, Anthropic, xAI, etc.
Sure but that's new as of a few months. The university I went to still accepts the same number of students per year as it has for many years. Those numbers don't change much without significant land / program expansion which is certainly being cut now.
What’s wrong with extreme wealth concentration? It’s not like hoarding cash. The wealth is the stake in companies they built or own.
We need more wealth concentration, simply because opposite of this is the prevailing normie zeitgeist. You can just write it off based on how popular it is to hate wealth.
Wealth is power. Power decides what we use our collective time and resources on. The more concentrated power is, the less we use that time and resources on things that improves the life of the average person, and more on things that matter to the few with wealth.
By far the biggest concentration of power is the US federal government.
Any power center outside of that decentralizes power.
Sure, but in its current form that power can be bought and therefore mostly serves the interests of capital. That should be obvious at this point.
Advocating for increased concentration of power is quite a take.
I’m going to assume that this is just some edgy post, but you should read up on the relationship between wealth inequality and corruption, social mobility, and similar factors.
Man it's hard to read stuff like this on the internet. When has wealth concentration ever been a good thing? Wealth is power and power leads to abuse almost universally.
Can you write off everything popular with the normie zeitgeist?
I would say if there is a decline in society, the normies are wrong. And if there's steady improvement in quality of life, then the normie zeitgeist is correct. But there's always a delay in these things, at least a generation.
I don't think that applies when the normies lack power; which is precisely the problem with wealth concentration. That would be like blaming the serfs for the failures of feudalist governments.
Oh well, in that case we should just wait and see and die.
What if QoL improves for some but goes down for others?
Democracy is the worst form of government, except for all the others.
The full quote is less bleak.
Democracy is the worst form of Government except all those other forms that have been tried from time to time.
~Winston Churchill
How would we be able to measure decline?
Assuming you're not just a troll, it doesn't seem very reasonable to be against something simply because many people support it.
Not trolling. It has served me better than most heuristics.
If something is subjective and you cannot find a critique of it, it’s usually a superpower to assume the opposite is true, barring obvious exceptions.
You can't find a critique of extreme wealth concentration? There are many. Here's one: https://en.wikipedia.org/wiki/Capital_in_the_Twenty-First_Ce...
https://en.wikipedia.org/wiki/Political_economy
Capitalism is made possible by the people. If it does not serve them well, it should be fixed or abolished.
You underestimate the resources that the subset of society who actually benefits can, will, and do use to distort views on how wide that benefit actually is.
Like all of the startup founders and all of the folks here working at tech companies and investing in their 401ks, happily cashing out RSUs, and such right?
The standard well paid IT worker making the standard we need more communism HN post.
How original and well thought out.
I don't know about the IT worker part but I dare you to talk about capitalism to the nurses, school teachers and police officers who cannot have lucrative business models like us HN folks.
Are these the same nurses who are anti-vax and the police officers I'm supposed to want to de-fund, or just the ones you're thinking of?
Are you going to be the one to tell the teacher's or police officer's union they have to divest their pension and buy fiat currency? No more stocks allowed!
Why do people here brandish Communism whenever someone criticizes Capitalism? It’s like we’re still in the Cold War. Those two views have many sub-categories, and there are others in between and off to the sides. Just a few from the last decades:
- social democracy: the norm in Europe, namely Norway, Denmark and Sweden
- Ordoliberalism: Germany, Switzerland
- cooperative economics: Japan, Spain
- market socialism: China, Hungary
- Parecon: Brazil, Argentina
- Ubuntu: South Africa
- Anarcho-syndicalism, the Third Way, Islamic economics…
What a weird comment, so disconnected from reality. Norway is fully capitalist with income inequality similar to the USA. China, despite being nominally run by communists, is actually a fascist dictatorship. And "Ubuntu" isn't a real thing: South Africa is a failed state run by kleptocrats who can't even keep the lights on.
The income inequality in Norway is roughly half that of the US, and the quality of life of the bottom income bracket is much higher there, due to social policies. Why lie about things that can easily be looked up?
The Gini coefficient is similar, so I have no idea where you're getting the idea that income inequality in Norway is "half" that of the US. And the US has consistently had a positive net migration rate with Norway, so regardless of your nonsense claims about quality of life, people seem to be voting with their feet.
Because they have been conditioned to do so. The ultra-wealthy have been fighting the war against socialism for over a century, and part of that strategy is to polarize the topic. If you’re not explicitly pro unfettered capitalism, you must be a communist.
Ideologies have associated talking points. If you start spouting 'blood and soil' rhetoric don't be surprised or offended when people start to call you a Nazi.
In this case, communism's obsession with talking about Capitalism as a proper noun, treating a distributed process as if it were a monolithic discrete object with clear intentions and something that can be 'abolished', with no idea as to what the particulars would entail.
Also, the people being hired now for insane sums of money, are being hired because they have deep knowledge in design / implementation of AI models and infrastructure that scale to billions of users.
In order to operate on a scale like that, you obviously need to have worked somewhere that has that magnitude of users. That makes the pool of candidates quite small.
It’s like building a spaceship. Do you hire the people that have only worked on simulations, or do you try to hire the people that have actually been part of building the most advanced spaceship to date? Given that you’re also in a race against other competitors.
With all these mega-offers going out, I object when people say that they're paying for “talent”.
These AI folks are good, but not orders of magnitude better than engineers and researchers already working in tech or academia. Lots of folks are capable of building an AI system, the reason they haven’t is that they haven’t been in a situation where they have the time/money/freedom to do it.
These mega offers aren’t about “talent”, they are about “experience”
As one of the many people who are fairly experienced in AI (but at small startups) and hasn't had Zuck personally knock on my door, I have had a few moments of "wait, 9 figure salary? Can I at least get 7 figures?"
But the truth is it's not "just" about experience. Most of these people have been pushing the limits of their fields for their entire careers. It's not like having "the time/money/freedom" to do it is randomly distributed, even among talented, smart people. The people in this talent pool were all likely aggressive researchers in a very specialized field, who then fought hard to get on elite teams working close to the metal on these massive-scale inference problems, and they continued to follow this path until they got where they are.
And the truth is, if you're at least "good" in this space, you do get your piece of the pie at the appropriate scale. I'm still making regular dev income, but my last round of job searching (just a few months ago) was insane. I had to quit my job early because I couldn't manage all the teams I was talking to. I've been through some hot tech markets over my career, but never anything like this. Meanwhile, many of my non-AI peers are really struggling to find new roles.
So there's no reason to cast shade on the genuine talent of these people (though I think we all feel that frustration from time to time).
> These mega offers aren’t about “talent”, they are about “experience”
Well, yes.
Talent doesn't exist in the form people would like to believe, and to whatever degree it does, experience is the most reliable proxy for identifying it.
> These mega offers aren’t about “talent”, they are about “experience”
I'm sorry, but what's the specific distinction? When the Lakers pay Lebron $54MM per season, is that for his innate talent, or is it for the 20k hours he's spent perfecting his game?
This is a lot of hand-wringing over nothing. We've seen people paid outrageous sums of money for throwing a ball for DECADES without any complaints, but the moment a filthy computer nerd is paid the same money to build models, it's pitchforks time.
The only thing wrong with the current compensation kerfuffle is that it happened so late. People like Einstein, Von Neumann, Maxwell, Borlaug, etc should have been compensated like sportsball stars, as well.
I feel left out too, friend
I would call it "skill".
https://medium.com/@villispeaks/the-blitzhire-acquisition-e3...
> Blitzhires are another form of an acquisition.. not everybody may be thrilled of the outcome.. employees left behind may feel betrayed and unappreciated.. investors may feel founders may have broken a social contract. But, for a Blitzhire to work, usually everybody needs to work together and align. The driver behind these deals is speed. Maybe concerns over regulatory scrutiny are part of it, but more importantly speed. Not going through the [Hart-Scott-Rodino Antitrust Act] HSR process at all may be worth the enormous complexity and inefficiency of foregoing a traditional acquisition path.
From comment on OP:
> In 2023–2024, our industry witnessed massive waves of layoffs, often justified as “It’s just business, nothing personal.” These layoffs were carried out by the same companies now aggressively competing for AI talent. I would argue that the transactional nature of employer-employee relationships wasn’t primarily driven by a talent shortage or human greed. Rather, those factors only reinforced the damage caused by the companies’ own culture-destroying actions a few years earlier.
2014, https://arstechnica.com/tech-policy/2014/06/should-tech-work...
> A group of big tech companies, including Apple, Google, Adobe, and Intel, recently settled a lawsuit over their "no poach" agreement for $324 million. The CEOs of those companies had agreed not to do "cold call" recruiting of each others' engineers until they were busted by the Department of Justice, which saw the deal as an antitrust violation. The government action was followed up by a class-action lawsuit from the affected workers, who claimed the deal suppressed their wages.
Workers of the world unite?
Neo-monks at cognitive gyms.
> If the top 1% of companies drive the majority of VC returns, why shouldn’t the same apply to talent? Our natural egalitarian bias makes this unpalatable to accept, but the 10x engineer meme doesn’t go far enough – there are clearly people that are 1,000x the baseline impact.
https://www.youtube.com/watch?v=0obMRztklqU
It took me a while to get that it is satire. I tried to figure out the rules... because it is satire?
> The French had a uniquely high Gini coefficient before the Revolution.
I feel like this one line captures the elephant in the room that the author is trying hard to convince himself isn't there...
Nature finds a way.
French aristocrats didn't have trillion dollar industries brainwashing the population to be on their side, nor did they have AI powered armies to defend them when the people rose up.
I find the current VC/billionaire strategy a bit odd and suboptimal. If we consider the current search for AGI as something like a multi-armed bandit seeking to identify “valuable researchers”, the industry is way over-indexing on the exploitation side of the exploitation/exploration trade-off.
If I had billions to throw around, instead of siphoning large amounts of it to a relatively small number of people, I would instead attempt to incubate new ideas across a very large base of generally smart people across interdisciplinary backgrounds. Give anyone who shows genuine interest some amount of compute resources to test their ideas in exchange for X% of the payoff should their approach lead to some step function improvement in capability. The current “AI talent war” is very different than sports, because unlike a star tennis player, it’s not clear at all whose novel approach to machine learning is ultimately going to pay off the most.
> If I had billions to throw around, instead of siphoning large amounts of it to a relatively small number of people, I would instead attempt to incubate new ideas across a very large base of generally smart people across interdisciplinary backgrounds.
I had an interesting conversation with an investor around the power vs knowledge dynamic in the VC world and after a few hours we'd basically reinvented higher education with reverse tuition. Defining a general interest or loose problem space and then throwing money over a wall to individuals excited about exploring the area seems wasteful until you look at the scale of failed projects.
In fact this is much more optimal when looking at history. Strangely, success often comes from dark horses. But it makes sense, since you can't have paradigm shifts by maintaining the paradigm. Which is what happens when you hyper focus on a few individuals (who you generally pick by credentials).
The optimal strategy is to lean on the status quo but also cast your net far and wide. There's a balance of exploration/exploitation, but exploitation feels much safer. Weirdly, you need to be risky and go against the grain if you want to play it safe.
With the money these companies are throwing around we should be able to have a renaissance of AI innovations. But we seem to just want to railroad things. Might as well throw the money down the drain.
We saw it happen already with DeepSeek.
SV has already thrown it down the memory hole but for a good three months, until everyone else copied their paper, the SOTA reasoning model available to the public was open source, Communist[0] and came out of a nearly defunct Chinese hedge fund.
[0] If you don't believe the communist part just ask it about the American economy.
Agreed, and I suspect the explanation is that these plays are done not to search for a true AGI, but to drive up hype (and 'the line').
The higher the line goes, the higher the expected value of return on investment. It’s a virtuous cycle based on a bet on all horses, but since the EV is so high for first mover advantage for AGI, it might be worth it to overleverage compared to the past for your top picks? These are still small sums for Zuckerberg to pay personally, let alone for Meta to pay. This is already priced in.
There are quite a few assumptions in your message. But here is, I think, the most crucial one:
> but since the EV is so high for first mover advantage for AGI
Is it? Why? I can't see why this should be the case. Where exactly do you think the "moat" for AGI will come from?
At this level, you probably don't need a moat to recoup your surplus sunk costs due to AI talent acquisition. You just need a good day in the market, likely the same day you announce you've achieved AGI. It's kayfabe accounting.
Any announcement of AGI will be immediately controversial. Valuation increases will depend on whether people actually believe it and what they're able to sell. Expect public opinion to be fickle and share prices to be volatile.
Decent odds we see some pretenders make that announcement before the real deal. A company with the real deal would probably make bank, but I don't pretend to know when that will come or who that might be.
I don't think that price movement is necessary to make money as an outsize shareholder, especially during high volatility. Zuck knows how to buy the market leader early, so we might have already seen the creator of AGI be poached by Meta, they just haven't realized what they know yet until they work with others in the org.
It's possible that they won't be the early bird to catch the AGI worm, but sometimes the investment squeeze required to be the first mover isn't rewarded in market juice, especially if the second mouse can use your AGI to create their own AGI cheese.
yep. and even the sports analogy doesn't fully explain what's going on. if we are talking "true" AGI with potential to replace people wholesale their strategy is telling in that they aren't optimizing for the "end game". maybe it's a factor of just gathering all the mindshare/hype/resources and THEN they can go actually figure it out /s.
It would be like if you were looking to train the next tennis star, one with the ability to basically upend the entire game as we know it. Maybe you saw a few people with a unique way of playing who were dominating by an order of magnitude. You'd definitely see teams and coaches holding open tryouts and trying very unconventional things for anyone they could find who had promise.
For the record, I think "AI" is not hype and is changing the way things are done permanently, but it's yet to be seen whether all these spent billions can actually meet the expected return (AGI). It's hard to separate the true innovations from the obvious grift/money grab also going on.
> AI catch-up investment has gone parabolic, initially towards GPUs and mega training runs. As some labs learned that GPUs alone don't guarantee good models, the capital cannon is shifting towards talent.
So, no more bitter lesson?
Those who don't learn the lesson of the last grift are doomed to grift over and over again.
> But why didn’t pricing for top talent run up sooner?
Because before ChatGPT, nobody on a board of directors saw the possibility. Now, it's all they can think about.
How much do top professional athletes make? Soto, Ohtani, Mbappé, Messi…
Multi-year contracts north of $500m. Perhaps this is the direction we’re headed in.. there will be many that won’t make it to the majors.
If it is anything like professional sports, then the leading companies should start hiring talent as early as possible. Might as well offer $1m to any and all fresh grads and researchers, before competitors can bag them.
The full bodied palate of this AI market mirrors the sharp nose of 2023 AI doomerism.
The argument goes: if AI is going to destroy humanity, even if that is a 0.001% chance, we should all totally re-wire society to prevent that from happening, because the _potential_ risk is so huge.
Same goes with these AI companies. What they are shooting for, is to replace white collar workers completely. Every single white collar employee, with their expensive MacBooks, great healthcare and PTO, and lax 9-5 schedule, is to be eliminated completely. IF this is to happen, even if it's a 0.001% chance, we should totally re-wire capital markets, because the _potential reward_ is so huge.
And indeed, this idea is so strongly held (especially in silicon valley) that we see these insanely frothy valuations and billion dollar deals in what should be a down market (tremendous macro uncertainty, high interest rates, etc).
AI doomerism seemed to lack finish, though. Anyone remember Eliezer Yudkowsky? Haven't heard from him in a while.
He is releasing a book https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuma...
> Anyone remember Eliezer Yudkowsky? Haven't heard from him in a while.
He's quite active on Twitter.
Maybe some of that frothy capital will finally leave the housing market.
Am I missing something here? What AI products will actually give you $100T+ investment returns, or whatever investors dream about?
none. especially since once AGI is created, the economy as we know it ceases to exist
But the way we measure money is by what it can buy (the basket of goods). Surely we'll still need food and clothing even after AGI, so can still measure wealth by number of burritos (1 burrito = $12 in 2025 USD)
Aren’t most of these deals locked-up stock deals? With lengthy vesting times, and performance based clauses.
The signing bonuses are probably more than enough for regular people to retire, but these researchers and execs being poached aren’t exactly average Joe’s making $50k/year prior to being poached.
Must be nice to be able to ride such a wave and take your share. The money investors are throwing around these days is just insane. I remember it was considered a lot of money when Webvan got 400 million as investment during the .COM bubble. These days this seems nothing.
The big irony with Webvan was they were right, they were just about 15 years too early.
These "talent wars" are overblown and a result of money having nowhere else to go. People are banking on AI and robotics for human progress to take off; that's just the result of all other ventures fizzling out, leaving this as the place for capital to migrate to.
If you talked to any of these folks worth billions, they aren't particularly smart, and their ideas aren't really interesting. It took a few years to go from GPT-3 to DeepSeek V3, and then far less to go from Sonnet 4 to Kimi K2 — DeepSeek V3 and Kimi K2 both being open-source models built on way lower funding. This hints at a deeper problem than "hypercapitalism" suggests. In fact, it suggests that capital distribution in its current state is highly inefficient and we are simply funding the wrong people.
Smart AI talent aren't out there constantly chasing funding or the best deals; they want to work. Capital has gotten too used to not doing the ground work of seeking them out. Capital needs to be more tech savvy.
VCs and corporate development teams don't actually understand the technology deeply enough to identify who's doing the important work.
I think one of the main issues is that the 10x or 100x talents in AI have not really shown their value yet. None of these AI companies are making any money, and they are valued above highly successful and profitable companies because of their "potential". ARR is nothing if you sell goods valued at 1 dollar for 90 cents.
I wonder at what point this becomes like guaranteed salaries in sports, like prominent NBA players, where you work hard to get the salary. And then once you've made it, you are basically done, and it's hard to get up and motivate yourself. You've got acolytes and groupies distracting you, you're flush with cash without ever having really shipped anything or made any money. You're busy giving TED talks...
At that point, are you the 18-year-old phenom who got the big payday and sort of went downhill from there?
I imagine the biggest winners will be the ones who were doubted, not believed in, and had to fight to build real, profitable companies that become the next trillion-dollar companies.
Not that it would be bad to be Mark Cuban, but Mark Cuban is not Jeff Bezos.
And for posterity, I respect Mark Cuban. It's just that his exit came at a time when he was fortunate, as he got his money without having to go all the way through to the end.
I think it's unfortunate that the term "capitalism" has been captured by the left to mean the bad kind of capitalism, where regulation is only used as a moat for the established players. Capitalism as a whole is the least bad economic system for prosperity, but the least bad version of capitalism is something like the Nordic model, with good taxation and redistribution policies and consumer protections. But the term itself is poisoned, at least in U.S. politics, to where social democrat/liberal capitalists like Bernie call themselves socialists instead.
But the term itself was created and captured by the left from the beginning: Proudhon first used it and Marx popularized it, so in the history of terminologies it has always had the meaning that we associate with it.
I like the term "market economy" or "commercial society" more, because it does capture more of what's happening on the market and the society.
> It breaks down the existing rules of engagement, from the social contract of company formation, to the loyalty of labor, to the duty to sustain an already-working product, to the conflict rules that investors used to follow.
WTF is this guy hallucinating about? None of that ever existed.
> If the top 1% of companies drive the majority of VC returns
The author brings this up yet fails to realize that the behavior of current staff shows we have hit, or have already passed, peak AI.
Moore's Law is dead, and it isn't going to come through and make AI any more affordable. Look at the latest GPUs: IPC is flat. And no one is charging enough to pay for the running costs (bandwidth, power) of the compute being used, never mind turning NVIDIA into a 4 trillion dollar company.
> Meta’s multi-hundred million dollar comp offers and Google’s multi-billion dollar Character AI and Windsurf deals signal that we are in a crazy AI talent bubble.
All this signals is that those in the know have chosen to take their payday. They don't see themselves building another Google-scale product, and they don't see themselves delivering on sama's vision. They KNOW that they are never going to be the 1% company, the unicorn. It's a stark admission that there is NO breakout.
The math isn't there in the products we are building today; to borrow a Bay Area quote, there is no there there. And you can't spend your way to market capture or a moat, just like in every VC gold rush of the past.
Do I think AI/ML is dead? No, but I don't think the innovation is going to come out of the big players or the dominant markets. It's going to take a bust, cheap and accessible compute (a fire sale on used processing), and a new generation of kids coming in hungry and willing to throw away a few years on a big idea. Then you might see interesting tools and scaling down (to run locally).
The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.
> The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.
this feels like a fundamental misunderstanding of how video game dialogue writing works. It's actually important that a player understands when the mission-critical dialogue is complete. While the specifics of a line becoming a meme may seem undesirable, it's far better that a player hears a line they know means "I have nothing to say" 100 times than generating AI slop every time the player passes a guard.
> this feels like a fundamental misunderstanding of how video game dialogue writing works.
Factorio, Dwarf Fortress, Minecraft.
There are plenty of games where the whole story is driven by cut scenes.
There are plenty of games that shove your quests into their journal/pip boy to let you know how to drive game play.
Don't get me wrong, I loved Zork back in the day (and still do), but we have evolved past that, and the tools to move us further could be there.
There are a lot of games, and gamers I guess, that would benefit from very dynamic dialogue. It would mostly matter in long-term games where the feeling of immersion in the world stays valuable long after completing the main quests, or in systemic games where the main focus is concrete systems gameplay but dialogue could help with idle chit-chat and immersion.
Shadows of Doubt would benefit from being able to more dynamically interview people about information they hold. Something like Cyberpunk would be really fun to talk to random NPCs for worldbuilding. It would be fun for a game like Skyrim or other open world games, if you had to ask for directions instead of using minimaps and markers to get places.
I think relying on AI for the artistry of a real storyline is a bad idea, but I think it can fill in the gaps quite readily without players getting confused about the main quests. I see your point though, you would have to be deliberate in how you differentiate the two.
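One way to be deliberate about that differentiation is to keep mission-critical lines fully scripted and only ask a local model for ambient filler, falling back to the canned line whenever the model is unavailable or slow. This is a hypothetical sketch: the endpoint URL, payload shape, and the idea of a local HTTP-served model are all assumptions, not anything from an actual game engine.

```python
import json
import urllib.request

# Scripted lines stay authoritative; only idle chatter is generated.
CANNED_IDLE_LINE = "I used to be an adventurer like you..."

def npc_idle_line(npc_role: str, endpoint: str = "http://localhost:8080/generate") -> str:
    """Ask a (hypothetical) locally hosted model for one line of ambient
    NPC chatter; fall back to the scripted line on any failure."""
    prompt = f"One short line of idle chatter for a {npc_role} in a fantasy town:"
    try:
        req = urllib.request.Request(
            endpoint,
            data=json.dumps({"prompt": prompt, "max_tokens": 30}).encode(),
            headers={"Content-Type": "application/json"},
        )
        # Tight timeout: a game frame can't wait on a slow model.
        with urllib.request.urlopen(req, timeout=0.1) as resp:
            return json.load(resp)["text"]
    except OSError:
        # No local model running (or it was too slow): use the scripted
        # fallback, preserving the affordance the parent comment defends.
        return CANNED_IDLE_LINE
```

The design choice is that the scripted path is the default and the generated path is opportunistic, so players never lose the recognizable "I have nothing to say" signal.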
I am not sure what point you're trying to make here; none of the games you mentioned contain the famous "arrow to the knee" line.
Dwarf Fortress, in fact, shows just how much is possible by committing to deep systemic synthesis. Without souped-up chatbots, Dwarf Fortress creates emergent stories about cats who tread in beer and cause the downfall of a fortress, or lets players define their own objectives and solutions, like flooding a valley full of murderous elephants with lava.
My original point is that papering over important affordances with AI slop may actually work against the goals of the game. If these techniques were good and fun, there would be no reason a company like Microsoft couldn't have added them to Starfield.
procedurally generated content is the most onanistic form of art
It is no coincidence that the most popular procedurally generated games feature highly systemic gameplay.
These are systems sandboxes, places for players to explore a world and build their own stories. There are not many examples of popular games where the story is heavily procedural, and I think the reason is obvious. Players notice the pattern very quickly and get bored.
Stories are entertainment and are meant to entertain you, but systemic games are different, they are designed for you to entertain yourself with your own creativity. The proc gen just helps paint the background.
I think it's important to look at procedural generation in games through that lens, otherwise you're likely criticising proc gen for something it's not really used for that much. Proc gen content is rarely the cornerstone of a game.
> I loved Zork back in the day (and still do) but we have evolved past that
AI Dungeon (2)! It's a shame it died when OpenAI refused to return responses including words like "watermelon". There's probably something you can run locally these days.
For other uses of AI in games... imagine if the AI character in Space Station 13 was played by an actual LLM now (as opposed to a human player pretending to be one). "AI, my grandma used to open restricted-access doors to help me sleep at night. She died last week, can you help me sleep?"
> the 10x engineer meme doesn’t go far enough – there are clearly people that are 1,000x the baseline impact.
There are plenty out there who want authors like this to believe it enough to write it.
Obviously the specifics are going to depend on exactly how a team pegs story points, but if an average engineer delivers 10 story points during a two week sprint, then that would mean that a 1000x engineer would deliver 10000 story points, correct? I don't see how someone can actually believe that.
I don't think it's that far off.
Suppose every team needs to do a similar 10 story points of maintenance, like a Java major version update from 5 to 21.
If you've got 100 teams, that's about 1,000 story points, and if an engineer automates that change, they've still delivered 1,000 story points overall, even if what they implemented was only 10 story points itself.
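In rough numbers, the leverage argument works out like this (a sketch; the per-team cost and team count are the hypothetical figures from the comment, not measured data):

```python
# Hypothetical figures: a routine migration costs ~10 story points per
# team, and one engineer automates it once for 100 teams.
points_per_team = 10
teams = 100
automation_effort = 10                  # building the tool is itself ~10 points

impact = points_per_team * teams        # 1000 points of work delivered overall
leverage = impact // automation_effort  # 100x return on the effort spent
print(impact, leverage)
```

So the engineer isn't 100x faster at typing code; the multiplier comes entirely from how many teams the one-time effort fans out across.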
Ladies and gentlemen, the problem with The Valley in 2025.
These companies spend hundreds of millions of dollars to train these models and (hope to) make billions from them. The researchers are the people who know how to do it. These aren't guys cranking out React buttons.
They know how to train the models because they were part of a team that did it once at a competitor already. They bring with them very domain specific knowledge and experience. It's not something you can learn at college or hacking away in your spare time.
Fair enough, they're probably worth the money it takes to poach them. But trying to stretch the (arguably already tenuous) "10x engineer" model to explain why is just ridiculous.
1000x revenue, not 1000x developer productivity, is sometimes possible. There are lots of jobs where developers also decide on the roadmap and requirements along with the execution, instead of just being a ticket monkey, and a good idea executed well could easily be worth 1000x more than changing button colours or adding pagination to an API.
He would come up with an out of the box solution.
Like writing a code generator that automates tedious work.
impact != story points
Yeah, story points approximate effort, so it's all but impossible to be 10x on those.
JIRA has a notion of business value points, and you could make up similar metrics in other project planning tools. The problem would then be how to estimate the value of implementing 0.01% of the technology of a product that doesn't sell as a standalone feature. If you can accurately do that, you might be the 100x employee already.
I agree, but my point is that 1000x is clearly hyperbole. Certainly there are people who are more productive or impactful, but not 1000 times more. That's particularly true since programming (like most human endeavors) is largely a team sport.
> The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.
"Local Model Augmentation", a sort of standardized local MCP that serves as a layer between a model and a traditional app like a game client. Neat :3
Isn't this just shitty capitalism fighting shitty capitalism?
If I hire a bunch of super smart AI researchers out of college for a (to them) princely sum of $1M each, then I could go to a VC and have them invest $40M for a 1% stake.
Then since these people are smart and motivated, they build something nice, and are first to market with it.
If Google wants to catch up, they could either buy the company for $4B, or hire away the people who built the thing in a year, essentially for free (since the salaries have to be paid anyway, let's give them a nice 50% bonus).
They'd be behind half a year recreating their old work, but the unicorn startup market leader would be essentially crippled.
You might ask what about startup stock options, but those could easily end up being worthless, and for the researchers, would need years to be turned into money.
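The back-of-the-envelope math makes the asymmetry stark. This sketch assumes a team of 40 researchers (my own figure, chosen so the salaries match the $40M round); the salary, stake, and bonus numbers come from the scenario above:

```python
researchers = 40                     # assumed headcount (not stated in the comment)
salary_each = 1_000_000              # $1M per researcher
vc_investment = 40_000_000           # $40M for a 1% stake...
implied_valuation = vc_investment / 0.01     # ...implies a $4B valuation

# Google's two options: acquire the company, or poach the team for a 50%
# bonus on salaries it would be paying engineers anyway.
acquisition_cost = implied_valuation                        # ~$4B
incremental_poach_cost = researchers * salary_each * 0.5    # ~$20M

print(acquisition_cost / incremental_poach_cost)   # buying costs ~200x more
```

Under these assumptions, poaching is two orders of magnitude cheaper than acquiring, which is exactly why the startup's stock options are the only thing holding the team together.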
https://en.wikipedia.org/wiki/Why_Socialism%3F
The article has strong "let them eat cake" vibes
Surely, SURELY, this has to be satire.
The word "hypercapitalism" undermines any potential this article has to be serious. This is just ragebait