For me this feels irrelevant. These tools are marketed to developers for their day-to-day jobs, which involve building products. Devs don't look up information on people or solve complex mathematical problems daily. They build things that consist of different parts, which in turn can consist of different contexts and be combinations of other things as well. It can be a straightforward approach, or it can be a legacy codebase that also needs to incorporate new features with new stacks. The real test is in real-world scenarios. But every time, the tests and the marketing are about some narrowly scoped thing. And they try to build an image that the combination of these scoped tasks can somehow give you the ability to build at large scale. They don't say it, but they implicitly mean it in the way they present all this. Computers can compute, they can detect patterns and do the analytics part, they can build assumptions based on the data they have. But they need the data, they need parameters, they need not only an operator but also a source for the material they base their computations and output on. And somehow all the marketing completely ignores this fact. And this is damaging.
As each new tool drops, it makes me wonder if I should convert. Currently I mostly code in VS Code or chat with Claude Code, but I don't really mix the two even though I know I can with the Claude add-on. My gf uses Colab with Gemini and it seems rather spiffy for data science. And now Antigravity. I just wonder when it will end and devtools will slow down their development cycle a bit.
Personally, I wonder if I should switch careers
LLMs do take a lot of the joy out of it.
I know what you mean, but LLMs are just a tool. Probably the joy is actually taken out by some form of pressure to use them even when it doesn't make sense, like commercial/leadership pressure.
To what, something regulated?
Idk, something that needs arms and legs, and still a bit technical.
look into subsistence farming
I'm wondering if I should move to a remote village.
I’ve been learning how to be a bike mechanic
I tried vscode with copilot again after a year of cursor.
It’s faster, but the LLM aspect is unusable. The diffing is still slow as molasses and the chat is very slow too. LLM-wise it’s a joke vs Cursor. But it is less laggy (because Cursor is basically vibe-coded crapware that ships multiple bug fixes per day for the errors they introduce into production with every single release)
> The diffing is still slow as molasses
I've seen a lot less of that in the last couple of weeks. My understanding is that when the main model spits out a diff that somehow doesn't apply cleanly, a cheap model is invoked to 'intelligently' apply it, so it shouldn't normally happen.
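To make the mechanism concrete: this is a minimal sketch of the two-stage apply flow described above, not Cursor's actual implementation. All names (`apply_exact`, `apply_with_fallback`, `fuzzy_apply`) are hypothetical; the idea is just that a fast exact patch is tried first, and a cheaper model is only invoked when the diff doesn't match the file verbatim.

```python
# Hypothetical sketch: exact diff apply with a cheap-model fallback.

def apply_exact(source: str, old: str, new: str):
    """Apply an edit only if the target snippet occurs exactly once."""
    if source.count(old) != 1:
        return None  # stale or ambiguous diff: exact apply fails
    return source.replace(old, new)

def apply_with_fallback(source: str, old: str, new: str, fuzzy_apply):
    # Fast path: the main model's diff applies cleanly as-is.
    result = apply_exact(source, old, new)
    if result is not None:
        return result
    # Slow path: delegate to a cheaper "apply model" (stubbed here as a
    # callable) that rewrites the file to incorporate the intended change.
    return fuzzy_apply(source, old, new)

# Toy usage with a trivial stand-in for the cheap model:
src = "def add(a, b):\n    return a+b\n"
patched = apply_with_fallback(
    src, "return a+b", "return a + b",
    fuzzy_apply=lambda s, o, n: s,  # no-op stand-in, never reached here
)
```

When the snippet matches exactly once, the fallback callable is never invoked, which is presumably why the slow path has become rarer as the main models emit cleaner diffs.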
Such a dumb name for an IDE, damn...
jetbrains, hadoop, kafka (kaka!?), mongodb... git... it could've been worse.
this... is not very good for an hour? i would expect an undergrad to be able to cook this up in an hour
they'd cook it up in an hour and then spend the next 2 days stuck on some stupid issue. not so with ai tools.