mjburgess 20 hours ago

It feels, more and more, that LLMs will be another technology that society will inoculate itself against. It's already starting to happen in education: teachers conversing with students, observing them learn, observing them demonstrate their skills. In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people -- as authors of what they want to say. Authoring is two-thirds of the point of most communication.

Before this, of course, there will be a dramatic "shallowness of thinking" shock whose ill-effects will have to be felt before they are properly inoculated against. It seems part of the expert aversion to LLMs -- against the credulous lovers of "mediocrity" (cf. https://fly.io/blog/youre-all-nuts/) -- is an early experience of inoculation:

Any "macroscopic usage" of LLMs has, in any of my projects, dramatically impaired my own thinking, stolen decision-making, and worsened my readiness for necessary adaptations later on. LLMs are a strictly microscopic fill-in system for me, in anything that matters.

This isn't like calculators: my favourite algorithms for by-hand computation aren't being "taken away". This is a system for substituting thinking itself with non-thinking, and it radically impairs your readiness (and depth, adaptability, ownership) wherever it is used, on whatever domain you use it on.

  • codeduck 19 hours ago

    > In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people

    I believe that one of the most underappreciated skills in business is the ability to string a coherent narrative together. I attend many meetings with extremely talented engineers who are incapable of presenting their arguments in a manner that others (both technical and non-technical) can follow. There is an artistry to writing and speaking that I am only now, in my late forties, beginning to truly appreciate. Language is a powerful tool; the choice of a single word can sometimes make or break an argument.

    I don't see how LLMs can do anything but significantly worsen this situation overall.

    • bonoboTP 19 hours ago

      > I believe that one of the most underappreciated skills in business is the ability to string a coherent narrative together. I attend many meetings with extremely talented engineers who are incapable of presenting their arguments in a manner that others (both technical and non-technical) can follow.

      Yes, but the arguments they need to present are not necessarily the ones they used to convince themselves, or their own reasoning history that made them arrive at their proposal. Usually that is an overly boring graph search like "we could do X but that would require Y which has disadvantage Z that theoretically could be salvaged by W, but we've seen W fail in project Q and especially Y would make such a failure more likely due to reason T, so Y isn't viable and therefore X is not a good choice even if some people argue that Y isn't a strict requirement, but actually it is if we think in a timeline of several years and blabla" especially if the decision makers have no time and no understanding of what the words X, Y, Z, W, Q, T etc. truly mean. Especially if the true reason also involves some kind of unspeakable office politics like wanting to push the tools developed by a particular team as opposed to another or wanting to use some tech for CV reasons.

      The narrative to be crafted has to be tailored to the point of view of the decision maker. How can you make your proposal look attractive relative to their incentives and their career goals? How will it make them look good and avoid risks of trouble or bad optics? Is it faster? Does it allow them to use sexy buzzwords? Does it line up nicely with the corporate slogan this quarter? For these you have to understand their context as well. People rarely announce these things, and a clueless engineer can step on the toes of people who will not squarely explain the real reason for their pushback; they will make up some nonsense, and the clueless guy will think the other person is just too dumb to follow the reasoning.

      It's not simply about language-use skills, as in wordsmithing; it's also strategizing and putting yourself in other people's shoes, trying to understand social dynamics and how they interact with the detailed technical aspects.

      • mjburgess 19 hours ago

        To give a brief example of this -- a colleague recently asked why an exec had listened to my argument but not theirs, despite "saying the same thing". I explained that my argument contained actual impacts: actual delays, actual costs, an actual timeline for when the impact would occur -- rather than a nebulous "there will be problems".

        Everyone comes to execs with hypothetical problems that all sound like people dressing up minor issues -- unless you can give specific details, justifications, etc., they're not going to parse it properly.

        This would be one case where a person asking an LLM for help is not even aware of the information they lack about the person they're trying to talk to.

        We could define expertise this way: that knowledge/skill you need to have to formulate problems (and questions) from a vague or unknown starting point.

        Under that definition, it becomes clear why LLMs "in the large" pose problems.

        • bonoboTP 18 hours ago

          I don't know. Predicting delays, costs and timelines is notoriously hard unless it's something you've done the exact same way many times already. For example in physical work, like installing external insulation on a building, a contractor can fairly easily predict the time required because they did similar buildings in the past several years, it's multiplying an area by a time average, and they know the delay caused by asking for some material by checking the shipping time on the website they order it from.

          Developing software is very different and many nontechnical execs still refuse to understand it, so the clever engineers learn to make up numbers because that makes them look good.

          Realistically, you simply came across as more competent, and the exec compressed all that talk about the details into "this guy is quite serious about not recommending going this way -- whatever their true reason and gut feel, it's probably okay to go their way; they are a good asset in the company, and I trust that someone who can talk like this is able to steer things to success". And the other guy got "this guy seems to actively hide his true reasons, and is kind of vague and unconfident, perhaps lazy or a general debbie downer; I see no reason to defer to him."

          • mjburgess 18 hours ago

            I think there's an element to that -- I also said it's about trust and credibility. However in this case it was partly about helping the exec cognise the decision and be aware that he needs to make a decision, basically scaffolding the decision-making process for the exec.

            It's kinda annoying for decision-makers to be presented with what sounds like venting. This is something I've done before, in much worse ways actually -- even venting on a first-introduction handshake meeting. But I've learned how to boil that down into decision-making.

            I do find it very annoying, still, how people are generally unwilling to help you explore your thinking out loud, and want to close it down to "what's the impact?" "what's the decision?" -- so I sympathise a lot with people unable to do this well.

            I often need to air unformulated concerns, and it's a PITA to have people say, "well, there's no impact to that" etc.: yeah, that isn't how experts think. Experts need to figure out how even to formulate mental models of all possible impacts, not just the ones you care about.

            This is a common source of frustration between people whose job is to build (mental, technical, ...) models and people whose job is to manage social systems.

            • bonoboTP 18 hours ago

              I think nontechnical execs have a mental model of technical expertise in which there's some big rule-book lookup table that you learned in college and that allows you to make precise, quantified, authoritative statements about things.

              But of course the buck has to stop somewhere. By being definitive, you as the expert also give ammo to the exec. Maybe they already wanted to go that certain way, and now they can point to you and your mumbo jumbo as the solid reasoning. Kind of like how consultants are used.

    • bsenftner 17 hours ago

      This entire thread of comments keeps circling around, but does not know how to articulate, the omnipresent communication issues within tech, because effective communication is not taught in tech -- not taught anywhere in the science, engineering, math and technology series. The only communications training people receive is how to sell, how to do lite presentations.

      There absolutely is a great way to use LLMs when writing, but not to write! Have them critique what you wrote, but not write for you. Create a writing-professor persona, create a writing critic, and make them offer Socratic advice where they draw you to make the connection; they don't think for you, but teach you.

      There has been a massive disservice to the entire tech series of professions by ignoring the communications, interpersonal and group-communication dynamics of technology development. It is not understood, and not respected. (Many developers will deny the utility of communication skills! They argue against being understood; "that is someone else's job.") Fact of the matter: a quality communicator leads, simply because no one else conveys understanding; without the skills they leave a wake of confusion and disgruntled staff. Competent communicators know how to write to inform, know how to debate to shared understanding, know how to defuse excited emotion, and know how to give bad news and be thanked for the insight.

      Seriously, effective communications is a glaring hole in your tech stack.

    • je42 19 hours ago

      I find LLMs extremely good for training such language skills, using the following process:

      a) write a draft yourself.

      b) ask the LLM to correct your draft and make it better.

      c) newer LLMs will explicitly mention the things they corrected (otherwise, ask them to be explicit about the changes)

      d) walk through each of the changes and apply the ones you feel make the text better

      This has helped me improve my writing skills drastically (in multiple languages), compared to the times when I didn't have access to LLMs.
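
      Steps (b) and (c) can even be scripted. A minimal sketch, assuming the OpenAI Python client -- the model name and file path are placeholders, not recommendations:

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        # step (a): the draft stays fully yours
        with open("draft.txt") as f:
            draft = f.read()

        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder: use whichever model you have access to
            messages=[
                {"role": "system",
                 "content": "You are a writing coach. Improve the user's draft, "
                            "then list every change you made and why, one per line."},
                {"role": "user", "content": draft},
            ],
        )

        # steps (c)+(d): read the listed changes, apply only those you agree with
        print(response.choices[0].message.content)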

      • bayindirh 18 hours ago

        I use Grammarly sometimes to check my more serious texts, but there's a gotcha: if you accept all of its stylistic choices, your writing becomes very sterile and devoid of any soul.

        Your word and structural choices add a flair of their own and make something truly yours and unique. Don't let the tool kill that.

      • darkwater 18 hours ago

        Done this as well. But after the initial "wow!" moment, the "make it better" part became an "actually, I don't like how you wrote it; it doesn't sound like me".

        There is a thin line between enhancing and taking over, and IMO the current LLMs cross it most of the time.

        • bsenftner 17 hours ago

          Your mistake was having the AI rewrite at all. Don't do that; that is exactly the problem with them -- that is them thinking for you. Ask the AI how well you wrote it, make the AI a writing professor, make the AI adopt a Socratic attitude that does not do anything but draw you to make the connection yourself.

  • jddj 19 hours ago

    > another technology that society will inoculate itself against

    I like the optimism. We haven't developed herd immunity to the 2010s social media technologies yet, but I'll take it.

    • Herring 13 hours ago

      The obesity rate is still rising in most areas worldwide. I'd argue we still haven't developed herd immunity to gas-powered automobiles, invented in the early-to-mid 1800s.

  • hansmayer 19 hours ago

    It's all already there. When you converse with a junior engineer about their latest and greatest idea (over a chat platform), and they start giving you real-time responses which are a page long and structured into bullet points... it's not even that they are using ChatGPT to avoid thinking; it's that they think either no one will notice, or that this is how grown-ups actually converse with each other. That is terrifying.

  • Ekaros 20 hours ago

    > I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people

    Now someone like me might go and ask: how much of communication is actually worthwhile? Sometimes I suspect that a lot of communication is not. It is still done, but if no one actually reads it, why not automate its generation?

    Not to say there isn't a significant amount of stuff you actually want to get right.

    • mjburgess 19 hours ago

      It's not about getting it right, it's about having thought about it. Authoring means thinking-thru, owning, etc.

      There's a tremendous hollowing-out of our mental capacities caused by the computer-science framing of activities in terms of input->output, as if the point were to obtain the output "by any means".

      It would not matter if the LLM gave exactly the same output as you would have written, and always did. Because you still have to act in the world with the thoughts you would have had in authoring it.

      • supriyo-biswas 19 hours ago

        > It's not about getting it right, it's about having thought about it. Authoring means thinking-thru, owning, etc.

        So much this.

        At my current workplace, I was asked to write up a design doc for a software system. The contents of the document itself weren't very relevant, as the design deviated significantly based on constraints and feedback that could be discovered only after beginning the implementation, but it was the act of putting together that document, thinking about the various cases, etc., that led to the formation of a mental model that helped me work towards delivering that system.

    • kibibu 20 hours ago

      > It is still done, but if no one actually reads it, why not automate its generation?

      There's a reason the real-estate industry has been able to go all-in on using AI to write property listings with almost no consumer pushback (except when those listings include hallucinated schools).

      We're already used to treating them with skepticism, and nobody takes them at face value.

  • Al-Khwarizmi 18 hours ago

    > In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people -- as authors of what they want to say.

    But what fraction of communication is "worthwhile"?

    I'm an academic, which, in theory, should be one of the jobs that requires the most thinking. And still, I find that over half of the writing I do is things like all sorts of reports, grant applications, ethics/data-management applications, recommendation letters, bureaucratic forms, etc. -- which I wouldn't class as "worthwhile" in the sense that they don't require useful thinking, and I don't care one iota whether the text sounds like me or not as long as I get the silly requirement done. For these purposes, LLMs are a godsend, and they probably actually help me think more, because I can devote more time to actual research and teaching, which I do in person.

    • mjburgess 17 hours ago

      Well if you want a rant about academia, I have many well prepared.

      I think in the cases you describe the "thinking" was already purely performative, and what LLMs are doing is a kind of accelerationist project of undermining the performance by automating it.

      I'm somewhat optimistic about this kind of self-destructive LLM use:

      There are a few institutions where these purely performative pseudo-thinking processes exist -- ones insensitive to the "existential feedback loops" which would otherwise burn them down. I'm hopeful LLMs become a wildfire of destruction in these institutions and that, absent external pressures, they return to actual thinking over the performative.

  • jstummbillig 20 hours ago

    I see it as more of a calibration, revolving around understanding what an AI is inherently not able to do – decide what YOU want – and stopping being weird about that. If you choose to stop being involved in a process and molding it, then your relationship to that process and the outcome will necessarily change. Why would we be surprised by that?

    As soon as we stop treating AI like mind readers things will level out.

  • sesm 20 hours ago

    One of the effects on software development is: the fact that you submitted a PR with any LoC count doesn't mean that you did any work. You need to explain your solution and answer questions to prove that.

    • mjburgess 20 hours ago

      The next stage of this issue is: how do you explain something you didn't write?

      The LLM-optimist view at the moment, which takes on board the need to review LLMs, assumes that this review capability will exist. I cannot review LLM output on areas outside of my expertise. I cannot develop the expertise I need if I use an LLM in-the-large.

      I first encountered this issue about a year ago when using an LLM to prototype a programming-language compiler (a field I knew quite well anyway) -- and realised that very large decisions about the language were being forced by the LLM's implementation.

      Then, over the last three weeks, I've had to refresh my expertise in some areas of statistics and realised much of my note-taking with LLMs has completely undermined this process -- the effective actions have been, in follow-up, the traditional methods: reading books, watching lectures, taking notes. The LLM is only a small time-saver "in the small", once I'm an expert. It's absolutely disabling as a route back to expertise.

      • kibibu 20 hours ago

        IMO we are likely in a golden era of coding LLM productivity, one in which the people using them are also experts. Once there are no coding experts left, will we still see better productivity?

        • victorbjorklund 19 hours ago

          Yeah, and how are people going to learn when the answer is just a chat away? I know it would have been hard for me to learn programming if I knew I could just ask for the solution every time (no, Stack Overflow does not count, because most people don't ask a question for every single issue they encounter like they do with AI).

          • ookblah 19 hours ago

            this was the same criticism against SO at the time too. people who want to learn and put in the effort will learn even faster with AI (asking, exploring the answers, etc.) and those who use it as a crutch will be left behind, as always. we're just in the confusing interim stage.

    • darkwater 20 hours ago

      An explanation that the smartypants and some management are already totally willing to outsource to an LLM as well...

  • christophilus 17 hours ago

    > This is a system for substituting thinking itself with non-thinking

    I haven't personally felt this to be the case. It feels more like going from thinking about nitty-gritty details to thinking more like the manager of unreasoning savants. I still do a lot of thinking -- about organization, phrasing (of the code), and architecture. Conversations with AI agents help me tease out my thinking, but they aren't a substitute for actual thought.

  • barrell 19 hours ago

    It's been my experience that most people's opinions on AI are inversely proportional to how long they have been using it.

    Using AI is kind of like having a Monica closet. You just push all the stuff you don't know to the side until it's out of view. You then think everything is clean, and can fool yourself into thinking so for a while.

    But then you need to find something in that closet and just weep for days.

  • eru 19 hours ago

    What you say might be true for the current crop of LLMs. But it's rather unlikely their progress will stop here.

  • CuriouslyC 18 hours ago

    Shallow take. LLMs are like food for thought -- the right use in the right amounts is empowering, but too much (or uncritical use) and you get fat and lazy, metaphorically speaking.

    You wouldn't go around crusading against food because you're obese.

    Another neat analogy is to children who are too dependent on their parents. Parents are great and definitely help a child learn and grow but children who rely on their parents for everything rather than trying to explore their limits end up being weak humans.

    • mjburgess 18 hours ago

      > You wouldn't go around crusading against food because you're obese.

      Every eatery I step into fills me with revulsion at the temple to sugary carbohydrates it has become.

      > about 40.3% of US adults aged 20 and older were obese between 2021 and 2023

      Pray your analogy to food does not hold, or else we're on track for 40% of Americans acquiring mental disabilities.

      • CuriouslyC 17 hours ago

        Oh, we for sure are, because much like America's social structure pushes people to obesity with overwork and constant stress, that same social structure will push people to use AI blindly to keep up with brutal quotas set by their employers.

    • NilMostChill 18 hours ago

      Shallow take.

      Your analogies only work if you don't take into account that there are different degrees of utility/quality/usefulness of the product.

      People absolutely crusade against dangerous food, or even just food that has no nutritional benefit.

      The parent analogy also only holds up on your happy path.

      • CuriouslyC 17 hours ago

        You've just framed your own argument. In order to be intellectually consistent, you can't crusade against AI in general, but rather bad uses of AI, which (even as an AI supporter) is all I've asked anti-AI folks to do all along.

        • NilMostChill 15 hours ago

          I'm aware of my own perspective; I don't generally crusade against whatever flavour of machine learning is being pushed currently.

          I was just pointing out that arguing against crusading by using an argument (or analogies) that leaves out half of the salient context could be considered disingenuous.

          The difference between:

          "You're using it incorrectly"

          vs

          "Of the ones that are fit for a particular purpose, they can work well if used correctly."

          Perhaps I'm just nitpicking.

  • lvl155 19 hours ago

    Sad reality is that most people are not smart. They’re not creative, original, or profound. Think back to all the empty and pointless convos you had prior to AI or the web.

    • weatherlite 18 hours ago

      I don't see it as sad; it's perfectly fine to be mediocre. You can have a full, rich life without being or doing anything extraordinary. I am mediocre and most of the people I know are mediocre -- at least in the sense that there will be no Wikipedia page under my name.

    • bayindirh 19 hours ago

      I strongly disagree with this idea.

      If you evaluate a fish by asking it to climb a tree, it'll look dumb.

      If you evaluate a cat by asking it to navigate an ocean to find its birthplace, it'll look dumb, too.

  • supriyo-biswas 19 hours ago

    > against the credulous lovers of "mediocrity" (cf. https://fly.io/blog/youre-all-nuts/)

    I read that article when it was posted on HN, and it's full of bad-faith interpretations of the various objections to using LLM-assisted coding.

    Given that the article comes from a person whose expertise and viewpoints I respected, I had to run it by a friend, who suggested a more cynical interpretation: the article might have been written to serve the author's selfish interests. Given the number of bugs that LLMs often put in, it's not difficult to see why a skilled security researcher might be willing to encourage people to generate code in ways that lead to cognitive atrophy, and thereby increase his business through security audits.

    • mjburgess 19 hours ago

      More charitably, it's a person yet to feel the disabling phase of using an LLM.

      If he's a security researcher, then I'd imagine much of his LLM use is outside his area of expertise. He's probably not using it to replace his security research.

      I think the revulsion to LLMs by experts comes during that phase when it's clearly mentally disabling you.

    • raesene9 19 hours ago

      Now, I'm a fairly cynical person by trade, but that feels like it's straying into conspiracy-theory territory.

      And of course the key point is that the author of that article isn't (IMO) working in the security-research field any more; they work at fly.io on the security of that platform.

  • jrflowers 20 hours ago

    > This is a system for substituting thinking itself with non-thinking

    One of my favorite developments on the internet in the past few years is the rise of the “I don’t think/won’t think/didn’t think” brag posts

    On its own it would be a funny internet-culture phenomenon, but paired with the fact that you can't confidently assume that anybody even wrote what you're reading, it is hilarious.

    • mijoharas 20 hours ago

      > One of my favorite developments on the internet in the past few years is the rise of the “I don’t think/won’t think/didn’t think” brag posts

      Sorry, I can't immediately think of what you're talking about. Could you link to an example so I can get a feel for it?

      • jrflowers 19 hours ago

        Every time you have ever seen somebody weigh in on an even slightly complex topic with a post that starts with "I asked ChatGPT and"

        • daemin 18 hours ago

          That just signals to me to completely ignore the rest of what they wrote.

        • mijoharas 16 hours ago

          Oh god, I find it maddening! Got it.

  • antithesizer 20 hours ago

    > Authoring is two-thirds of the point of most communication.

    Not when there's money to be made.

jsrozner 2 days ago

I wouldn't call it "accumulation of cognitive debt"; just call it cognitive decline, or loss of cognitive skills.

And also DUH. If you stop speaking a language you forget it. The brain does not retain information that it does not need. Anybody remember the couple of studies on the use of Google Maps for navigation? One was "Habitual use of GPS negatively impacts spatial memory during self-guided navigation"; another reported a reduction in gray matter among maps users.

Moreover, anyone who has developed expertise in a science field knows that coming to understand something requires pondering it, exploring how each idea relates to other things, etc. You can't just skim a math textbook and know all the math. You have to stop and think. IMO it is the act of thinking which establishes the objects in our mind such that they can be useful to our thinking later on.

  • vishnugupta 2 days ago

    > You can't just skim a math textbook and know all the math. You have to stop and think.

    And most importantly you have to write. A lot. Writing allows our brain to structure our thinking. Enables us to have a structured dialogue with ourselves. Explore different paths. Thinking & pondering can only do so much and will soon reach their limits. Writing, on the other hand, enables one to explore thoughts nearly endlessly.

    Given that thinking is so intimately associated with writing (could be prose, drawing, equations, graphs/charts, whatever) and that LLMs are doing more and more of the writing, it'll be interesting to see the effect of LLMs on our cognitive skills.

    • larodi 2 days ago

      The impact of writing is immensely undervalued. Even writing with a keyboard or screen is a lot better than not writing. Exercising writing on any topic is still beneficial, and you can find many psychologists who recommend keeping a daily blog of some sort to help people observe themselves from the outside. The same goes for speaking -- public speech, if you want -- and therapeutic daily acting-playing, which is also overlooked.

      I'd love to see some sort of study on people who actively participate by writing their stuff on social media and those who don't.

      If you want to spare your mind from GPT numbness -- write, or copy what it tells you to do, by hand; do not abandon this process.

      Or just write code, programs, essays, poems for fun. Trust me -- it is fun, and you'll get smarter and more confident. GPT is a very dangerous convenience gadget; it is not going away, much like sugar or Netflix, or obesity or long commutes... but, similarly, dosage and countermeasures are essential to cope with the side effects.

      • QuantumGood a day ago

        Similarly, the impact of whiteboarding-type activities is undervalued. When discussing problems where viewpoints differ, a quick whiteboard session usually gets at some easy-to-find underlying issues that others can understand, rather than the discussion devolving into positional framings.

      • ToucanLoucan 2 days ago

        The only writing I've ever used ChatGPT for is writing I openly don't give a shit about, and even then I constantly find myself prompting it to write less because holy shit do LLMs love to go on and on and on.

        Like, not only do I cosign all said above, but I will also add to this: brevity is the soul of wit, and none of these fucking things are brief. No matter what you ask for, you end up getting paragraphs of shit to communicate even basic ideas. It's hard not to think this tool was designed from the get-go to automate high-school book reports.

        I would only use these programs to either create these overly long, meandering stupid emails, or to digest ones similarly sent to me, and make a mental note to reduce my interactions with this person.

        It's no wonder the MBA class is fucking thrilled with it though, since the vast majority of their jobs seem to revolve around producing and consuming huge reports containing vacuously little.

        • metalman 20 hours ago

          not all humans are brief, and not all situations are amenable to brevity, but I get the point, as brevity can be exceptionally informationally dense. But, like in humor (or sports), it only works if someone else plays the straight guy or set-up artist. Also, true masters will switch up: happy to join in general blather, and then drop a subtle, brief comment that is the bridge piece for an otherwise huge informational set. Another thing many performers and writers describe is the finding of the voice, or stage/writing persona... perhaps quite different from the one that they inhabit at home. The topic at hand leaves out the trap of standing behind a persona that the person can't then inhabit, and then being caught out in a real-world situation as an imposter, ha!

    • supriyo-biswas 2 days ago

      > And most importantly you have to write. A lot. Writing allows our brain to structure our thinking.

      There's a lot of talk about AI-assisted coding these days, but I've found similar issues where I'm unable to form a mental model of the program when I rely too much on these tools (amongst other issues, such as the model making unnecessary changes, etc.). This is one of the reasons why I limit their use to "boring" tasks like refactoring or clarifying concepts that I'm unsure about.

      > it'll be interesting to see the effect of LLMs on our cognitive skills.

      These discussions remind me a lot about this comic[1].

      [1] https://www.monkeyuser.com/2023/deprecated/

    • fatnoah 2 days ago

      > And most importantly you have to write. A lot. Writing allows our brain to structure our thinking. Enables us to have a structured dialogue with ourselves.

      I feel like it goes beyond writing to really any form of expressing this knowledge to others. As a grad student, I was a teaching assistant for an Electrical Engineering class I had failed as an undergrad. The depth of understanding I developed for the material over the course of supporting students in the class was amazing. I transitioned from "knowing" the material and equations to being able to generate them all from first principles.

      Regardless, I fully agree that using LLMs as our form of expression will weaken both the ability to express ourselves AND the ability to develop deep understanding of topics as LLMs "think" for us too.

    • p_v_doom 2 days ago

      Writing is pure magic. It allows so much reflection and so many insights that you wouldn't otherwise get. And writing as part of the reading process allows you to directly integrate what you are reading as you are doing it. Can't recommend it enough. The only downside is that it's slow compared to what people are used to and want to do, especially in the work environment.

    • Davidzheng 2 days ago

      I disagree with this take. When exploring new math problems, it's often possible to explore the possible solution paths at lower technical levels in your mind first, before writing anything down -- before actually going into the details of an approach. I don't think not writing is that limiting if all of your approaches already fail before going into details, which is often the case in the early stages of math research.

      • hamdouni 2 days ago

        I can also explore by writing. Writing drafts can help structure my thinking.

        • hyper57 2 days ago

          "The pen is an instrument of discovery rather than just a recording implement." ~ Billy Collins

    • Aeolun 2 days ago

      > And most importantly you have to write. A lot.

      I find this to still be true with AI-assisted coding, especially when I still have to build a map of the domain.

    • SissyphusXOXO a day ago

      > And most importantly you have to write. A lot. Writing allows our brain to structure our thinking.

      Not to be pedantic, but I'd still argue that thinking is the most important, at least when understanding the nature of learning. I mean, writing is ultimately great because it facilitates high-quality thinking. You essentially say this yourself.

      Overall, I think it’s more helpful to understand the learning process as promoting high quality thinking (encoding if you want to be technical). This sort of explains why teaching others, argumentation, mind-mapping, good note-taking, and other activities and techniques are great for learning as well.

    • tom_m a day ago

      They made a documentary about this actually. You can probably find it on Netflix or something. It's called Idiocracy.

    • dr_dshiv 2 days ago

      Prompting involves more than an insignificant amount of writing.

      • delusional 2 days ago

        But it is not at all the same _type_ of writing. Most of the prompts I've seen and written are shorter, less organized, and most importantly not actually considered a piece of writing. When you are writing a prompt you are considering how the machine will "interpret" it and what it will spit back; you're not constructing an argument. Vagueness or dialectics in a prompt will often just confuse the machine.

        Hitting the keys is not always writing.

        • dr_dshiv 2 days ago

          Prompting is prewriting — which is very important and often neglected. With it, you are:

          * Describing the purpose of the writing

          * Defining the format of the writing

          * Articulating the context

          You are writing to figure out what you want.

  • teekert 2 days ago

    I would call it cognitive debt. Have you ever tried writing a large report with an LLM?

    It's very tempting to let it write a lot, let it structure things, let it make arguments and visuals. It's easy to let it do more and more... And then you end up with something that is very much... Not yours.

    But your name is on it, you are asked to explain it, to understand it even better than it is written down. Surely the report is just a "2D projection" of some "high-dimensional reality" that you have in your head... right? Normally it is, but when you spit out a report in 1/10th of the time, it isn't. You struggle to explain concepts, even though they look nice on paper.

    I found that I just really have to do the work, to develop the mental models, to articulate and to re-articulate and re-articulate again. For different audiences in different ways.

    I like the term cognitive debt as a description of the gap between what mental models one would have to develop pre-LLMs to get a report out, and how little you may need with an LLM.

    In the end it is your name on that report/paper; what can we expect of you, the author? Maybe that will start slipping and we will start expecting less over time? Maybe we can start skipping authors altogether and rely on the LLM's "mental" model when we have in-depth questions about a report/paper... Who knows. But different models (like LLMs) may have different "models" (predictive algorithms) of underlying truth/reality. What allows for the most accurate predictions? One needs a certain "depth of understanding". Writing while relying too much on LLMs will not give it to you.

    Over time, indeed, this may lead to population-wide "cognitive decline, or loss of cognitive skills". I don't dare to say that. Book printing didn't do that, although it was expected at the time by the religious elite, who worried that normal humans would not be able to interpret texts correctly.

    As remarked here in this thread before, I really do think that "Writing is thinking" (but perhaps there is something better than writing which we haven't invented yet). And thinking is: Developing a detailed mental model that allows you to predict the future with a probability better than chance. Our survival depends on it, in fact it is what evolution is in terms of information theory [0]. "Nothing in biology makes sense except in the light of ... information."

    [0] https://www.youtube.com/watch?v=4PCHelnFKGc

    • chubot a day ago

      > I found that I just really have to do the work, to develop the mental models, to articulate and to re-articulate and re-articulate again. For different audiences in different ways.

      Yes definitely!

      I'd say that being able to turn an idea over in your head is how you know if you know it ... And even pre-LLM, it was easy to "appear to know" something, but not really know it.

      PG wrote pretty much this last year:

      > in a couple decades there won't be many people who can write.

      > So a world divided into writes and write-nots is more dangerous than it sounds. It will be a world of thinks and think-nots.

      https://paulgraham.com/writes.html

  • pilif 2 days ago

    > The brain does not retain information that it does not need.

    Why do I still know how to optimize free conventional memory in DOS by configuring config.sys and autoexec.bat?

    I haven't done this in two decades, and I'm reasonably sure I never will again.
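
    From (admittedly hazy) memory, the incantation went something like this -- a sketch from recollection, with illustrative paths, not a tested setup:

      REM CONFIG.SYS -- load DOS into high memory and open upper memory blocks
      DEVICE=C:\DOS\HIMEM.SYS
      DEVICE=C:\DOS\EMM386.EXE NOEMS
      DOS=HIGH,UMB

      REM AUTOEXEC.BAT -- load TSRs and drivers into upper memory
      LH C:\DOS\MOUSE.COM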

    • dotancohen 2 days ago

      Probably because you learned it during that brief period in your development in which humans are most impressionable.

      Now think about the effect on those humans currently using LLMs at that stage of their development.

    • fennecfoxy 2 days ago

      The last fast food place you went to, what does the ceiling look like? The exact colour/pattern?

      The last phone conversation you had with a utility company, how did they greet you exactly?

      There's lots that we do remember, sometimes odd things like your example, though I'm sure you must have repeated it a few times as well. But there's so much detail that we don't remember at all, and even our childhood memories just become memories of memories - we remember some event, but we slowly forget the exact details, they become fuzzy.

    • reciprocity 2 days ago

      I also think the claim that "the brain does not retain information it does not need" is an insufficient explanation, and short-sighted. As an example, reading books informs and shapes our thinking, and while people may not immediately recall a book that they read some time ago, I've had conversations where I remembered that I had read a particular passage (sentence, phrase, idea) and referred to it in the conversation.

      People do stuff like that all the time, bringing up past memories in spontaneity. The brain absolutely does remember things it "doesn't need".

    • 15123123 2 days ago

      I think it's because some experiences are so profound to your brain (first impressions, moments that you are proud of) that you just replay them over and over again.

    • nottorp 2 days ago

      To nitpick: your subconscious is aware that computers have memory constraints even now, and you write better code because of it, even if you do JavaScript...

    • rusk 2 days ago

      Because these are core memories that provide stepping stones to later knowledge. It is a part of the story of you. It is very hard to integrate all knowledge in this way.

    • flomo 2 days ago

      Probably because there was some reward that you felt at the time was important (most likely playing a DOS game).

      I did this for a living at a large corp where I was the 'thinkpad guy', and I barely remember any of the tricks (and only some of the IBM stuff). Then Windows NT and 95 came out and it was like, who cares... This was always dogshit. Because I was always an Apple/Unix guy and that was just a job.

    • lelele 2 days ago

      Agreed. We remember many things that don't serve us anymore.

    • Delphiza 2 days ago

      MemMaker -- a cheat, but it is still in my quick-access memory.

  • this_steve_j 2 days ago

    The terms “Cognitive decline” or “brain rot” may have sounded too sensational, and to be fair the authors note the limitations of the small sample size.

    Indeed, the paper doesn't provide a reference or citation for the term "cognitive debt", so it is a strange title. Maybe a last-minute swap.

    Fascinating research out of MIT. Like all psychology studies it deserves healthy scrutiny and independent verification. Bit of a kitchen sink with the imaging and psychometric assessments, but who doesn’t love a picture of “this is your brain on LLMs” amirite?

  • eru 2 days ago

    > The brain does not retain information that it does not need.

    Sounds very plausible, though how does that square with the common experience that certain skills, famously 'riding a bike', never go away once learned?

    • wahern 2 days ago

      Closer to the truth is that the brain never completely forgets something, in the sense that there are always vestiges left over, even after the ability to recall or instantly draw upon it is long gone. Studies show, for example, that after one has "forgotten" a language, they're quicker to pick it up again later on compared to someone without that prior experience; how much quicker depends on the elapsed time, but quicker nonetheless.

      OTOH, IME the quickest way to truly forget something is to overwrite it. Photographs are a notorious example: looking at photographs can overwrite your own personal episodic memory of an event. I don't know how much research exists exploring this phenomenon, but AFAIU there are at least studies showing that the mere act of recalling can reshape memories. So, ironically, perhaps the best way not to forget is to not remember.

      Left unstated in the above is that we can categorize different types of memory--episodic, semantic, implicit, etc--based on how they seem to operate. Generalizations (like the above ;) can be misleading.

    • gwd 2 days ago

      I think a better way to say it is that the brain doesn't commit to long term memory things that it doesn't need.

      I remember hearing about some research they'd done on "binge watching" -- basically, if you have two groups:

      1. One group watches the entire series over the course of a week

      2. A second group watches a series one episode per week

      Then some time later (maybe 6 months), ask them questions about the show, and the people in group 2 will remember significantly more.

      Anecdotally, I've found the same thing with Scottish Country Dancing. In SCD, you typically walk through a dance that has 16 or so "figures", then for the next 10 minutes you need to remember the figures over and over again from different perspectives (as 1st couple, 2nd couple, 3rd couple, etc.). Fairly quickly, my brain realized that it only needed to remember the figures for 10 minutes; even the next morning, if you'd asked me what the figures were for a dance the night before, I couldn't have told you.

      I can totally believe it's the same thing with writing with an LLM (or having an assistant write a speech / report for you) -- if you're just skimming over things to make sure it looks right, your brain quickly figures out that it doesn't need to retain this information.

      Contrast this to riding a bike, where you almost certainly used the skill repeatedly over the course of at least a year.

    • KineticLensman 2 days ago

      > Sounds very plausible, though how does that square with the common experience that certain skills, famously 'riding a bike', never go away once learned?

      I worked with some researchers who specifically examined this when developing training content for soldiers. They found that 'muscle memory' skills such as riding a bike could persist for a very long time. At the other end of the spectrum were tasks that involved performing lots of technical steps in a particular order, but where the tasks themselves were only performed infrequently. The classic example was fault finding and diagnosis on military equipment. The researchers were in effect quantifying the 'forgetting curve' for specific tasks. For some key tasks, you could overtrain to improve the competence retention, but it was often easier to accept that training would wear off very quickly and give people a checklist instead.

      • eru 2 days ago

        Very interesting! Thanks for bringing this up.

    • pempem 2 days ago

      Such a good question -- I hope someone answers with more than an anecdote (which is all I can provide). I've found the skills that don't leave you -- like riding a bike, swimming, cooking -- are all physical skills. Tangible.

      The skills that leave -- arguments, analysis, language, creativity -- often seem abstract and primarily, if not exclusively, sourced in our minds.

      • hn_throwaway_99 2 days ago

        Google "procedural memory". Procedural memory is more resistant to forgetting than other types of memory.

        • eru 2 days ago

          I guess speaking a language employs some mixture of procedural and other types of memory?

    • rusk 2 days ago

      Riding a bike is a skill rather than what we would call a “memory” per se. It’s a skill that develops a new neural pathway throughout your extended nervous system bringing together the lesser senses of proprioception and balance. Once you bring these things together you then go on to use them for other things. You “know” (grok), rather than “understand” how a bike stays upright on a very deep physical level.

      • eru 2 days ago

        Sure. But speaking a language is also (at least partially) a skill, ain't it?

        • rusk 2 days ago

          It is. It’s also something you don’t forget except in extreme cases like dementia. Skills are different from facts but we use the word memory interchangeably for each. It’s this nuance of language that causes a category error in your reasoning ain’t it.

    • devmor 2 days ago

      I am not an expert in the subject but I believe that motor neurons retain memory, even those not located inside the brain. They may be subject to different constraints than other neurons.

  • jancsika 2 days ago

    > And also DUH. If you stop speaking a language you forget it. The brain does not retain information that it does not need.

    Except when it does -- for example, in the abstract, where it is written that Brain-to-LLM users "exhibited higher memory recall" than LLM and LLM-to-Brain users.

  • amelius 2 days ago

    > You can't just skim a math textbook and know all the math.

    Curious, did anyone try to learn a subject by predicting the next token, and how did it go?

greekanalyst 20 hours ago

"...the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring."

That's not surprising but also bleak.

  • fhd2 20 hours ago

    Appears to align with good old Ironies of Automation [1]. If humans just review and rubber stamp results, they do a pretty terrible job at it.

    I've been thinking for a while now that in order to truly make augmented workflows work, the mode of engagement is central. Reviewing LLM code? Bah. Having an LLM watch over my changes and give feedback? Different story. It's probably gonna be difficult and not particularly popular, but if we don't stay in the driver's seat somehow, I guess things will get pretty bleak.

    [1]: https://en.m.wikipedia.org/wiki/Ironies_of_Automation

    • tuatoru 20 hours ago

      Didn't realise the pedigree of the idea went back to 1983.

      I read about this in a book, "Our Robots, Ourselves", which talked about airline pilots' experience with auto-land systems introduced in the late 1990s/early 2000s.

      As you'd expect after having read Ironies of Automation, after a few near misses and not-quite-misses, auto-land is not used any more. Instead, pilot augmentation with head-up displays is used.

      What is the programming equivalent of a head-up display?

      • fhd2 20 hours ago

        Certainly a relatively tight feedback loop, but not too tight. Syntax errors are very tight, but non-negotiable: fix it now.

        Test failures are more explicit: you run tests when you want to and deal with the results.

        Code review often has a horrible feedback loop -- often days after you last thought about it. I think LLMs can help tighten this. But it can't be Clippy; it can't interrupt you with things that _may_ be problems. You have to be able to stay in the flow.

        For most things that make programmers faster, I think deterministic tooling is absolutely key, so you can trust it rather blindly. I think LLMs _can_ be really helpful for helping you understand what you changed and why, and what you may have missed.

        Just some random ideas. LLMs are amazing. Incorporating them well is amazingly difficult. What tooling we have now (agentic and all that) feels like early tech demos to me.

      • stevage 20 hours ago

        > What is the programming equivalent of a head-up display?

        Syntax highlighting, Intellisense, and the millions of other little features built into modern editors.

        • rglullis 19 hours ago

          We should be able to do a lot more than that. I for one would love to have UML as the basis for system design and architecture, have "pseudo-code repositories" that can be used as a "pattern book", and leave that as the context for LLM-based code-generation tools. We could then define a bunch of constraints (maximum cyclomatic complexity, strict type checking, acceptance tests that must pass, removal of dead code) to reduce the chances of the LLM going rampant and hallucinating; a sketch of one such check follows below.

          This way I'd still be forced to think about the system, without having to waste time with the tedious part of writing code, fixing typos, etc.

          Bonus point: this could become a two-way system between different programming languages, with UML as the intermediate representation, which would make it a lot easier to port applications to different languages and would eliminate concerns about premature optimizations. People could still experiment with new ideas in languages that are more accessible (Python/Javascript) and later on port them to more performant systems (Rust/D/C/C++).
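
          To make the complexity gate concrete: a minimal sketch of such a check, assuming the radon library for Python (the threshold and command-line handling are illustrative, not a finished tool):

            import sys
            from radon.complexity import cc_visit  # assumes: pip install radon

            MAX_COMPLEXITY = 10  # illustrative threshold

            # read the (possibly LLM-generated) source file given on the command line
            with open(sys.argv[1]) as f:
                source = f.read()

            # cc_visit parses the source and scores each function/method/class
            offenders = [blk for blk in cc_visit(source)
                         if blk.complexity > MAX_COMPLEXITY]

            for blk in offenders:
                print(f"{blk.name} (line {blk.lineno}): "
                      f"complexity {blk.complexity} > {MAX_COMPLEXITY}")

            # a nonzero exit fails the pipeline, rejecting the generated code
            sys.exit(1 if offenders else 0)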

  • pantalaimon 19 hours ago

    > We must negate the machines-that-think. Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program! Our Jihad is a "dump program." We dump the things which destroy us as humans!

    https://dune.fandom.com/wiki/Butlerian_Jihad

NetRunnerSu 2 days ago

The discussion here about "cognitive debt" is spot on, but I fear it might be too conservative. We're not just talking about forgetting a skill like a language or losing spatial memory from using GPS. We're talking about the systematic, irreversible atrophy of the neural pathways responsible for integrated reasoning.

The core danger isn't the "debt" itself, which implies it can be repaid through practice. The real danger is crossing a "cognitive tipping point". This is the threshold where so much executive function, synthesis, and argumentation has been offloaded to an external system (like an LLM) that the biological brain, in its ruthless efficiency, not only prunes the unused connections but loses the meta-ability to rebuild them.

Our biological wetware is a use-it-or-lose-it system without version control. When a complex cognitive function atrophies, the "source code" is corrupted. There's no git revert for a collapsed neural network that once supported deep, structured thought.

This HN thread is focused on essay writing. But scale this up. We are running a massive, uncontrolled experiment in outsourcing our collective cognition. The long-term outcome isn't just a society of people who are less skilled, but a society of people who are structurally incapable of the kind of thinking that built our world.

So the question isn't just "how do we avoid cognitive debt?". The real, terrifying question is: "What kind of container do we need for our minds when the biological one proves to be so ruthlessly, and perhaps irreversibly, self-optimizing for laziness?"

https://github.com/dmf-archive/dmf-archive.github.io

  • alex77456 2 days ago

    It's up to everyone to decide what to use LLMs for. For high-friction / low-throughput tasks (e.g., online research using inferior search tools), I find text models to be great. To ask about what you don't know, to skip the 'tedious part' (I don't feel like looking for answers, especially troubleshooting arcane technical issues among pages of forums or social media, makes me smarter in any way whatsoever, especially since the information usually needs to be verified and taken with a grain of salt).

    StackExchange, the way it was meant to be initially, would be way more valuable than text models. But in reality people are imperfect and carry all sorts of cognitive biases and baggage, while an LLM won't close your question as 'too broad' right after it gets upvotes and user interaction.

    On the other hand, I still find LLM writing on subjects familiar to me vastly inferior. Whenever I try to write, say, an email with its help, I end up spending just as much time either editing the prompt to keep it on track or rewriting the result significantly afterwards. I'd rather write it on my own, with my own flow, than proofread/peer-review a text model.

    • tguvot a day ago

      > To ask about what you don't know, to skip the 'tedious part' (I don't feel like looking for answers, especially troubleshooting arcane technical issues among pages of forums or social media, makes me smarter in any way whatsoever, especially since the information usually needs to be verified and taken with a grain of salt).

      quoting the article:

      Perhaps one of the more concerning findings is that participants in the LLM-to-Brain group repeatedly focused on a narrower set of ideas, as evidenced by n-gram analysis (see topics COURAGE, FORETHOUGHT, and PERFECT in Figures 82, 83, and 85, respectively) and supported by interview responses. This repetition suggests that many participants may not have engaged deeply with the topics or critically examined the material provided by the LLM.

      When individuals fail to critically engage with a subject, their writing might become biased and superficial. This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.

      Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.

niemandhier 2 days ago

AI is the anti-Zettelkasten.

Rather than gaining ever deeper insight into a subject by actively working on it, you iterate fast but shallow over a corpus of AI-generated content.

Example: I wanted to understand the situation in the Middle East better, so I wrote a 10-page essay on the genesis of Hamas and Hezbollah using OpenAI as a co-writer.

I remember nothing; worse, of the things I remember, I don't know if they were hallucinations I fixed or actual facts.

  • energy123 2 days ago

    I'm on the optimistic side with how useful LLMs are, but I have to agree. You cultivate the instinct for how to steer the models and reduce hallucinations, but you're not building articulable knowledge or engaging in challenging thinking. It's more learning muscle-memory reactions to certain forms of LLM output that lean you towards trusting the output more, trying another prompting strategy, clearing context or not, and so on.

    To the extent we can call it skill, it's probably going to be made redundant in a few years as the models get better. It gives me a kind of listlessness that assembly line workers would feel.

    • namaria 2 days ago

      Maybe, much like we invented gyms to exercise after civilization made most physical labor redundant (at least in developed countries), we will see a rise of 'creative writing gyms' of some sort in the future.

      • deinonychus a day ago

        I like this outlook a lot. I suppose I've met a lot of people that do creative writing recreationally and also socially in clubs, writing not just poetry but also things like adventures for roleplaying games like D&D.

        I wonder what the commercialized form of a "gym but for your brain" would look like and if it would take off and if it would be more structured than... uh... schools? Wait, wouldn't this just be like a college except the students are there because they want to be, and not for vocational reasons?

  • nottorp 2 days ago

    You tend to remember trouble more than things going smoothly, so I'd say you remember the parts you had to fix manually.

  • kiru_io 20 hours ago

    Interesting perspective to see AI as the opposite of accessing connected knowledge (aka Zettelkasten).

  • atoav 2 days ago

    Most intelligent people are aware that writing is about thinking as much as it is about producing the written text.

    LLMs can be great sparring partners for this, if you don't use them as a tool that writes for you, but as a tool that finds mistakes, points out gaps and errors (which you may or may not ignore), and helps in researching general questions about the world around you (always with caution and sources).

    • bsenftner a day ago

      Exactly! Never ever ever have AI write for you. Ask it to critique what you wrote, ask it to pick your arguments apart. Then use your mind to fix what it pointed out. If you cannot figure out how, ask the AI to explain how. Then take a break, 20 minutes is fine, and then return and fix the issue yourself using your own mind to write without assistance. This is how one uses AI to learn.
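
      One way to wire that up, as a minimal sketch using the OpenAI Python client (the model name and the prompt wording here are placeholder assumptions, not anything prescribed above):

          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          draft = open("essay_draft.txt").read()  # your own unassisted writing

          # Ask for critique only; the prompt explicitly forbids rewriting.
          response = client.chat.completions.create(
              model="gpt-4o",  # placeholder; any chat-capable model works
              messages=[
                  {"role": "system", "content": (
                      "You are a strict writing critic. Point out gaps, weak "
                      "arguments, and errors. Do NOT rewrite or supply prose."
                  )},
                  {"role": "user", "content": draft},
              ],
          )
          print(response.choices[0].message.content)  # then fix these yourself

      Nothing the model produces lands in the document; the fixes stay handwritten.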

      • niemandhier a day ago

        The problem with this strategy is that, beyond catching logical fallacies you can verify yourself, you cannot trust the AI critic. Why? It might cite non-existent diverging opinions, misuse sources, or introduce subtle changes in a citation.

        • bsenftner a day ago

          > It might cite non-existent diverging opinions, misuse sources, or introduce subtle changes in a citation.

          Just like a person could, which is why one validates. AI should not be one's sole source of information; that's dangerous, to say the least. It also helps to stay within one's formal education and/or experience, and within logical boundaries one can track oneself. It is really all about understanding what you are doing before committing it to run without you.

tkgally 2 days ago

The results are not surprising to me personally. When I have used AI to help with my own writing and translation tasks, I do not feel as mentally engaged with the writing or translation process as I would be if I were doing it all on my own.

But I have found using AI in other ways to be incredibly mentally engaging in its own way. For the past two weeks, I’ve been experimenting with Claude Code to see how well it can fully automate the brainstorming, researching, and writing of essays and research papers. I have been as deeply engaged with the process as I have ever been with writing or translating by myself. But the engagement is of a different form.

The results of my experiments, by the way, are pretty good so far. That is, the output essays and papers are often interesting for me to read even though I know an AI agent wrote them. And, no, I do not plan to publish them or share them.

  • SchemaLoad 2 days ago

    I use AI tools for amusement and asking random questions, but for actual work, I basically don't use them at all. I wonder if I'll be part of the increasingly rare group who is actually able to do anything while the rest become progressively more incompetent.

    • barrenko 2 days ago

      My nickel - we are in the primary stages of being given something like the famed "bicycle for the mind", an exoskeleton for the brain. At first when someone gives you a mech, you're like "woah, cool", let's see what it can do. And then you zip around, smash rocks, buildings, go try to lift the Eiffel.

      After a while you get bored of it (duh), and go back to doing what you usually do, utilizing the "bicycle" for the kind of stuff you actually like doing, if it's needed, because while exploration is fun, work is deeply personal and meaningful and does not sustain too much exploration for too long.

      (highly personal perspective)

      • audunw 2 days ago

        The “bicycle for the mind” analogy is actually really good here, because bicycles and other transportation technology have made us increasingly weak, which has a negative impact on physical health. It has reached such a critical point that people are taking seriously the fact that we need physical exercise to be in good health. My company recently introduced 60 minutes a week of activity during work hours. It’s probably a good investment, since physical health affects performance and mental health.

        Coming back to AI, maybe in the future we will need to take mental exercise as seriously as we now take physical exercise. Perhaps people will go to mental gyms. (That’s just a school, you may say, but I think the focus could be different: not having a goal to complete a class and then finish, but continuous mental exercise.)

        • rohansingh 2 days ago

          > bicycles ... have made us increasingly weak

          This is pretty difficult for me to buy. Cycling has been shown time & again to be a great way to increase fitness.

          • nottorp 2 days ago

            > Cycling has been shown time & again to be a great way to increase fitness.

            Compared to sitting on your butt in a car or public transport.

            Perhaps not compared to walking everywhere and chasing the antelope you want to cook for lunch.

            I think what he meant is that both bicycles and LLMs are a force multiplier and you still provide the core of the work, but not all of the work any more.

            • alex77456 2 days ago

              Cycling, in my experience, is usually way more intense than walking or even running/jogging. It just lets you cover a larger distance and gives you more control over how your energy is used.

              With the example of LLMs, sure, you could cycle to the initial destination you were meant to walk to: write an article with its help, save a few hours, and call it a day. Or you could cycle further and use the saved time to work on something a text model can't help you with.

              • nottorp 20 hours ago

                If we're nitpicking, are we talking about cycling as a sport or cycling as a means for getting from point A to point B?

                I'm sure cultures where they cycle to everywhere all the time take it easier than cultures where going out for a bike ride is an event.

                • alex77456 18 hours ago

                  Not nitpicking, just playing along with the analogy, which I found not that far-fetched.

          • noobermin 2 days ago

            I once had blood clots in my legs. During the worst of it I could barely walk more than ten metres, but cycling down the street was easier than walking. It's better than sitting on your butt for hours on end, sure.

      • tguvot a day ago

        If you use an exoskeleton for walking, eventually you will have muscle wasting and, depending on the type of exoskeleton, degradation of the neural pathways you need for walking.

dzonga 20 hours ago

I worry about the adverse effects of LLMs on already disenfranchised populations - you know, the poor, etc. - that usually would have to pull themselves up through hard work, studying and reading hard.

Now, if you don't have a mentor to tell you that in the age of LLMs you still have to do things the hard, old-school way to develop critical thinking, you might end up taking shortcuts and having the LLMs "think" for you, hence again leaving huge swaths of the population behind in critical thinking, which is already in short supply.

LLMs are bad in that they might show you the sources but also hallucinate about them, and most people won't bother checking the source material and questioning it.

  • eru 19 hours ago

    LLMs are great for the poor!

    If you are rich, you can afford a good mentor. (That's true literally, in the sense of being rich in money and paying for a mentor. But also more metaphorically for people rich in connections and other resources.)

    If you are poor, you used to be out of luck. But now everyone can afford a nearly-free mentor in the form of an LLM. Of course, at the moment the LLM-mentor is still below the best human mentors. But remember: only rich people can afford these. The alternative for poor people was essentially nothing.

    And AI systems are only improving.

    • walleeee 17 hours ago

      A public library is actually free and its contents, collectively, are a far better "mentor" than ChatGPT. Plus the library doesn't build a psychological profile on you while you use it.

      • eru 3 hours ago

        ChatGPT ain't taking libraries away. It's just an addition to your toolbox.

        However, we notice that in practice free public libraries are mostly welfare for the well-off: they are mostly used by people who are at least middle-class.

    • supriyo-biswas 19 hours ago

      If people are using it to critically question their beliefs and thinking, that is.

      However, most of the hype around LLMs is that they take away the difficult task of thinking and allow the direct creation of the artifact (documents, code or something else), and that is really dangerous.

      • eru 19 hours ago

        How is it any worse than the status quo for the disenfranchised?

  • rglullis 18 hours ago

    People could in theory also get a college-level education by watching videos on YouTube, but in practice the masses just end up watching Mr. Beast.

    15 years ago, people were sure that Khan Academy and Coursera would disrupt the Ivy League and private schools, because now one good teacher could reach millions of students. Not only has this not happened, the only movement I'm observing against credentialism is a good amount of anecdata showing kids preferring to go to trade school instead of university.

    > pull themselves up through hard work, studying and reading hard.

    Where are you from? "The key to success is hard work" is not exactly part of Gen Z / Zoomer core values, at least not in the Americas and Western Europe.

Todd 2 days ago

This is called cognitive offloading. Anyone who’s spent enough time working with coding assistants will recognize it.

  • esafak 2 days ago

    Or working as an engineering manager.

    It's the inevitable consequence of working at a different level of abstraction. It's not the end of the world. My assembly is rusty too...

    • 15123123 2 days ago

      I don't think not using assembly is going to affect my brain / my life quality in any significant way, but not speaking / chatting with someone is.

      • tankenmate 2 days ago

        But this is a strawman argument, it's not what the research is talking about.

    • nothrabannosir 2 days ago

      If LLMs were as reliable as compilers we wouldn’t be checking in their output, and I’d be happy to forget all programming lore.

      The “skill domain” with compilers is the “input”: that’s what I need to grok, maintain, and understand. With LLMs it’s the “output”.

      Until that changes, you’re playing a dangerous game letting those skills atrophy.

      • tele_ski 6 hours ago

        Isn't it both on LLMs? The input is your ability to craft a prompt, the output is checking if the prompt worked.

je42 18 hours ago

> All participants were then reassured that though 20 minutes might be a rather short time to write an essay, they were encouraged to do their best.

Given that the task was performed under time pressure, I am not sure this study helps gauge the impact of LLMs in other contexts.

When my goal is to produce a result for a specific short-term task, I maximize tool usage.

When my goal is to improve my personal skills, I use the LLM tooling differently, optimizing for longer-term learning.

  • einrealist 18 hours ago

    "I"? You should treat yourself as an anecdotal exception.

    You are reading HN. You are probably more aware of the advantages and shortcomings of LLMs. You are not a casual user. And that's the problem with our echo chamber here.

  • mparramon 17 hours ago

    This would mean that short-term tasks, the bulk of what knowledge workers do nowadays, forgo learning on the job.

Magmalgebra 20 hours ago

Well... yes? Essays are tools to force students to structure and communicate thinking - producing the essay forces the thinking. If you want an equivalent result from LLMs, you're going to need a much more iterative process of critique and revision to get the same kind of mental effort out of students. We haven't designed that process yet.

  • bayindirh 20 hours ago

    I mean, they found brain atrophy. If this doesn't get someone worried, I don't know what would.

    I joked that "I don't do drugs" when someone asked me whether I played MMORPGs, but this joke becomes just too real when we apply it to generative AI of any sort.

    • Magmalgebra 20 hours ago

      As someone who used to teach, this does not worry me (also, they mention skill atrophy - inherently less concerning).

      Putting ChatGPT in front of a child and asking them to do existing tasks is an obviously disastrous pedagogical choice, for the reasons the article outlines. But it's not that hard to create a more constrained environment where the LLM assists in a way that doesn't allow the student to escape thinking.

      For writing, it's clear that finding the balance between time spent ordering your thoughts and time spent getting the LLM to write things is its own skillset; it will be its own skill we want to teach, independent of "can you structure your thoughts in an essay".

    • falcor84 19 hours ago

      > I mean, they found brain atrophy.

      Where did you get that from? While the article mentions the word "atrophy" twice, it's not something that they found. They just saw less neural activation around essay writing in the people who didn't write the essay themselves. I don't see anything there regarding the brain as a whole.

      • bayindirh 19 hours ago

        If physical exercise builds muscle mass, mental work and exercise build more connections in your brain.

        Like everything, not using something causes it to atrophy. IOW, if you depend on something too much, you'll grow dependent on it, because that part of your body doesn't do the work that much anymore.

        The brain is an interconnected jungle. Improvement in any ability will improve other, adjacent abilities. You need to think faster to type faster. If you can't think faster, you'll stagnate, for example.

        Also, the human body always tries to optimize itself to reduce its energy consumption. If you get a chemical from outside, it'll stop producing it, assuming the supply will be there. The brain will reduce its connections in some region if that function is augmented by something else.

        Same for skill atrophy. If you lose one skill, you lose the connections in your brain, and that'll degrade adjacent skills, too. As a result, skill atrophy is brain atrophy in the long run.

        • falcor84 17 hours ago

          Absolutely agreed, but where does it take us in regard to division of labor in general? Obviously by not growing my own food or making my own clothes, I'm degrading a lot of skills I could potentially have. To what extent should I strive to develop skills that I don't "care" to exercise?

          "Essay Writing" in particular, at least in an academic context, is almost by definition an entirely useless activity, as both the writer and the reader don't care much about the essay as an artifact. It's a proxy for communication skills, that we've had to use for lack of a better alternative, but my hope is that now that it's become useless as a proxy, our education system can somehow switch to actually helping learners communicate better, rather than continuing to play pretend.

          • bayindirh 17 hours ago

            However, since many tasks are adjacent to each other, you're keeping these tasks at the edge of being alive.

            Do you have plants at home? You're 50% there for growing your own food (veggies, at least). Do you mend your clothes (e.g.: sew your buttons back)? You're ~30% there for making your own clothes, given you have access to fabric.

            On the essay writing, I can argue that at least half of my communication skills come from writing and reading. I don't write essays anymore, but I write a diary almost daily, and I build upon what I have read or written in the past for academic reasons. What I find more valuable in these exercises is not the ability to communicate with others, but communicate with myself.

            The brain has this strange handicap: it thinks it has a coherent thought, but the moment you try to write it down or talk about it, what comes out is mushy spaghetti that doesn't mean anything. Having the facilities to structure it, to separate the ore from the dirt, and to articulate it clearly so you and everyone else can understand it is a very underrated skill.

            Funnily, the biggest contributor to my writing skills is this place, since good discussion here needs a very particular set of skills, namely clarity, calmness and having structure to your thought.

            This is why I'm very skeptical of letting go of writing, and of actual pens and paper, in the name of progress. We're old creatures who evolved slowly, and our evolution has a maximum speed. Acting like this is not true will end in disaster.

            Humans, and the civilization and culture we built, have so much implicit knowledge coded everywhere; assuming that we know it all and can encode it in an 80GB weighted graph is, to put it kindly, a mistake.

    • eru 19 hours ago

      > I joked that "I don't do drugs" when someone asked me whether I played MMORPGs, [...]

      I thought WoW was an off-label contraceptive?

    • HPsquared 20 hours ago

      LLMs are the tip of the iceberg when it comes to this stuff.

jameson 2 days ago

> The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or ”opinions” (probabilistic answers based on the training datasets). This highlights a concerning evolution of the 'echo chamber' effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as “top” is ultimately influenced by the priorities of the LLM's shareholders [123, 125].

  • eru 2 days ago

    > What is ranked as “top” is ultimately influenced by the priorities of the LLM's shareholders [123, 125].

    As if that's anything new. There's the adage that's older than electronics, that freedom of the press is freedom for those who can afford to own a printing press.

    > However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or ”opinions” (probabilistic answers based on the training datasets).

    Reminds me of Plato's concern about reading and writing dulling your mind. (I think he had his sock puppet Socrates express the concern. But I could be wrong.)

    • namaria 2 days ago

      > Reminds me of Plato's concern about reading and writing dulling your mind. (I think he had his sock puppet Socrates express the concern. But I could be wrong.)

      Nope.

      Read the dialogue (Phaedrus). It's about rhetoric and writing down political discourses. Writing had existed for millennia. And the bit about writing being detrimental is from a mythical Egyptian king talking to a god, just a throwaway story used in the dialogue to make a tiny point.

      In fact the conclusion of that bit of the dialogue is that merely having access to text may give an illusion of understanding. Quite relevant and on point I'd say.

      • eru 2 days ago

        > In fact the conclusion of that bit of the dialogue is that merely having access to text may give an illusion of understanding. Quite relevant and on point I'd say.

        Well, so that's exactly my point: Plato was an old man who yelled at clouds before it was cool.

    • dotancohen 2 days ago

      Plato's sock puppet Socrates? I think that you and I have read different history books, or at least different books regarding the history of philosophy. That said, I would love to hear your perspective on this.

      • Sharlin 2 days ago

        I presume they refer to the fact that Socrates is basically used as a rhetorical device in Plato’s writings, and it’s not entirely clear how much of the dialogues were Socrates’s thoughts and how much was Plato’s own.

        • eru 2 days ago

          Yes, exactly.

      • eru 2 days ago

        > Plato's sock puppet Socrates?

        See https://en.wikipedia.org/wiki/Socratic_problem

        > Socrates was the main character in most of Plato's dialogues and was a genuine historical figure. It is widely understood that in later dialogues, Plato used the character Socrates to give voice to views that were his own.

        However, have a look at the Wikipedia article itself for a more nuanced view. We also have some other writers with accounts of Socrates.

amunozo 19 hours ago

What I still wonder is whether using LLMs is helpful in some ways, or whether it is, as other users say, just useful for man-made problems such as corporate communication or bureaucracy. I use it for coding, and it makes me confident to tackle new things.

I try to use it to understand the code or to implement changes I am not familiar with, but I tend to overuse it a lot. Would it be better, even if using it ideally (i.e., only to help with learning and guidance), to just try harder before reaching for it or for a search engine? I wonder what the optimal use of LLMs is in the long run.

jonplackett 20 hours ago

I think we need to shift our idea of what LLMs do and stop thinking they are ‘thinking’ in any human way.

The best mental description I have come up with is they are “Concept Processors”. Which is still awesome. Computers couldn’t understand concepts before. And now they can, and they can process and transform them in really interesting and amazing ways.

You can transform the concept of ‘a website that does X’ into code that expresses such a website.

But it’s not thinking. We still gotta do the thinking. And actually that’s good.

  • panstromek 20 hours ago

    Concept Processor actually sounds pretty good, I like it. That's pretty close to how I treat LLMs.

  • eru 19 hours ago

    Are you invoking a 'god of the gaps' here? Is 'true' thinking whatever machines haven't mastered yet?

    • jonplackett 18 hours ago

      Not at all, I don’t think humans are magic at all.

      But I don’t think even the ‘thinking’ LLMs are doing true thinking.

      It’s like calling pressing the autocomplete buttons on your iPhone ‘writing’. Yeah kinda. It mostly forms sentences. But it’s not writing just because it follows the basic form of a sentence.

      And an LLM, though now very good at writing, is just creating a very good impression of thinking. When you really examine what it’s outputting, it’s hard to call it true thinking.

      How often does your LLM take a step back and see more of the subject than you prompted it to? How often does it have an epiphany that no human has ever had?

      That’s what real thinking looks like - most humans don’t do tonnes of it most of the time either - but we can do it when required.

panstromek 20 hours ago

Interesting. This says a different thing than what I thought from the title. I thought it would be about cognitive overload from having to process and review all the text the LLM generates.

I had to disable Copilot for my blog project in the IDE, because it kept bugging me, finishing my sentences with fluff that I'd either reject or heavily rewrite. That added mental overhead that made it more difficult to focus.

MasihMinawal 20 hours ago

I'm curious to see how the EEG measurements might change if someone uses LLMs extensively over a longer period of time (e.g., about a year).

a_bonobo 2 days ago

I guess: not only does AI reduce the number of entry-level workers, this now shows that the entry-level workers who remain won't learn anything from their use of AI and will remain entry-level forever if they're not careful.

Kiyo-Lynn 2 days ago

When I write with AI, it feels smooth in the moment, but I’m not really thinking through the ideas. The writing sounds fine, but when I look back later, I often can’t remember why I phrased things that way.

Now I try to write my own draft first, then use AI to help polish it. It takes a bit more effort upfront, but I feel like I learn more and remember things better.

  • energy123 2 days ago

    The rule of thumb "LLMs are good at reducing text, not expanding it" is a good one here.

    • falcor84 2 days ago

      > "LLMs are good at reducing text, not expanding it"

      You put it in quote marks, but the only search results are from you writing it here on HN. Obviously LLMs are extremely good at expanding text, which is essentially what they do whenever they continue a prompt. Or did you mean that in a prescriptive way - that it would be better for us to use it more for summarizing rather than expanding?

      • 542354234235 a day ago

        >You put it in quote marks, but the only search results are from you writing it here on HN.

        They said it was a rule of thumb, which is a general rule based on experience. In context with the comment they were replying to, it seems that they are saying that if you want to learn and understand something, you should put the effort in yourself first to synthesize your ideas and write out a full essay, then use an LLM to refine, tighten up, and polish it. In contrast to using an LLM as you go to take your core ideas and expand them. Both might end up very good essays, but your understanding will be much deeper if you follow the "LLMs are good at reducing text, not expanding it" rule.

        • falcor84 20 hours ago

          I think that this conflates two issues though. It seems obvious to me that in general, the more time and effort I put into a task, the deeper I will understand it. But it's unclear to me how this aspect of how we learn by spending time on a task is related to what LLMs are good at.

          Intentionally taking this to a slightly absurd metaphor: it seemed to me like a person saying that their desire to reduce their alcohol consumption led them to infer the rule of thumb that "waiters are good at bringing food, not drinks".

      • stephen_g a day ago

        I think the key is how you define “good” - LLMs certainly can turn small amounts of text into larger amounts effortlessly, but if in doing so the meaningful information is diluted or even damaged by hallucinations, irrelevant info, etc., then that’s clearly not “good” or effective.

    • devmor 2 days ago

      Probably interesting to note that this is almost always true of weighted randomness.

      If you have something that you consider to be over 50% of the way to your desired result, reducing the space of the result has a higher chance of removing the negative factor than the positive.

      In contrast, in any case where the algorithm is less than 100% capable of producing the positive factor, adding on to the result can always increase the negative factor more than the positive, given a finite time constraint (aka any reasonable non-theoretical application).
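
      As a toy illustration of that one-sided risk, a minimal sketch in Python (modeling "text" as independent good/bad tokens with an assumed 80% quality rate, which is a big simplification):

          import random

          P_GOOD = 0.8  # assumed per-token quality of the generator

          def generate(n):
              """n tokens, each independently good with probability P_GOOD."""
              return [random.random() < P_GOOD for _ in range(n)]

          def bad_count(tokens):
              return sum(1 for good in tokens if not good)

          random.seed(0)
          draft = generate(200)

          # Reducing: dropping tokens can never introduce new bad ones,
          # so the absolute amount of bad content can only go down.
          reduced = draft[:100]

          # Expanding: every appended token is a fresh (1 - P_GOOD) chance
          # of adding a bad one, so expected bad content grows linearly.
          expanded = draft + generate(200)

          print(bad_count(draft), bad_count(reduced), bad_count(expanded))

      Trimming can never add bad tokens, while every appended token is a fresh chance to add one; the sketch captures only that asymmetry, nothing LLM-specific.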

Noelia- 2 days ago

After using ChatGPT a lot, I’ve definitely noticed myself skipping the thinking part and just waiting for it to give me something. This article on cognitive debt really hit home. Now I try to write an outline first before bringing in the AI. I do not want to give up all the control.

Scratchthat a day ago

An interesting point to consider, more broadly, is the impact that advances in machinery have had on humanity's industrial sector. There are countless stories and accounts of people fearful of job loss or redundancy whenever we inevitably developed automation to take over more repetitive, mind-numbing tasks. What generally ends up happening is that humanity gains the ability to discover and innovate, now having the time and energy to put into it.

What's interesting is I have to wonder whether this extends to our own ways of thinking, as discussed here with the short-term effects we're already describing from increased dependence on LLMs, GPS systems, etc. There have been studies showing that those of us who grew up using search engines exclusively did not lose or gain anything with respect to brain power; instead, we developed a different means of retaining information (i.e., we are less likely to remember the exact fact but will remember how to find it). It makes me wonder whether this is the next step in that same process, and those of us in the transition period will lament what we think we'll lose, or whether LLM dependency presents a point of diminishing returns where we do lose a skill without replacing it.

falcor84 2 days ago

I don't quite see their point. Obviously if you're delegating the task to someone/something then you're not getting as good at it as if you were to do it yourself. If I were to write machine code by hand, rather than having the compiler do it for me, I would definitely be better at it and have more neural circuitry devoted to it.

As I see it, it's much more interesting to ask not whether we are still good at doing the work that computers can do for us, but whether we are now able to do better at the higher-level tasks that computers can't yet do on their own.

  • devmor 2 days ago

    Your question is answered by the study abstract.

    > Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

    • falcor84 2 days ago

      But it's not that they "underperformed" at life in general - they underperformed when assessed on various aspects of the task that they weren't practicing. To me it's as if they ran a trial where one group played basketball while another acted as referees - of course, when tested on ball control, those who were dribbling and throwing would do better, but that tells us nothing about how those acting as referees performed at their own thing.

      • devmor 2 days ago

        I see what you're getting at now. I agree I'd like to see a more general trial that measures changes in general problem-solving ability after a test group is set to using LLMs for a specific problem-solving task vs a control group not using them.

acc_297 a day ago

This has been on my mind for a while and is why I only briefly used Copilot on a daily basis.

I'm at the beginning of my career and learning every day - I could do my job faster with an LLM assistant but I would lose out on an opportunity to acquire skills. I don't buy the argument that low-level critical thinking skills are obsolete and high level conceptual planning is all that anyone will need 10 years from now.

On a more sentimental level I personally feel that there is meaning in knowing things and knowing how to do things and I'm proud of what I know and what I know how to do.

Using LLMs doesn't look particularly hard, and if I need to use one in the future I'll just pick whichever one is supposedly the newest and best, but for now I'm content to toil away on my own.

hcta 20 hours ago

> We recruited a total of 54 participants for Sessions 1, 2, 3, and 18 participants among them completed session 4.

> We used electroencephalography (EEG) to record participants' brain activity in order to assess their cognitive engagement and cognitive load

> We performed scoring with the help from the human teachers and an AI judge (a specially built AI agent)

Next up: your brain on psych studies

darkwater 20 hours ago

Love this study because it reinforces my own biases but also love that a study was done to actually check it.

With that said, it would be like a study finding that people who exclusively use motorcycles or cars to move around get their legs and bodies atrophied in comparison to people who walk all day to do their things. Totally. It's just plain obvious. The gist is in the trade-offs: can I do more things, or things I wasn't able to do before, by commuting by car? Sure. Am I going to be exposed to health issues if I never walk, day in, day out? Most probably.

The exact same thing will happen with LLMs: we are in the hype phase, and any criticism is downplayed with "you are being left behind if you don't drink rocket fuel like we do", but in 10-15 years we will be complaining as a society that LLMs dumbed down our kids.

  • falcor84 19 hours ago

    The motorcycle/car metaphor here is really interesting. We really don't know yet, but it could indeed be that lack of access to AI would be similar to how teenagers growing up in a small town without good public transport or access to a car or motorcycle would have a different adolescence experience from those growing up with a convenient mode of travel. You can argue that either experience is "better" but they are inarguably different.

xorokongo 2 days ago

Will we end up with a world where the only experts are LLM companies, holding a monopoly on thinking? Will future humans ever be as smart as us, or are we the peak of human intelligence? And can AI make progress without smart humans to provide training data, generate new insights, and increase its intelligence?

kaelandt 19 hours ago

One thing that is also truly unappreciated is that most of us humans actually enjoy thinking, and people are trying to have LLMs strip us of a fundamental thing we enjoy doing. Look at all the people who enjoy solving problems for the sake of it.

rgoulter 2 days ago

From the summary:

"""Going forward, a balanced approach is advisable, one that might leverage AI for routine assistance but still challenges individuals to perform core cognitive operations themselves. In doing so, we can harness potential benefits of AI support without impairing the natural development of the brain's writing-related networks.

"""It would be important to explore hybrid strategies in which AI handles routine aspects of writing composition, while core cognitive processes, idea generation, organization, and critical revision, remain user‑driven. During the early learning phases, full neural engagement seems to be essential for developing robust writing networks; by contrast, in later practice phases, selective AI support could reduce extraneous cognitive load and thereby enhance efficiency without undermining those established networks."""

seanmcdirmid 2 days ago

My handwriting has suffered since I’ve relied heavily on keyboards for the last few decades. I can’t even produce a consistent signature anymore. My stick-shift skills also suffered after I used an automatic for so long (and now that I have an EV, I’m forgetting what gears are at all).

Rather than lament that the machine has gotten better than us at producing what were always mostly vacuous essays anyway, we have to look at more pointed writing tasks and practice those instead. Actually, I never really learned how to write until I hit grad school and had messages I actually wanted to communicate. Whatever I was doing before really wasn’t that helpful; it was missing focus. Having ChatGPT write an essay I don’t really care about only seems slightly worse than writing it myself.

endymion-light 19 hours ago

Honestly, my general feeling is that LLMs cure very man-made issues.

They're brilliant at what I always feel is entangled communication and bureaucratic maintenance. Like someone mentioned further down, they work great at Concept Processing.

But it feels like a solution to the oversaturation of stupid SEO, terrible Google search, and the overall rise of massive documents written for the sake of writing.

I've actually found myself beginning to use LLMs less as a personal assistant and more to find the core, useful sources of information buried under terrible SEO.

Frummy 2 days ago

I can’t believe riding a horse and carriage wouldn’t make you better at riding a horse. Sure, a horse rider wouldn’t want to practice the wrong way, but anyone else just wants to get somewhere.

  • OhNotAPaper 2 days ago

    > I can’t believe riding a horse and carriage wouldn’t make you better at riding a horse.

    Surely you mean "would"? Because riding a horse and carriage doesn't imply any ability at riding a horse, but the reverse relation would actually make sense, as you already have historical, experiential, intimate knowledge of a horse despite no contemporaneous, immediate physical contact.

    Similarly, already knowing what you want to write would make you more proficient at operating a chatbot to produce what you want to write faster—but telling a chatbot a vague sense of the meaning you want to communicate wouldn't make you better at communicating. How would you communicate with the chatbot what you want if you never developed the ability to articulate what you want by learning to write?

    EDIT: I sort of understand what you might be getting at—you can learn to write by using a chatbot if you mimic the chatbot like the chatbot mimics humans—but I'd still prefer humans learn directly from humans rather than rephrased by some corporate middle-man with unknown quality and zero liability.

    • Frummy a day ago

      From the thread: yes, it's sarcasm. Here's some clarification as well: https://news.ycombinator.com/item?id=44291314

      Yes, I'm acknowledging a lack of skill transfer, but also that there are new ways of working, and so I sarcastically imply that the article can't see the forest for the trees, missing the big picture. A horse and carriage is very useful for lots of things; a horse is more specialised. I'm getting at the analogy of a technological generalisation and expansion, while logistics is not part of my argument. If you want to write a very good essay, and you're good at that, then do it manually. If you want to create scalable workflows and have 5 layers of agents interacting with each other collaboratively and adversarially, scouring the internet and news sites and forums to then send investment suggestions to your mail every lunch, then that's a scale that's not possible with pen and paper, and so prompting has an expanded cause-and-effect cone.

    • wcoenen 2 days ago

      The first sentence in the comment you are responding to is sarcasm. Just replace "I can't believe" with "Of course".

      • OhNotAPaper 2 days ago

        > The first sentence in the comment you are responding to is sarcasm. Just replace "I can't believe" with "Of course".

        Do you have any evidence of this?

        • wcoenen 2 days ago

          No, because of Poe's law only the author of the comment can confirm. But the analogy makes sense then:

          "[Of course] writing an essay with chatgpt wouldn’t make you better at writing essays unassisted. Sure, a student wouldn’t want to practice the wrong way, but anyone else just wants to produce a good essay."

        • christophilus 2 days ago

          It’s fairly obvious from the context.

  • apsurd 2 days ago

    I didn't read the article, but come on, riding a horse to get to a destination is not remotely similar to writing an essay.

    If you say it's a means to an end - to what, a good grade? - we've lost the plot long ago.

    Writing is for thinking.

    • Frummy a day ago

      I'm making an analogy as to the type of skill it is, so yes, a means to an end. I don't mean an apathetic student jumping through bureaucratic educational hoops and requirements, but rather a self-driven person wanting to get something done.

      What I'm saying is that, yes, writing essays is one skill, and if your goal is to write essays then obviously not doing it entirely yourself will make you worse than otherwise. But I'm expanding a bit beyond the paper's point that the brain won't grow for this specific skill, because it's actually a different skill.

      Thinking can be done in lots of ways, such as when having a conversation, and what I think the skill is, is steering and creating structures to orchestrate AIs into automated workflows, which is a new way of working. So with a new technology you can't expect a transfer to the way you worked with old technologies; rather, you have to figure out the better new way to use it, and the brain will grow for this specific new way of working. One could then analyse, depending on one's goal, whether it's a tool worth using, in the sense that cause leads to effect, or whether you would be better off for your specific goal ignoring the new technology and doing it the usual way.

  • adeon 2 days ago

    The task of riding a horse can be almost entirely outsourced to professional horse riders. If they take your carriage from point A to point B, sure, you care about just getting somewhere.

    Taking the article's task of essay writing: someone presumably is supposed to read them. It's not a carriage task from point A to point B anymore. If the LLM-assisted writers begin to not even understand their own work (quoting from abstract "LLM users also struggled to accurately quote their own work.") how do they know they are not putting out nonsense?

    • eru 2 days ago

      > If the LLM-assisted writers begin to not even understand their own work (quoting from abstract "LLM users also struggled to accurately quote their own work.") how do they know they are not putting out nonsense?

      They are trained (amongst other things) on human essays. They just need to mimic them well enough to pass the class.

      > Taking the article's task of essay writing: someone presumably is supposed to read them.

      Soon enough, that someone is gonna be another LLM more often than not.

  • bakugo 2 days ago

    You know the AI-induced cognitive decline is already well under way when people start comparing writing an essay to riding a horse.

  • namaria 2 days ago

    Horse riding was invented much later than carriages, and it revolutionized warfare.

    • gnabgib 2 days ago

      Can you point at some references? Horse riding started around 3500 BC[0], while horse carriages started around 100BC [1], oxen/buffalo drawn devices around 3000 BC[1].

      [0]: https://en.wikipedia.org/wiki/Equestrianism

      [1]: https://en.wikipedia.org/wiki/Carriage

      • namaria 2 days ago

        From the article [0] you linked:

        "However, the most unequivocal early archaeological evidence of equines put to working use was of horses being driven. Chariot burials about 2500 BC present the most direct hard evidence of horses used as working animals. In ancient times chariot warfare was followed by the use of war horses as light and heavy cavalry."

        Long discussion in History Exchange about dating the cave paintings mentioned in the wikipedia article above:

        https://history.stackexchange.com/questions/68935/when-did-h...

        • gnabgib 2 days ago

          Well, exactly: a millennium after being ridden (3500 BC) they were used as beasts of burden (2500 BC), rather the opposite of your claim.

          • namaria 2 days ago

            The 3500 BCE date for horse riding is speculative and poorly supported by evidence. I thought the language in the bit I pasted made that clear. "Horses being driven" means attached to chariots, not ridden.

            Unless you want to date the industrial revolution to 30 BCE when Vitruvius described the aeolipile, we should talk about the evidence of these technologies' impact on society. For chariots that would be 1700 BCE, and for horseback riding well into the Iron Age, ~1000 BCE.

      • eesmith 2 days ago

        I think you are reading "carriage" too specifically, when I suspect it's meant as a wider term for any horse-drawn wheeled vehicle.

        Your [0] says "Chariot burials about 2500 BC present the most direct hard evidence of horses used as working animals. In ancient times chariot warfare was followed by the use of war horses as light and heavy cavalry.", just after "the most unequivocal early archaeological evidence of equines put to working use was of horses being driven."

        That suggests the evidence is stronger for cart use before riding.

        If you follow your [1] link to "bullock cart" at https://en.wikipedia.org/wiki/Bullock_cart you'll see: "The first indications of the use of a wagon (cart tracks, incisions, model wheels) are dated to around 4400 BC[citation needed]. The oldest wooden wheels usable for transport were found in southern Russia and dated to 3325 ± 125 BC.[1]"

        That is older than 3000 BC.

        I tried but failed to find something more definite. I did learn from "Wheeled Vehicles and Their Development in Ancient Egypt – Technical Innovations and Their (Non-) Acceptance in Pharaonic Times" (2021) that:

        > The earliest depiction of a rider on horseback in Egypt belongs to the reign of Thutmose III.80 Therefore, in ancient Egypt the horse is attested for pulling chariots81 before it was used as a riding animal, which is only rarely shown throughout Pharaonic times.

        I also found "The prehistoric origins of the domestic horse and horseback riding" (2023) referring to this as the "cart before the horse" vs. "horse before the cart" debate, with the position that there's "strong support for the “horse before the cart” view by finding diagnostic traits associated with habitual horseback riding in human skeletons that considerably pre-date the earliest wheeled vehicles pulled by horses." https://journals.openedition.org/bmsap/11881

        On the other hand, "Tracing horseback riding and transport in the human skeleton" (2024) points out "the methodological hurdles and analytical risks of using this approach in the absence of valid comparative datasets", and also mentions how "the expansion of biomolecular tools over the past two decades has undercut many of the core assumptions of the kurgan hypothesis and has destabilized consensus belief in the Botai model." https://www.science.org/doi/pdf/10.1126/sciadv.ado9774

        Quite a fascinating topic. It's no wonder that Wikipedia can't give a definite answer!

        • 542354234235 a day ago

          Now I am more interested in prehistoric horse domestication than the AI essay writing.

smartmic 20 hours ago

> As the educational impact of LLM use only begins to settle with the general population, in this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study.

Fast forward 500 years (about 20 generations), and the dumbing down of the population has advanced so much that films like "Idiocracy" should no longer be described as science fiction but as reality shows. If anyone can still read history books at that point, the pre-LLM era will seem like an intellectual paradise by comparison.

mmaunder 2 days ago

No one only uses an LLM for writing. We switch tools as needed to pull threads as they emerge. It’s like being told to explore a building without leaving a specific room.

user453 2 days ago

Interesting study but I don't really get the point of the search group. Looking at the essay prompts, they all seem like fluffy, opinion based stuff. How would you even use a search engine to help you in that case? Quote some guy who had an opinion? Personally I think my approach would be identical whether put in the web-search or the only-brain group.

  • nk_mit a day ago

    The Search Engine is a tool, similar to the one we have now, the LLM. It seemed unfair to compare a purely no-tools approach (Brain-only) with a tool (LLM); hence the first motivation for including it. The second is that we had already seen several studies exploring the Search Engine and its effects on one's brain, which allows us to ground the research a bit and have a solid base. Finally, I think you have just answered your own question in your own statement: indeed, to get a user exposed to other opinions. Echo chambers are present in both cases, but it is also important to understand what the training dataset for ChatGPT was and what the current trend in Google Keyword Planner is (see the example on homelessness and giving in the Discussion of the paper). Hope it is clearer now.

sachin_rcz 2 days ago

Would the cognitive debt from AI-assisted coding be on the higher side compared to the essay-writing task? We can all see the effect on junior developers, but what about senior devs?

disintegrator 2 days ago

It's somewhat disappointing to see a bunch of "well, duh" comments here. We're often asking for research and citations and this seems like a useful entry in the corpus of "effects of AI usage on cognition".

On the topic itself, I am very cautious about my use of LLMs. It breaks down into three categories for me: 1. replacing Google, 2. getting a first review of my work, and 3. taking away mundane tasks around code editing.

Point 3 is where I can become most complacent and increasingly miscategorize tasks as mundane. I often reflect after a day working with an LLM on coding tasks because I want to understand how my behavior is changing in its presence. However, I do not have a proper framework to work out "did I get better because of it or not".

I still believe we need to get better as professionals and it worries me that even this virtue is called into question nowadays. Research like this will be helpful to me personally.

paradite 2 days ago

The results are not surprising, but it's good to have these findings formalized as publications, so that we (or LLMs) can refer to them as ground truth in the future.

kelvinjps10 a day ago

What about LLMs for grammar correction? English is my second language, so I find them useful for that.

bsenftner 2 days ago

Well, duh. Writing is thinking, ordered, and thinking in your mind is not ordered unless one has specific training that organizes and orders their thinking - and even then it requires effort to maintain an organized perception. That is why we write: writing is our thoughts organized and frozen in an order that will remain in order when related; without writing as the communications foundation, the ideas/concepts would drift. Using an LLM to write is using an LLM to think for you, and unless you then double your work by validating what was written, you are just adding work that relegates your mind to a janitor cleaning up after the LLM.

It is absolutely possible to use LLMs when writing essays, but do not use them to write! Use them to critique what you yourself, with your own mind, wrote!

  • 542354234235 a day ago

    Validating what is written is just confirming facts and figures and making sure it is logical. It is not the same as synthesizing the original data, in terms of your level of understanding. If you need something to submit, an AI essay will do. But if you want to understand something, you really need to write it yourself.

    • bsenftner 17 hours ago

      > Validating what is written is just confirming facts

      You wrote it, not the AI. My entire point here is not to have the AI write, ever. Have it critique, have it Socratically draw you to make the decisions to axe sections, rewrite them, and so on - and then you do that, personally, using your own mind.

cleandreams 2 days ago

A paper to make the teachers I know weep.

satisfice 2 days ago

I am just finishing a book that took about two years to write. I thought I would be done a year ago. It’s been a slog.

So now I am in the final editing stage, and I am going back over old writing that I don’t remember doing. The material has come together over many many drafts, and parts of it are still not quite consistent with other parts.

But when I am done, it will be mine. And any mistakes will be honest ones that represent the real me. That’s a feeling no one who uses AI assistance will ever have.

I have never and will never use AI to write anything for me.

ninetyninenine 2 days ago

The next generation of programmers will be stupider than the current generation, thanks to LLMs. That means ageism will become less and less prevalent.

"Look at that old timer! He can code without AI! That's insane!"

tguvot 2 days ago

Now, let's do the same exercise, but with programming and over a longer period of time.

I would really like to present it to management that pushes AI assistance for coding.

  • throwawaygmbno 2 days ago

    This opinion is the exact thinking that has led to the massive layoffs in the design industry. Their jobs are being destroyed because they think lawsuits and the current state of the art will show they are right. These models actually can't produce unique output, and if you use them for ideation they only help you get to already-solved problems.

    But engineers aren't being fired in droves, because we have adapted. The human can still break down the problem, tell the LLM to come up with multiple different ways of solving it, throw all of them away, and ask for more. My most effective use is usually looking at what I would do normally, breaking it down, and then asking for it in chunks that make sense and touch multiple places, then coding the details. It's just a shift in thinking, like knowing when to copy and paste versus staying DRY.

    Designers are screwing themselves right now waiting for case law, instead of using their talents to make one unique thing not in the training set to boost their productivity, and shaming the tools that would let them do that.

    It will be a competitive advantage in the future over short-sighted companies that took humans out of the loop completely, but any company not using the tech at all will be like horseshoe makers unworried because of all the mechanical issues with horseless carriages.

  • OhNotAPaper 2 days ago

    > AI assistance for coding

    I honestly think it's gonna take a decade to define this domain, and it's going to come with significant productivity costs. We need a git, but for preventing LLMs from stabbing themselves in the face. At that point you can establish an actual workflow for unfucking agents when they inevitably fuck themselves. After some time and some battery of testing you can also automate this process. This will take time, but eventually, one day, you can have a tedious process of describing an application you want to use over and over again until it actually works... on some level, not guaranteed to be anything close to the quality of hand-crafted apps (which is in line with the transition from assembly to high-level, and now to whatever the fuck you want to call the katamari-damacy zombie that is the browser).
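
    A checkpoint-test-revert loop is probably the crudest version of that "git for agents"; a rough sketch under stated assumptions (run_agent_step is a hypothetical placeholder, and pytest stands in for whatever quality gate you trust):

        import subprocess

        def sh(*args):
            """Run a command, raising if it fails."""
            subprocess.run(args, check=True)

        def tests_pass():
            # Hypothetical quality gate; swap in any test battery.
            return subprocess.run(["pytest", "-q"]).returncode == 0

        def run_agent_step(task):
            """Placeholder: one LLM-agent edit of the working tree."""
            raise NotImplementedError

        def agent_loop(task, max_steps=10):
            sh("git", "commit", "--allow-empty", "-am", "checkpoint: start")
            for _ in range(max_steps):
                run_agent_step(task)
                if tests_pass():
                    sh("git", "add", "-A")
                    sh("git", "commit", "-m", "checkpoint: step ok")
                else:
                    # Discard everything since the last good checkpoint,
                    # including files the agent newly created.
                    sh("git", "reset", "--hard", "HEAD")
                    sh("git", "clean", "-fd")

    The tedious part is the outer loop of re-describing the application; the revert machinery itself is old, boring tech.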

  • eru 2 days ago

    > Would really like to present it to management that pushes AI assistance for coding

    Your management presumably cares more about results than about your long-term cognitive decline?

    • ezst 2 days ago

      Good of you to suppose that engineers' cognitive decline doesn't translate into long-term, impactful business challenges as well. I mean, once you truly don't know your product and its capabilities any longer, what's left for you to "sell"?

      • eru 2 days ago

        To quote myself:

        > Companies don't own employees: workers can leave at any time.

        > Thus protecting employees' productivity in the long run doesn't necessarily help the company. (Unless you explicitly set up contracts that help there, or there are strong social norms in your place around this.)

        • ezst 2 days ago

          You are talking about productivity; I'm talking about knowledge. You may come up with a product, then fire all the engineers who built it. Then what? It's not sustainable for a business to start from scratch every other year. Your LLM won't be a substitute for owning your product.

          • eru 2 days ago

            Your workers can still quit, and take their knowledge with them.

    • tguvot 2 days ago

      I guess one of the questions is how quickly cognitive decline sets in and how it influences system stability (we have a big system with a very high SLA due to the nature of the system, and it takes some serious cognitive ability to reason about its operation).

      If today's productivity is bought at the cost of longer-term stability, I am not sure that's a risk they would like to take.

      • eru 2 days ago

        Companies don't own employees: workers can leave at any time.

        Thus protecting employees' productivity in the long run doesn't necessarily help the company. (Unless you explicitly set up contracts that help there, or there are strong social norms in your place around this.)

        • tguvot 2 days ago

          I am not talking about productivity. I am talking about quality and knowledge.

          • eru 2 days ago

            Your workers can still quit, and take their knowledge with them.

            • yifanl 2 days ago

              You can put effort into making workers not want to quit.

            • tguvot a day ago

              Not if, because of AI, they have no knowledge.

  • raincole 2 days ago

    Your management probably believes there will be no "longer period" of programming as a career option.

  • AnimalMuppet 2 days ago

    If by "cognitive debt", you mean "you don't really understand the code of the application that we're trying to extend/maintain", then yes, it's almost certainly going to apply to programming.

    If I write the application, I have an internal map that corresponds (more or less) to what's going on in the code. I built that map as I was writing it, and I use that map as I debug, maintain, and extend the application.

    But if I use AI, I have a much less clear map. I become dependent on the AI to help me understand the code well enough to debug it. Given AI's current limitations in actual understanding, that should give you pause...

    • tguvot a day ago

      I think the more far-reaching consequence is that "accumulation of cognitive debt" essentially leads to diminished cognitive capabilities, as you lose the ability to understand things, analyze, and reason.

  • devjab 2 days ago

    I don't think that research will show what you're hoping it would. I'm not a big proponent of AI; you shouldn't bother going through my history, but it is there to back up my statement if you're bored. Anyway, even I find it hard to argue against AI agents for productivity, though I think it depends a lot on how you use them. As an anecdotal example, I mainly work with Python, C and Go, but once in a while I also work with TypeScript and C#. I've got 15 years of experience with js/ts, but when I've been away from it for a month it's not easy for me to remember the syntax, and before AI agents I'd need to go to https://developer.mozilla.org/en-US/docs/Web/JavaScript or similar quite a lot when I jumped back into it. AI agents let me do the same thing so much quicker.

    These AI agent tools can turn your intent into code rather quickly, and, at least for me, quicker than I often can. They do it rather unintrusively, with little effort on your part, and they present it with nice little pull-request-lite functionality.

    The key "issue" here, and probably what this article is more about is that they can't reason as you likely know. The AI needs me to know what "we" are doing, because while they are good programmers they are horrible software engineers. Or in other words, the reason AI agents enhance my performance is because I know exactly what and how I want them to program something and I can quickly assess when they suck.

    Python is a good language for examples of how they can quickly fail you if you don't actually know Python. When you want to iterate over something, you have to decide whether you want to do it in memory or not. In C#'s LINQ this choice is presented to you relatively gently with IEnumerable and IQueryable, which work and look the same. In Python, however, you're often going to want a generator, which looks nothing like simply looping over a list. It's also something many Python programmers have never even heard of, similar to how many haven't heard of __slots__ or even dataclasses. If you don't know what you're doing, you'll quickly end up with Python that works but doesn't scale, and when I say scale I'm not talking Netflix, I'm talking looping over a couple of hundred thousand items without paying a ridiculous amount of money for cloud memory.

    This is very anecdotal, but I've found that LLMs are actually quite good at recognizing how to iterate in C# and quite terrible in both Python and TypeScript, despite LLMs generally (again in my experience) being much worse at writing C#. If that isn't just anecdotal, then I guess they truly are what they eat.
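
    To make the generator point concrete, here is a minimal sketch (toy function names and numbers, purely illustrative, not from any real codebase). Both functions produce the same values, but the list version holds them all in memory at once while the generator yields them one at a time:

        import sys

        def squares_list(n):
            # Materialises every value up front: memory grows linearly with n.
            return [i * i for i in range(n)]

        def squares_gen(n):
            # Yields one value at a time: peak memory stays roughly constant.
            for i in range(n):
                yield i * i

        n = 200_000
        print(sys.getsizeof(squares_list(n)))  # roughly 1.6 MB for the list object alone, before counting the ints
        print(sys.getsizeof(squares_gen(n)))   # a couple of hundred bytes
        print(sum(squares_gen(n)))             # same result, computed lazily

    Both look almost identical at the call site, which is exactly why code that "works" in a demo can quietly blow up memory at a few hundred thousand items.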

    Anyway, I think similar research would show that AI is great for experienced software engineers and terrible for learning. What is worse is that I think it might show that a domain expert like an accountant might be better at building software for their domain with AI than an inexperienced software engineer.

    • darkstar_16 2 days ago

      You're proving the point of the actual research. Programmers who only use AI for learning/coding will lose the knowledge (of Python, for example) that you gained by actually "doing" it.

      • devjab 2 days ago

        I thought I pretty clearly stated that I was already losing that knowledge long before AI. I guess time will tell if I will lose even more with agents, but I frankly doubt that is possible.

      • tguvot 2 days ago

        I'll add this quote from the article:

        Perhaps one of the more concerning findings is that participants in the LLM-to-Brain group repeatedly focused on a narrower set of ideas, as evidenced by n-gram analysis (see topics COURAGE, FORETHOUGHT, and PERFECT in Figures 82, 83, and 85, respectively) and supported by interview responses. This repetition suggests that many participants may not have engaged deeply with the topics or critically examined the material provided by the LLM.

        When individuals fail to critically engage with a subject, their writing might become biased and superficial. This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.

        Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.

    • tguvot 2 days ago

      The point of the article is that people who use AI to accomplish work experience measurable cognitive decline compared to those who don't.

  • ivape 2 days ago

    Why not try it for social media? There’s got to be the world’s largest class action lawsuit if we can get some science behind what that industry has done.

    • OhNotAPaper 2 days ago

      > There’s got to be the world’s largest class action lawsuit

      You'd have to articulate harm, so this is basically dead in the water (in the US). Good luck.

wcfrobert 2 days ago

The results are obviously predictable, but it's nice that the authors took the time to prove, with the rigors of science, a thing everyone already knows to be true.

I wonder how the participants felt writing an essay while being hooked up to an EEG.

namaria 2 days ago

[flagged]

  • tomhow 2 days ago

    Please don't do this here. If a comment seems unfit for HN, please flag it and email us at hn@ycombinator.com so we can have a look.

    We detached this comment from https://news.ycombinator.com/item?id=44287157 and marked it off topic.

    • namaria 2 days ago

      I did not say it was unfit and I don't see how discussing writing styles and the influence of LLMs on it is off topic on a thread about the effects of LLMs on cognition.

      I don't believe I was impolite or making a personal attack. I had a relevant point and I made it clearly and in a civil manner. I strongly disagree with your assessment.

  • stephen_g 2 days ago

    Really? You claim that praising an analogy would never happen in normal conversation before 2022? Seems fairly normal to potentially start with "that's a good way of putting it, but [...]" since forever...

    • namaria 2 days ago

      I claim specifically that "I love this analogy" and "I love your analogy" have become noticeably more common in HN since 2022.