15 comments

  • thepasch 4 hours ago
    This paper introduces a term and instantly defines it as a definitely biased thing that is definitely happening, then spends its entirety arguing against the strawman it built itself. Not a single sentence is spent actually engaging with the idea or any of its points (other than the “partial similarities” paragraph on page... I just realized the pages aren’t even numbered).

    In general, the terms “LLM-like” and “human-like” are used all over the place, and in contrast with each other, but they’re never actually defined. It all just seems more vibes-based than anything else.

    And “treating the human cognitive process like it’s similar to the LLM cognitive process might lead to a society where epistemics turns into a discipline where plausibility is an acceptable substitute for empiricism” has got to be one of the most ridiculous notions I’ve ever read in a paper (ctrl+F “fifth pathway is epistemic” for the exact quote).

    • therealpygon 2 hours ago
      It’s certainly a paper, that’s factual. To make sure I understand the argument:

      - Scientists create software inspired by how the brain works.
      - People realize it’s not all that far off.
      - Many papers show this; lots of research goes into making AI even more like brains.

      Paper’s conclusion: “People stupid, this bad. All made up.”

      Reading this feels like meeting someone who likes to hear themselves talk.

  • fhars 3 hours ago
    Before electric computers, the human mind was a steam engine: https://www.ezrabrand.com/p/releasing-the-pressure-a-dive-in...
    • mapontosevenths 52 minutes ago
      I would argue that both are correct, because as McLuhan pointed out, the things we build come to change the way we perceive the world.

      "We become what we behold. We shape our tools, and thereafter our tools shape us." -- Father John Culkin on McLuhan

      That said, LLMs were modeled on the human brain, so the entire idea that we shouldn't compare ourselves to them is daft. They are similar to us because that is exactly what they were designed to be.

    • DoctorOetker 1 hour ago
      A lot of control theory goes into a steam train locomotive. You don't say it explicitly, but the way people typically make this reference, they evoke the image of a simple steady-state steam engine and insinuate that people were comparing brains to whatever was the novelty. I don't think that was ever the case, and comparing the brain with the intricate feedback systems, regulators, and other control-theory facets of an actual steam train locomotive is far more apt than people make it out to be, as if people were comparing it to a simple steam engine proper (i.e. not a locomotive).

      Also, if you look at lifeforms with and without brains, and lifeforms that do or don't do locomotion, there is a clear correlation: mammals, birds, reptiles, spiders, insects, ... have brains and are motile, while plants, fungi, ... don't have brains and aren't significantly motile.

      The moment you need to move (not just grow in this or that direction) you need a lot of things: muscle control, inverse kinematics, interpretation of the environment, speedy reactions, routing, planning, memory, ...

    • JKCalhoun 40 minutes ago
      Or clockwork.

      Regardless of the degree to which the human mind works like an LLM, my reductionist tendency has always imagined that the human mind will be found to be built from simple enough principles (but at scale, of course). In that regard, LLM as model for the human brain (or at least one aspect of it) is attractive to me. I admit it.

    • AnthonBerg 1 hour ago
      It's interesting that it's easier to construct the argument† that a mind like an LLM would have an easier time capturing mind as steam engine than a mind like a steam engine would have capturing mind as LLM.

      †: come up with each token after the other that induces a graspable interpretation of a sequence of tokens representing a potential judgement

  • bluejay2387 54 minutes ago
    A more insidious related pathology: marriage-induced projected LLMorphism, where your wife constantly accuses you of having the personality of a large language model.
  • dr_dshiv 4 hours ago
    I teach students to use their own imagination like generative AI. Prompting works. They just need a bit of practice.
    • incognito124 1 hour ago
      That's actually a really interesting thing to think about, learning how to "prompt" oneself.
  • daishi55 51 minutes ago
    I certainly analogize my behaviors to LLMs. How I learn, how I think - I see it reflected in the LLMs I use every day.
  • Alifatisk 4 hours ago
    > When artificial systems produce human-like language, people may draw a reverse inference: if LLMs can speak like humans, perhaps humans think like LLMs.

    I think I experienced this when I learned about LLMs, chain of thought, thinking tokens, short-term memory context, and long-term memory context. I began applying these concepts to real life and reasoning about how our brains work as if these concepts described how our brains actually function. But maybe this is more akin to the Tetris effect?

    • wizzwizz4 2 hours ago
      People have been doing this since the invention of clockwork. Analogies are useful, even when they're utterly wrong, since they provide a perspective and that perspective is not necessarily wrong. Who knew?
  • vachina 4 hours ago
    I mimic how LLM responds when I talk to my boss lol. Appear useful and present verbose facts. Works pretty well so far.
    • loadingcmd 2 hours ago
      My boss has started to verbalize like an LLM lol. I can tell it is not intentional; I think being exposed to certain patterns repeatedly is causing some form of imprinting.

      Kids are more susceptible to unknowingly imprinting in their formative years. I wonder if a generation will grow up communicating like an LLM?

      • flux3125 1 hour ago
        You're absolutely right!
  • artninja1988 4 hours ago
    I think it's meaningless anyway. A calculator doesn't multiply numbers like a human does. The important part is to develop systems that can do many human tasks
    • mhalle 3 hours ago
      Early LLMs typically tried to do multiplication "in their head" by recall.

      Now most LLMs do multiplication using a tool call to a programming language, akin to a person reaching for a calculator rather than relying on a learned table or working the problem out mentally.

      The high-level comparison between what LLMs do and what humans do for this example is fairly parallel.
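      The "reach for a calculator" pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API; the tool name `multiply` and the dispatch function are hypothetical. The point is that the model emits a structured call rather than recalling the product, and ordinary code computes the exact answer.

```python
# Hypothetical sketch of LLM tool-call dispatch for arithmetic.
# Instead of "remembering" the product, the model emits a structured
# tool call; the runtime executes it and returns the exact result.

def run_tool_call(call: dict) -> str:
    """Dispatch a model-emitted tool call to real code."""
    if call["name"] == "multiply":
        a = call["arguments"]["a"]
        b = call["arguments"]["b"]
        return str(a * b)  # exact arithmetic, no recall involved
    raise ValueError(f"unknown tool: {call['name']}")

# What a model might emit when asked "what is 48271 * 3917?"
emitted = {"name": "multiply", "arguments": {"a": 48271, "b": 3917}}
print(run_tool_call(emitted))  # → 189077507
```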

      • mikrl 3 hours ago
        How long until LLMs are prompting LLMs to write a response to their user query?

        Actually, this happens already in a modular way AFAIK…

        • SuperV1234 2 hours ago
          This already happens with, e.g., Claude Code spawning parallel agents and then collating their results.
  • Den_VR 5 hours ago
    > are [we] beginning to attribute too little mind to humans.

    I don’t think this way of thinking started with LLMs. Does systems-based thinking also attribute too little mind to humans?

    • iugtmkbdfil834 5 hours ago
      Agreed. I think we, as humans, like to think in terms of various metaphors when it comes to how we perceive ourselves in the world ( for example, "I am not some sort of automaton/robot" when objecting to some boss way back when ).
  • j16sdiz 1 hour ago
    I looked up other paper from the same author.

    Looks like he mostly publishes work about "social behavior".

    This "paper", IMO, is just saying "Hey, I notice this is happening. This is why it could be interesting for social science researchers", without any real research or results.

  • MichaelRo 4 hours ago
    Nothing new under the sun. When clocks and precision mechanics started in the 17th century, there was a tendency to view humans as "machines". Computers came, suddenly human brains are "computers". Now we're LLMs.

    If scientists make green jelly that emits thoughtful judgements, humans will be compared to green jelly.

    • Legend2440 1 hour ago
      None of these analogies are entirely wrong, they're just incomplete.

      Humans are similar to machines, for example, in that our bodies convert energy to do work through a series of pumps and pipes and sensors and actuators. Life is not animated by some magic force; it operates under the same physical laws that machines use to function.

    • cindyllm 4 hours ago
      [dead]
  • ineedasername 1 hour ago
    The author lightly touches on other ways humans have viewed cognition, “computationalism” being one, but somewhat brushes these aside as though LLMs are a uniquely potent expression of this tendency. That seems unlikely to me, and we’re too early into the tech to be treating every initial hot take on “AI is Doing $Thing” as a conclusion.

    Especially when this particular thing is just one in a very long line of metaphors humans have made for our own minds’ operations every time a new major technology comes to play a pervasive role in society. Computers, steam engines, even aqueducts were not immune to comparisons of thought flowing like water, funneled by deliberate intent, etc. And for some, there was a certain amount of hand-wringing or even moral panic about “what it’s doing to us”, e.g. taking away critical thinking because “OMG calculators!”

  • stavros 4 hours ago
    We don't know for sure that humans work like LLMs, but do we know for sure that they don't?
    • unleaded 1 hour ago
      The idea that humans could "work like" LLMs (or vice versa) is very vague and can be stretched to say pretty much anything. It's a pointless question IMO. I don't think I do, but maybe I really do on the inside and my consciousness makes me think I don't! We don't know.
  • TMWNN 5 hours ago
    Highly relevant: Reading Doesn't Fill a Database, It Trains Your Internal LLM <https://tidbits.com/2026/02/28/reading-doesnt-fill-a-databas...>
  • Der_Einzige 1 hour ago
    No template. No figures. No attempt. This shouldn't be on Arxiv. Vixra was created for such low effort content.