This paper introduces a term and instantly defines it as a definitely biased thing that is definitely happening, then spends its entirety arguing against the strawman it built itself. Not a single sentence is spent actually engaging with the idea or any of its points (other than the “partial similarities” paragraph; I’d give a page number, but I just realized the pages aren’t even numbered).
In general, the terms “LLM-like” and “human-like” are used all over the place, and in contrast with each other, but they’re never actually defined. It all just seems more vibes-based than anything else.
And “treating the human cognitive process like it’s similar to the LLM cognitive process might lead to a society where epistemics turns into a discipline where plausibility is an acceptable substitute for empiricism” has got to be one of the most ridiculous notions I’ve ever read in a paper (ctrl+F “fifth pathway is epistemic” for the exact quote).
It’s certainly a paper; that’s factual. To make sure I understand the argument:
- Scientists create software inspired by how the brain works.
- People realize it’s not all that far off.
- Many papers showing this, lots of research to make AI even more like brains.
Paper’s conclusion: “People stupid, this bad. All made up.”
Reading this feels like meeting someone who likes to hear themselves talk.
I would argue that both are correct because, as McLuhan pointed out, the things we build come to change the way we perceive the world.
"We become what we behold. We shape our tools, and thereafter our tools shape us." -- Father John Culkin on McLuhan
That said, LLMs were modeled on the human brain, so the entire idea that we shouldn't compare ourselves to them is daft. They are similar to us because that is exactly what they were designed to be.
A lot of control theory goes into a steam locomotive. People who make this reference don't usually say it explicitly, but they evoke the image of a simple steady-state steam engine and insinuate that people compared brains to whatever the novelty of the day was. I don't think that was ever the case: comparing the brain with the intricate feedback systems, regulators, and other control-theory facets of an actual steam locomotive is a lot more apt than people make it out to be, as if the comparison had been to a simple steam engine proper (i.e. not a locomotive).
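To make the feedback point concrete, here's a toy sketch of a flyball governor as a proportional control loop; all the constants and the speed/valve model are invented purely for illustration:

```python
# Toy flyball governor: the classic steam-engine regulator. It opens the
# steam valve in proportion to how far speed has dropped below the setpoint.
SETPOINT = 100.0  # target speed, rpm (made-up number)
GAIN = 0.05       # valve opening per rpm of error (made-up number)

def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

speed = 20.0
for _ in range(500):
    valve = clamp(GAIN * (SETPOINT - speed))  # governor senses speed, sets valve
    speed += 2.0 * valve - 0.02 * speed       # steam torque minus load/friction

print(f"settles near {speed:.1f} rpm")  # ~83.3 rpm
```

The settling point below the setpoint is the classic steady-state droop of purely proportional control, which is exactly the kind of non-trivial behavior the locomotive comparison brings along.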
Also, if you look at lifeforms with and without brains, and lifeforms that do or don't do locomotion, there is a clear pattern: mammals, birds, reptiles, spiders, insects, ... have brains and are motile, whereas plants, fungi, ... don't have brains and aren't significantly motile.
The moment you need to move (not just grow in this or that direction) you need a lot of things: muscle control, inverse kinematics, interpretation of the environment, speedy reactions, routing, planning, memory, ...
Regardless of the degree to which the human mind works like an LLM, my reductionist tendency has always imagined that the human mind will turn out to be built from simple enough principles (but at scale, of course). In that regard, the LLM as a model for the human brain (or at least one aspect of it) is attractive to me. I admit it.
It's interesting that it's easier to construct the argument† that a mind like an LLM would have an easier time capturing "mind as steam engine" than a mind like a steam engine would have capturing "mind as LLM".
†: coming up with one token after another such that the sequence induces a graspable interpretation of a potential judgement
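As a toy illustration of †, here is autoregressive generation boiled down to a bigram table; the table and tokens are made up, and a real LLM conditions on the whole preceding sequence rather than just the last token:

```python
# Made-up bigram table: each token maps to candidate successors with weights.
BIGRAMS = {
    "<s>":      {"minds": 0.6, "engines": 0.4},
    "minds":    {"resemble": 1.0},
    "resemble": {"engines": 0.7, "minds": 0.3},
    "engines":  {"</s>": 1.0},
}

def generate(start="<s>", max_tokens=10):
    tokens, current = [], start
    for _ in range(max_tokens):
        # Greedy decoding: pick the most probable successor of the current token.
        current = max(BIGRAMS[current], key=BIGRAMS[current].get)
        if current == "</s>":
            break
        tokens.append(current)
    return " ".join(tokens)

print(generate())  # -> "minds resemble engines"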
A more insidious related pathology: marriage-induced projected LLMorphism, where your wife constantly accuses you of having the personality of a large language model.
> When artificial systems produce human-like language, people may draw a reverse inference: if LLMs can speak like humans, perhaps humans think like LLMs.
I think I experienced this when I learned about LLMs, chain of thought, thinking tokens, short-term memory context, and long-term memory context. I began applying these concepts to real life and reasoning about how our brains work as if these concepts described how our brains actually function. But maybe this is more akin to the Tetris effect?
People have been doing this since the invention of clockwork. Analogies are useful, even when they're utterly wrong, since they provide a perspective and that perspective is not necessarily wrong. Who knew?
My boss has started to verbalize like an LLM, lol. I can tell it is not intentional; I think getting exposed to certain patterns repeatedly is causing some form of imprinting.
Kids are more susceptible to unknowingly imprinting in their formative years. I wonder if a generation will grow up communicating like an LLM?
I think it's meaningless anyway. A calculator doesn't multiply numbers like a human does. The important part is to develop systems that can do many human tasks.
Early LLMs typically tried to do multiplication "in their head" by recall.
Now most LLMs do multiplication using a tool call to a programming language, akin to a person reaching for a calculator rather than relying on a learned table or working the problem out mentally.
The high-level comparison between what LLMs do and what humans do for this example is fairly parallel.
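In case it helps, here is a minimal sketch of that tool-call loop; the stub model, the `multiply` tool, and the message format are all hypothetical, standing in for whatever tools API a real provider exposes:

```python
import json

# Hypothetical tool the model can call instead of multiplying "in its head".
def multiply(a: float, b: float) -> float:
    return a * b

TOOLS = {"multiply": multiply}

def fake_model(messages):
    """Stub standing in for a real LLM: on a multiplication question it
    emits a tool call; given a tool result, it answers in prose."""
    last = messages[-1]
    if last["role"] == "tool":
        return {"role": "assistant", "content": f"The product is {last['content']}."}
    return {"role": "assistant",
            "tool_call": {"name": "multiply",
                          "arguments": json.dumps({"a": 137.0, "b": 731.0})}}

def run(messages):
    # Agent loop: let the model speak, execute any tool call it requests,
    # feed the result back, and repeat until it answers without a tool call.
    while True:
        reply = fake_model(messages)
        messages.append(reply)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]
        args = json.loads(call["arguments"])
        result = TOOLS[call["name"]](**args)
        messages.append({"role": "tool", "content": str(result)})

print(run([{"role": "user", "content": "What is 137 * 731?"}]))
# -> The product is 100147.0.
```

The division of labor is the point: the model decides *when* to reach for the tool, and the tool does the arithmetic, much like a person grabbing a calculator.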
Agreed. I think we, as humans, like to think in terms of various metaphors when it comes to how we perceive ourselves in the world (for example, "I am not some sort of automaton/robot" when objecting to some boss way back when).
Looks like he mostly publishes about "social behavior".
This "paper", IMO, is just saying "Hey, I notice this is happening. This is why it could be interesting for social science researchers" without any real research or results.
Nothing new under the sun. When clocks and precision mechanics started in the 17th century, there was a tendency to view humans as "machines". Computers came, suddenly human brains are "computers". Now we're LLMs.
If scientists make green jelly that emits thoughtful judgements, humans will be compared to green jelly.
None of these analogies are entirely wrong, they're just incomplete.
Humans are similar to machines, for example, in that our bodies convert energy to do work through a series of pumps, pipes, sensors, and actuators. Life is not animated by some magic force; it operates under the same physical laws that machines use to function.
The author lightly touches on other ways humans have viewed cognition, “computationalism” as one, but somewhat brushes these aside as though LLMs were a unique expression of this tendency. That seems unlikely to me, and we’re in pretty early days with the tech to be treating every initial hot take on “AI is Doing $Thing” as a conclusion.
Especially when this particular thing is just one in a very long line of metaphors humans have drawn to our own minds’ operations every time a new major technology comes to play a pervasive role in society. Computers, steam engines, even aqueducts were not immune to comparisons of thought flowing like water, funneled by deliberate intent, etc. And for some, a certain amount of hand-wringing or even moral panic about “what it’s doing to us”, e.g. taking away critical thinking because “OMG calculators!”
The idea that humans could "work like" LLMs (or vice versa) is very vague and can be stretched to say pretty much anything. It's a pointless question IMO. I don't think I do, but maybe I really do on the inside and my consciousness makes me think I don't! We don't know.
Actually, this happens already in a modular way AFAIK…
I don’t think this way of thinking started with LLMs. Does Systems Based Thinking also attribute too little mind to humans?