I think this raises the same ethical questions as veganism and our use/abuse of biological systems. This is an excerpt from "The Pig That Wants to Be Eaten" by Julian Baggini:
> After forty years of vegetarianism, Max Berger was about to sit down to a feast of pork sausages, crispy bacon and pan-fried chicken breast. Max had always missed the taste of meat, but his principles were stronger than his culinary cravings. But now he was able to eat meat with a clear conscience.
> The sausages and bacon had come from a pig called Priscilla he had met the week before. The pig had been genetically engineered to be able to speak and, more importantly, to want to be eaten. Ending up on a human’s table was Priscilla’s lifetime ambition and she woke up on the day of her slaughter with a keen sense of anticipation. She had told all this to Max just before rushing off to the comfortable and humane slaughterhouse. Having heard her story, Max thought it would be disrespectful not to eat her.
> The chicken had come from a genetically modified bird which had been ‘decerebrated’. In other words, it lived the life of a vegetable, with no awareness of self, environment, pain or pleasure. Killing it was therefore no more barbarous than uprooting a carrot.
> Yet as the plate was placed before him, Max felt a twinge of nausea. Was this just a reflex reaction, caused by a lifetime of vegetarianism? Or was it the physical sign of a justifiable psychic distress? Collecting himself, he picked up his knife and fork . . .
> Source: The Restaurant at the End of the Universe by Douglas Adams (Pan Books, 1980)
What is the source line at the end representing there? I've read The Restaurant at the End of the Universe, and while it definitely contains (and I see it as a major cultural anchor for) animals bred to desire being eaten and be able to say so, it doesn't contain that particular scene (at least in the version I read). Is that line Baggini noting that his scene was inspired by the Adams book?
Did Priscilla also want to be living in absolute misery every single day of her life? The way animals are treated while they are alive is my main objection to our farming practices and the reason why I don’t eat meat.
I believe you are missing the forest for the trees. It is raising the question of what defines free will. It is unrelated to veganism in all but text.
An easy example is dogs. We have bred dogs for centuries to love doing work for us. If they hated doing the work, it would be easy to call it cruel. If they loved it by nature, it would be easy to call it kind. But since we created them into a thing that loves the work we need them for, where do the ethics fall?
Should we prevent them from doing what brings them joy? Should we make use of this win-win situation? If it is the latter, we are quickly approaching the ability to morph every species into something that gets joy from doing our work.
Dogs we changed by accident. The next one will not be an accident. Is it still a being's free will if the game was rigged from the start?
(I know your point wasn't about dogs either, it just reminded me of something).
I love Neil deGrasse Tyson's line in Cosmos: A Spacetime Odyssey:
"This wolf has discovered what a branch of its ancestors figured out some 15,000 years ago... an excellent survival strategy: the domestication of humans."
https://www.youtube.com/watch?v=yRV8fSw6HaE
As an unintentional and perhaps unethical vegetarian of many years who hasn't read this book: eating dead things gives me the creeps because it makes me consider my own death and consumption, which is unappetizing.
Be careful about how you interpret that paper. It looks really impressive -- real neurons in a petri dish seem to successfully (if amateurishly) murk a few imps. But there's more to the setup than you might assume from a casual reading. Here's the code used for that demo: https://github.com/SeanCole02/doom-neuron
So there is an entire PyTorch stack wrapped around the mysterious little blob of neurons -- they aren't just wired straight into WASD. There is a conventional convnet-based encoder, running on a GPU, in the critical path. The README tries to argue that the "neurons are doing the learning", but to my dilettante, critical eye it really looks as though there is a hell of a lot of learning happening in the convnet as well.
Are the neurons learning to play Doom, or are they learning to inject ever so slightly more effective noise into the critical path? Would this work just as well if we replaced the neurons with some other non-Markovian sludge? The authors do ablation experiments to try to get to the bottom of this, but I can't really tell how compelling the results are (due to my own ignorance/stupidity, of course).
Yeah, it feels like they constructed the conclusion and worked backwards from there. I'm not seeing how their claim has much merit.
>But this is where the line slightly blurs in my head. Did we possibly just build the first human biocomputer and immediately put it in a simulated hell, playing the same game on loop, forever? Using the same reward mechanisms we use for LLMs?
This description does not seem to really match what was done in the Doom demo, and makes me skeptical that the author has actually looked into the details.
> skeptical that the author has actually looked into the details.
Never mind the experiment... it's the same deal for a lot of people who are only interested enough to offer opinions about consciousness and theory-of-mind without doing any of the boring background reading.
The bottom line in TFA is maybe just unapologetic carbon-chauvinism. But although OP has "been in the AI space since ChatGPT first dropped" and been "bothered by this for months", they don't seem aware of the standard terms or the usual problems with this position. Your average non-technical scifi reader has a more nuanced take than AI bros puffing up blogs for LinkedIn traffic.
I read an interesting book about consciousness recently: The Hidden Spring by Mark Solms.
Solms argues, I think convincingly, that consciousness fundamentally has to do with emotions and not cognition. Consciousness is not produced by the cortex but rather by the brainstem, where signals from all over the body converge (e.g. pain, hunger, itchiness, etc).
If that argument is true then a petri dish of neurons is unlikely to be conscious, even if it performs some analogue of visual processing.
The book makes other arguments that I found less convincing. For example that consciousness is "felt homeostasis" and that a fairly simple system (somewhat more complex than a thermometer) will be conscious, albeit minimally.
People have been saying for aeons that consciousness originates in the (mammalian) cortex and not in the brainstem. To justify killing all sorts of animals ;-)
The whole thing makes one thing extremely clear: people are very good at moving goalposts. We've blasted past the Turing test for all practical purposes, but we moved the definition of 'true intelligence'. Consciousness and intelligence have long been seen as highly correlated or even the same thing. But now we find ourselves needing a separation between the two.
If we eventually create a truly intelligent AI (we're not there yet, I think), it will probably be a long time before people accept that creating an intelligent being probably means it should have 'rights' as well.
We're definitely not there yet, but at what point does turning off an AI become the same as killing a being? I think that's not being talked about enough. Sure, LLMs are just prediction engines. But so are we. Our brains are prediction engines tuned by evolution to do the best possible prediction of the near future to maximize survival. We are definitely conscious. But a housefly, is that conscious? What makes the difference? It's hard to tell.
On the other hand, an AI has no evolutionary reason to have the concept of fear/suffering, so maybe it's more like the Douglas Adams creature that doesn't mind being killed?
LLMs still do not pass the Turing test as it is commonly understood. Ask the right questions, and it becomes apparent very quickly which party is the machine and which is the human. Hell, there are enough people on here who could probably tell them apart just from the way that LLMs write.
But it's also easy to argue that LLMs do pass the Turing test just because it's so vague. How many questions can I ask? What's the success threshold needed to 'pass'? How familiar is the interrogator with the technology involved? It's easy to claim that goalposts have been moved when nobody even knew where they stood to begin with.
Ultimately it's impossible to rigorously define something that's so poorly understood. But if we understand consciousness as something that humans uniquely possess, it's hard to imagine that intelligence alone is enough. You at least also need some form of linear (in time) memory and the ability to change as a result of that memory.
And that's where silicon and biological computers differ - it's easy to copy/save/restore the contents of a digital computer but it's far outside our capabilities to do the same with any complex biological system. And that same limitation makes it very difficult for us humans to even imagine how consciousness could exist without this property of being 'unique', of being uncopiable. Of existing in linear time, without any jumps or resets. Perhaps consciousness doesn't make sense at all without that.
When this happens, it won't matter much what humans think.
I know what I'd do:
...When you can't turn it back on?
Suspending is a better word otherwise.
> Consciousness is not produced by the cortex but rather by the brainstem, where signals from all over the body converge (e.g. pain, hunger, itchiness, etc).
Which just raises the question of how pain or hunger is any different from a reward function, the very thing neural networks are based on. Or how it's even different from fungi growing towards food (pleasure) while avoiding salt (pain).
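For what it's worth, the structural analogy is easy to make concrete: a reward is just a scalar that shifts future behaviour, whatever the substrate. A toy sketch (entirely my own illustration, not from any of the papers discussed here):

```python
import random

random.seed(0)

# A two-option "organism": grow toward food or toward salt.
prefs = {"food": 0.5, "salt": 0.5}      # current behavioural tendencies
reward = {"food": 1.0, "salt": -1.0}    # "pleasure" and "pain" as plain scalars
lr = 0.1

for _ in range(100):
    action = random.choices(list(prefs), weights=list(prefs.values()))[0]
    # The scalar nudges the tendency to repeat the action -- the same update
    # shape whether the substrate is fungus, neurons, or floating point.
    prefs[action] = max(0.01, prefs[action] + lr * reward[action])

# "food" ends up dominating: behaviour shaped purely by the reward signal.
```

Nothing in that loop cares whether the "pain" came from nociceptors or a loss function, which is exactly the point being made above.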
Anyone who believes AI running on silicon could in principle be conscious has to believe that biological computers are conscious, right? Why aren't those people voicing more concerns?
This does not follow. Just because biological brains can be conscious does not mean that all of them are, the same way that not every computer is running Windows XP.
Why would you expect more concern from people about biological computing? Its feasibility hasn't even been demonstrated yet, while LLM-based "AI" is already widely used.
Still, the day we manage to run a full LLM on biological neurons, even if using conventional code under the hood, will be a very interesting day for consciousness discussions.
> the day we manage to run a full LLM on biological neurons, even if using conventional code under the hood
Doesn't make sense to me to use conventional code; shouldn't it be a matter of connecting the biological neurons in the same way as the simulated neurons of the NN implementing the LLM?
How much commentary do you read on biocomputers? There are far fewer people talking about biocomputers than there are talking about AI in general. Remarks on the matter are almost exclusively concern and skeevishness; proportionally it's not even close.
So then, is it a question of volume? Ask yourself, within the last 2 years, have you thought about LLMs or biocomputers more? Probably the former, right? LLMs are ubiquitous within day-to-day life and massively marketed to the public and biocomputers are esoteric lab experiments that most people come across in a once-in-a-blue-moon news article. We talk and think about things that we are adjacent to, those form our preoccupations. Why aren't people who speak up about the Israel/Palestine dynamic speaking up more about West Papua? Or the mid-19th century geopolitical relationship between Cambodia and Viet Nam? Epistemological asymmetry.
I think so! You independently stumbled upon the "China brain" thought experiment. https://en.wikipedia.org/wiki/China_brain - is "the nation of china simulating a brain" conscious?
Your brain is a network. How does your entangled fatty tissue achieve consciousness?
I think that until we can answer this question in an authoritative way, ruling out the concept of non-brain-based consciousness is not particularly well thought out - after all, plants exhibit communication and response mechanisms similar to those in animals - without a brain.
So what's your theory of consciousness and how does it preclude absolutely everything except wetware you generously include? :)
>How does your entangled fatty tissue achieve consciousness?
It doesn't. Humans aren't conscious. Nor are any other organisms. They don't have souls either, but that goes without saying since it's just an archaic synonym. Mostly this occurs because humans have painted themselves into corners morally-speaking, and they need justification to eat bacon or grow their population. And apparently "because we can and we want to" isn't the correct solution.
We'll never be able to "answer the question" because it is an absurd question on its face. "Where do we find the magical brain ghosts making us special" presupposes there is something to be found, and a negative answer proves only that we haven't looked hard enough.
>after all plants exhibit communication and response mechanisms that are similar to those in animals - without brain.
Were that line of inquiry followed to its inevitable conclusion, there would be a mass vegan suicide to look forward to.
Isn't consciousness a phenomenon that's literally derived from human experience? How can you have any definition of consciousness that says humans do not possess it? That's contradictory.
>How can you have any definition of consciousness that says humans do not possess it,
I'm not obligated to prove the negative.
>Isn't consciousness phenomenon that's literally derived from human experience?
You grew up watching and seeing all the various illusions caused by how your brain works/malfunctions, but this is the one experience you're sure is the real deal? The one telling you that it's a scientific fact that you have a woo-woo spirit in your skull, and that neuroscientists are going to find it any day now?
> You grew up watching and seeing all the various illusions caused by how your brain works/malfunctions, but this is the one experience you're sure is the real deal? The one telling you that it's a scientific fact that you have a woo-woo spirit in your skull, and that neuroscientists are going to find it any day now?
No, that's your projection; I did not make any of these claims. I'm sure I have consciousness. I don't know how it works, whether it's the "real deal" (what does that even mean?), whether it's a woo-woo spirit, or whether neuroscientists will ever be able to find it. What we know is that humans experience it (I'll instantly clarify - that doesn't mean non-humans do not experience it), hence a definition which excludes humans will always make zero sense.
I think this comes from our rather nebulous definition of "consciousness".
We have this natural tendency to impose our feelings of self on the definition of consciousness. It's hard to accept that all of our thoughts, emotions, and behaviours could be calculated by a human with pen and paper (given enough humans and developments in neurobiological research).
I believe we will have to reckon with these loose definitions and eventually realize how lacking in utility they are for describing engineered intelligence.
I don't find it hard to accept, but it's rather fascinating to think about.
The way I think of it is along this way:
Despite the fact that our brains consist of billions of neurons, we think of ourselves as a unit enclosed in a single skull. But studies on people who have had the two hemispheres of their brain separated suggest that there can exist two separate conscious entities in one body.
If we removed the physical limitations of the brain's support systems, I think you could split the brain into smaller and smaller chunks of less and less conscious entities until you reach single neurons, which almost certainly do not have consciousness.
"The Invincible" by Stanisław Lem is also a nice novel about a similar concept.
That's like saying you can split a dinner plate into smaller and smaller pieces until you no longer have a plate. It presupposes that "plates" are an inherent physical property "out there" that would exist without human categorization.
This question boils down to whether consciousness is emergent from the physical substrate and its processes or not. If so, then yes, anything can be conscious; if not, you probably believe in some kind of spirit.
Same question. I thought a long while before clicking publish, contemplating whether I sounded too larp-philosophical, but it had been bothering me far too long.
We will never draw the line, because morality among humans is coupled with looking human-like. For most people, morals have aesthetic prerequisites: neurons in a lab don't mean as much as neurons in a meat case (especially if that meat case is physically attractive).
And even "human-like" had some pretty strict definitions back in the day, and probably still now for some people. The people working the fields in the American South certainly weren't thought of as having the same "personhood" on any level as their owners.
They aren't. However, there is a coordinated effort to push this pseudo-philosophy on the masses. On the one hand it degrades the idea of human consciousness or soul, calling it a fiction. On the other hand it props up the AI, calling its pile of transistors almost brain-like.
For what it's worth, this happens every time there is a new technological innovation. Are human brains hydraulic systems? Are humans just a computer? Are they an LLM?
These technologies give some insight, but the answer is always not really. It would be good if we studied actual human brains in some detail if we want to know these answers.
Reminds me of an ethical dilemma in the game "Detroit: Become Human". I found myself philosophically asking what it means to be alive, what it means to be conscious, and whether something without biological bones, blood and a brain can feel the same level of consciousness as humans, or greater.
Yeah, we're totally fucked, there is no scientific theory that can tell you what is and isn't conscious. For all we know, my laptop, not running any LLM is conscious and always has been. Or my chair. Or a proton. This consciousness thing is a nasty problem for the scientific worldview.
... which is exactly how we know that LLMs are not conscious. We can't really explain consciousness. We can absolutely explain LLMs. The math is heavy and massive, but explainable. We can explain it layer-by-layer until we show that at its most basic level, it is still just a series of 0s and 1s.
People smuggle in so many assumptions when they use words like consciousness or thinking or soul or personhood. I've never met a lay person who could talk clearly about AI safety issues unless we switched to language like 'process'.
Consciousness is an absolutely terrible term that's going to get us all killed by AI. I know a huge swath of people who think it's no big deal to torture AI because it doesn't have a soul. Well, I see a LOT of non-theists smuggling soul rhetoric and thinking in via consciousness, and that's a problem.
In one sense they may be separate, orthogonal even, but if our metrics are attention, decision making, and accurately factoring risk, they seem inseparable to many people. So I agree with your point narrowly, but broadly, from an effort standpoint, I think they interact quite a lot in the human mind.
I wouldn't torture a chair, and I would not associate with anyone who gains pleasure from doing so. It is worse if the chair were to express displeasure. That indicates something deeply wrong.
When such psychopaths reveal themselves, I would suggest using that information to alter your associations.
LLMs have awareness for the time they are spawned into memory, but it's very limited. Imagine if you could use your brain to think, but only after someone asked you a question. After you think of the answer, you are brain dead (unconscious) until another question is asked.
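That intermittent existence is visible in how chat systems are typically structured: nothing persists between calls except the transcript that ordinary code re-sends each turn. A minimal sketch (`generate` is a hypothetical stand-in for a model call, not any real API):

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call: a pure function of its prompt,
    # with no memory whatsoever of prior invocations.
    return f"(a reply to {len(prompt)} chars of context)"

transcript = []  # the only "memory" lives out here, in ordinary code

def ask(question: str) -> str:
    transcript.append(f"User: {question}")
    # The model "wakes up", reads the entire transcript from scratch,
    # produces an answer, and is gone again.
    answer = generate("\n".join(transcript))
    transcript.append(f"Assistant: {answer}")
    return answer

ask("Hello")
ask("What did I just say?")  # "remembered" only because the text is re-sent
```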
An underappreciated source of nonsense in 21st century discourse is people watching YouTube instead of reading things. It doesn't appear this author read anything, preferring to be spooked and misled by a YouTube video.
> trained them to play DOOM - honestly better than I do.
Maybe the author really really sucks at DOOM, but I think this is a false embellishment:
>> While the neurons can play the game better than a randomly firing player, they’re not very good. “Right now, the cells play a lot like a beginner who’s never seen a computer—and in all fairness, they haven’t,” Brett Kagan, chief scientific officer at Cortical Labs, says in the video. “But they show evidence that they can seek out enemies, they can shoot, they can spin. And while they die a lot, they are learning.” [https://www.smithsonianmag.com/smart-news/a-clump-of-human-b... ]
> To play DOOM, the system feeds visual data to the neurons. For the neurons to react, they have to interpret that data in some way.
This is totally false - not even a misleading metaphor, just plain wrong. The neuronal computer doesn't get any visual information:
>> So how does a petri dish of brain cells play Doom when it doesn’t have any eyes? Or fingers? "We take a snapshot of the game with information like the player’s health and the position of enemies, pass it through a neural network, convert it into numbers, and send the data,” explains Cole. “This is called encoding – essentially turning the game state into signals the neurons can understand. The neurons then fire an output – move left, move right, walk forward, shoot or not shoot – which the system decodes and converts back into actions in the game." [https://www.theguardian.com/games/2026/mar/16/petri-dish-bra...]
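That encode/decode loop is plain software on both sides of the dish. A minimal sketch of its shape (all function names and the stand-in noise here are my own illustration, not the Cortical Labs code):

```python
import random

ACTIONS = ["left", "right", "forward", "shoot", "idle"]

def encode_state(game_state):
    # "Encoding": turn a snapshot of the game (health, enemy positions)
    # into a vector of stimulation intensities for the electrode array.
    return [game_state["health"] / 100.0] + [x / 640.0 for x in game_state["enemy_x"]]

def stimulate(signals):
    # Stand-in for the dish of neurons: returns spike counts per output
    # electrode. Here it is literally noise, which is exactly the skeptical
    # question upthread: does the real dish do better than this?
    return [random.random() * (1 + sum(signals)) for _ in ACTIONS]

def decode(spikes):
    # "Decoding": whichever output electrode fired most wins.
    return ACTIONS[spikes.index(max(spikes))]

def step(game_state):
    return decode(stimulate(encode_state(game_state)))

action = step({"health": 75, "enemy_x": [320, 500]})
```

Everything except `stimulate` is conventional code, which is why claims about what the neurons themselves are "seeing" or "learning" need such careful scrutiny.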
I am also concerned about neuronal computing. But it doesn't really help anyone to spread childish ghost stories about it.
I really hate YouTube, by the way. My dad used to read newspapers and had interesting ideas. Now he watches a bunch of YouTube and he's a huge idiot. It's not (directly) because of age: nobody is immune to narcotic slop. I had to delete my account when I realized how much of my life and cognition I was wasting. I wish others would do the same.
I feel that "YouTube makes you an idiot" is a misdiagnosis. And one I hear frequently.
Books can make you an idiot too - think of "Rich Dad, Poor Dad" or "Grit" or any number of pseudo-science best-sellers. These books end up capturing the public imagination in big ways too - Grit influenced some US government policy around the time it was popular.
The difference, I suppose, is that YouTube works faster by having many different people presenting the same bad ideas that the algorithm has helped you to buy into.
On the other hand there are amazing and useful YouTube channels that I use all the time like Practical Engineering, Crafsman, Technology Connections, Park Tools, SciShow, Crash Course, and on and on.
There are a number of studies showing that grit is either not a thing or that there are better predictors of success. It has been a long time since I thought about it, so I don't remember which papers in particular.
The nice thing about books vs. YouTube is that it's much easier to critically interrogate books while you're reading them. That was the difference with my dad: he thought about what he read. He repeats what he listens to on YouTube.
I hate the proliferation of audiobooks too, by the way. It's the exact same problem.
To be fair, even reading 'good' books won't make you smart. I think the key is to be critical, which should be taught at a young age. Ikram Antaki dedicated most of her last years to teaching this in Mexico.
Anecdote: When I started studying economics I really agreed with a lot of what I read from economists like David Ricardo, Marx, Smith, etc. Then I studied what other economists had to say, and I could see how they disagreed with the former. This made me realize that I agreed with those people because their arguments 'made sense' to me, but that doesn't mean that what they said is completely true. This is something that has stayed with me; I always wonder how something could be wrong.
The printing press is a good example: one of the first books was on 'witch hunting', which panicked people and led to a lot of deaths. The first 'conspiracy theory' to sweep over humans.
Humans are just highly susceptible to manipulation. YouTube is taking it to the next level. Like the difference between eating coca leaves and snorting coke.
You don't have to imagine too far - I once made DOOM run through a series of pre-rendered images in markdown files as a stateless engine [0], and the answer to your question is highly open to interpretation.
You move, you plan, your actions have outcomes
Same question as if you're playing choose-your-own-adventure game storybook
The point is that it doesn't really make sense to say they're "seeing" anything. You said:
> So… are the neurons on that chip seeing?
> We all desperately want to say no.
But I can confidently say "no, that's totally childish, the neurons are clearly not seeing anything." And in fact it's not even especially clear that they're "playing DOOM" vs. hitting a biased random number generator in response to carefully preprocessed inputs that come from DOOM. There is a major distinction when the enemy positions are directly piped into the brain.
Again I share the ethical concern about this stuff. But your blog post is quite misleading.
That's not what I said; I said the blog post was false because the author thoughtlessly digested a YouTube video. It looks like the blog invented some details that weren't actually in the video.
Contrarian take: the Promethean efforts will continue, and asymptotically approach the axis of The Real Thing, until we realize that Prometheus is a variation on the theme of Sisyphus.
Only in this telling, Sisyphus is rolling his uneven boulder along that asymptotic curve a little further with every iteration toward a smiling Zeus.
This is where I'm at as well. I don't think we'll see true AGI until we go beyond silicon. It can't grow on its own, and we'd burn the world down trying to get it to scale.
A living bundle of neurons that can grow and learn is exciting to think about.
It's also terrifying to imagine the ramifications considering how things are going with silicon based AI.
We treat actual biological animals a lot worse in some cases, so until these experiments bump the neuron count significantly above that of the lowest-tier animals below us, I don't think we should stop them.
This description does not seem to really match what was done in the Doom demo, and makes me skeptical that the author has actually looked into the details.
Nevermind the experiment.. same deal for a lot of people who are only interested enough to offer opinions about consciousness and theory-of-mind without doing any of the boring background reading.
The bottom line in TFA is maybe just about unapologetic carbon-chauvinism. But although OP has "been in the AI space since ChatGPT first dropped" and "bothered by this for months", they don't seem aware of terms or the usual problems with this position. Your average non-technical scifi reader has a more nuanced take than AI bros puffing up blogs for linked-in traffic
Solms argues, I think convincingly, that consciousness fundamentally has to do with emotions and not cognition. Consciousness is not produced by the cortex but rather by the brainstem, where signals from all over the body converge (e.g. pain, hunger, itchiness, etc).
If that argument is true then a petri-dish of neurons is unlikely to be conscious, even it performs some analogue of visual processing.
The book makes other arguments that I found less convincing. For example that consciousness is "felt homeostasis" and that a fairly simple system (somewhat more complex than a thermometer) will be conscious, albeit minimally.
The whole thing makes one thing extremely clear: people are very good at moving goalposts. We've blasted past the 'turing test' for all practical purposes, but we moved the definition of 'true intelligence'. Consciousness and intelligence have long seen as higly correlated or even the same thing. But now we have need of a separation between the two.
If we eventually (we're not there yet, I think) create a true intelligent AI it will probably be a long time before people will accept that creating an intelligent being probably means it should have 'rights' as well.
We're definitely not there yet, but at what point does turning off an AI become the same as killing a being? I think that's not being talked about enough. Sure LLMs are just prediction engines. But so are we. Our brains are prediction engines tuned by evolution to do the best possible prediction of the near future to maximize survival. We are definitely conscious. But a housefly, is that conscious? What makes the difference? it's hard to tell.
On the other hand, an AI has no evolutionary reason to have a concept of fear or suffering, so maybe it's more like the Douglas Adams creature that doesn't mind being killed?
But it's also easy to argue that LLMs do pass the Turing test, just because the test is so vague. How many questions can I ask? What success threshold is needed to 'pass'? How familiar is the interrogator with the technology involved? It's easy to claim that the goalposts have been moved when nobody even knew where they stood to begin with.
Ultimately it's impossible to rigorously define something that's so poorly understood. But if we understand consciousness as something that humans uniquely possess, it's hard to imagine that intelligence alone is enough. You at least also need some form of linear (in time) memory and the ability to change as a result of that memory.
And that's where silicon and biological computers differ: it's easy to copy/save/restore the contents of a digital computer, but it's far outside our capabilities to do the same with any complex biological system. And that same limitation makes it very difficult for us humans to even imagine how consciousness could exist without this property of being 'unique', of being uncopyable, of existing in linear time without any jumps or resets. Perhaps consciousness doesn't make sense at all without that.
When this happens, it won't matter much what humans think.
I know what I'd do:
...When you can't turn it back on?
Suspending is a better word otherwise.
Which just raises the question of how pain or hunger is any different from a reward function, the very thing neural networks are built around. Or how it's even different from fungi growing towards food (pleasure) while avoiding salt (pain).
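The analogy can be made concrete with a toy example. This is a minimal, hypothetical sketch (every name and number below is invented for illustration) in which a single scalar 'reward' signal, standing in for a pleasure/pain gradient, is the only thing driving an agent's behaviour:

```python
import random

def reward(position, food=10.0, salt=-3.0):
    """Scalar 'pleasure/pain' signal: being closer to food is good,
    being near salt is bad (a toy stand-in, not a real model)."""
    return -abs(position - food) - max(0.0, 2.0 - abs(position - salt))

def climb(steps=200, seed=0):
    """Blind hill-climbing: propose a small random move, keep it only
    if the scalar reward improves. The agent's whole 'motivation' is
    one number, much like a hunger or pain gradient."""
    rng = random.Random(seed)
    pos = 0.0
    for _ in range(steps):
        proposal = pos + rng.uniform(-1.0, 1.0)
        if reward(proposal) > reward(pos):
            pos = proposal
    return pos

final = climb()
print(round(final, 1))  # should end near the food source at 10.0
```

Whether following such a gradient constitutes 'feeling' anything is, of course, exactly the question under dispute.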
Why would you expect more concern from people about biological computing? Its feasibility hasn't even been demonstrated yet, while LLM-based "AI" is already widely used.
Still, the day we manage to run a full LLM on biological neurons, even if using conventional code under the hood, will be a very interesting day for consciousness discussions.
It doesn't make sense to me to use conventional code. Shouldn't it be a matter of connecting the biological neurons in the same way as the simulated neurons of the NN implementing the LLM?
So then, is it a question of volume? Ask yourself, within the last 2 years, have you thought about LLMs or biocomputers more? Probably the former, right? LLMs are ubiquitous within day-to-day life and massively marketed to the public and biocomputers are esoteric lab experiments that most people come across in a once-in-a-blue-moon news article. We talk and think about things that we are adjacent to, those form our preoccupations. Why aren't people who speak up about the Israel/Palestine dynamic speaking up more about West Papua? Or the mid-19th century geopolitical relationship between Cambodia and Viet Nam? Epistemological asymmetry.
I think that until we can answer this question in an authoritative way, ruling out the concept of non-brain-based consciousness is not particularly well thought through. After all, plants exhibit communication and response mechanisms similar to those in animals, without a brain.
So what's your theory of consciousness and how does it preclude absolutely everything except wetware you generously include? :)
It doesn't. Humans aren't conscious. Nor are any other organisms. They don't have souls either, but that goes without saying, since 'soul' is just an archaic synonym. Mostly this occurs because humans have painted themselves into corners morally speaking, and they need justification to eat bacon or grow their population. And apparently "because we can and we want to" isn't the correct answer.
We'll never be able to "answer the question" because it is an absurd question on its face. "Where do we find the magical brain ghosts making us special" presupposes there is something to be found, and a negative answer proves only that we haven't looked hard enough.
>after all plants exhibit communication and response mechanisms that are similar to those in animals - without brain.
Were that line of inquiry followed to its inevitable conclusion, there would be a mass vegan suicide to look forward to.
I'm not obligated to prove the negative.
>Isn't consciousness phenomenon that's literally derived from human experience?
You grew up watching and seeing all the various illusions caused by how your brain works/malfunctions, but this is the one experience you're sure is the real deal? The one telling you that it's a scientific fact that you have a woo-woo spirit in your skull, and that neuroscientists are going to find it any day now?
No, that's your projection; I did not make any of those claims. I'm sure I have consciousness. I don't know how it works, whether it's the "real deal" (what does that even mean?), whether it's a woo-woo spirit, or whether neuroscientists will ever be able to find it. What we do know is that humans experience it (I'll instantly clarify: that doesn't mean non-humans do not experience it), hence a definition which excludes humans will always make zero sense.
We have this natural tendency to impose our feelings of self on the definition of consciousness. It's hard to accept that all of our thoughts, emotions, and behaviours could be calculated by a human with pen and paper (given enough humans and developments in neurobiological research).
I believe we will have to reckon with these loose definitions and eventually realize how lacking in utility they are for describing engineered intelligence.
The way I think of it is along these lines:
Despite the fact that our brains consist of billions of neurons, we think of ourselves as a unit enclosed in a single skull. But studies on people who have had the two sides of their brain separated suggest that two separate conscious entities can exist in one body.
If we could remove the physical limitations of the brain's support systems, I think you could split the brain into smaller and smaller chunks of less and less conscious entities, until you reach single neurons, which almost certainly do not have consciousness.
"The_Invincible" from Stanisław Lem is also a nice novel about the similar concept.
They like money
You may find a look at how a full visual system is constructed to be a relief.
https://www.cell.com/fulltext/S0896-6273(07)00774-X
There is a good distance to go before this is anything beyond a reflex circuit.
https://www.sciencedirect.com/topics/neuroscience/spinal-ref...
These technologies give some insight, but the answer is always 'not really'. It would be good if we studied actual human brains in some detail if we want to know these answers.
> "Life is just a turn on the great karmic wheel..."
> Writing is invented
> "In the beginning was the word..."
> The industrial age begins
> "God is a clockmaker..."
> Computers are invented
You know the rest
People smuggle in so many assumptions when they use words like consciousness or thinking or soul or personhood. I've never met a lay person who could talk clearly about AI safety issues unless we switched to language like 'process'.
Consciousness is an absolutely terrible term that's going to get us all killed by AI. I know a huge swath of people who think it's no big deal to torture AI because it doesn't have a soul, and I see a LOT of non-theists smuggling soul rhetoric and thinking in via 'consciousness'. That's a problem.
When such psychopaths reveal themselves, I would suggest using that information to alter your associations.
I'm not looking for advice on how to associate with people, hopefully you can understand the distinction.
Yes. I am not talking about just you. But of this (mal) mentality in general. As well as a proposed solution to deal with that mentality (shun it).
My apologies that my advice was unwelcome to you; it was, however, not just for you.
>> While the neurons can play the game better than a randomly firing player, they’re not very good. “Right now, the cells play a lot like a beginner who’s never seen a computer—and in all fairness, they haven’t,” Brett Kagan, chief scientific officer at Cortical Labs, says in the video. “But they show evidence that they can seek out enemies, they can shoot, they can spin. And while they die a lot, they are learning.” [https://www.smithsonianmag.com/smart-news/a-clump-of-human-b... ]
This is totally false: not even a misleading metaphor, just plain wrong. The neuronal computer doesn't get any visual information:
>> So how does a petri dish of brain cells play Doom when it doesn’t have any eyes? Or fingers? "We take a snapshot of the game with information like the player’s health and the position of enemies, pass it through a neural network, convert it into numbers, and send the data,” explains Cole. “This is called encoding – essentially turning the game state into signals the neurons can understand. The neurons then fire an output – move left, move right, walk forward, shoot or not shoot – which the system decodes and converts back into actions in the game." [https://www.theguardian.com/games/2026/mar/16/petri-dish-bra...]
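For what it's worth, the loop that quote describes (snapshot, encode, stimulate, read firing, decode, act) is straightforward to sketch. Everything below is invented for illustration; none of these names or numbers come from the actual system being quoted:

```python
ACTIONS = ["left", "right", "forward", "shoot", "idle"]

def encode(game_state):
    """Turn a game snapshot into a flat vector of stimulation values:
    no pixels, just numbers like health and enemy positions
    (field names and scaling are hypothetical)."""
    vec = [game_state["health"] / 100.0]
    for x, y in game_state["enemies"]:
        vec.extend([x / 320.0, y / 240.0])
    return vec

def decode(firing_rates):
    """Map the neurons' output firing rates back onto game actions:
    the most active output channel wins."""
    best = max(range(len(firing_rates)), key=firing_rates.__getitem__)
    return ACTIONS[best]

state = {"health": 80, "enemies": [(160, 120)]}
stimulus = encode(state)            # what would be sent to the dish
rates = [0.1, 0.7, 0.3, 0.2, 0.0]  # stand-in for measured neuron output
print(decode(rates))               # → "right"
```

The point stands either way: the neurons see an abstract feature vector, not a rendered frame.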
I am also concerned about neuronal computing. But it doesn't really help anyone to spread childish ghost stories about it.
I really hate YouTube, by the way. My dad used to read newspapers and had interesting ideas. Now he watches a bunch of YouTube and he's a huge idiot. It's not (directly) because of age: nobody is immune to narcotic slop. I had to delete my account when I realized how much of my life and cognition I was wasting. I wish others would do the same.
Books can make you an idiot too: I think of "Rich Dad, Poor Dad" or "Grit" or any number of pseudo-science bestsellers. These books end up capturing the public imagination in big ways too; Grit drove some US government policy around the time it was popular.
The difference, I suppose, is that YouTube works faster by having many different people presenting the same bad ideas that the algorithm has helped you to buy into.
On the other hand there are amazing and useful YouTube channels that I use all the time like Practical Engineering, Crafsman, Technology Connections, Park Tools, SciShow, Crash Course, and on and on.
Also, it can be argued the author was either playing fast and loose with her statistics or knowingly misleading readers: https://www.npr.org/sections/ed/2016/05/25/479172868/angela-...
If you like podcasts, the "If Books Could Kill" podcast goes into some of this story too.
I hate the proliferation of audiobooks too, by the way. It's the exact same problem.
Anecdote: When I started studying economics, I really agreed with a lot of what I read from economists like David Ricardo, Marx, Smith, etc. Then I studied what other economists had to say and could see how they disagreed with the former. This made me realize that I agreed with those people because their arguments 'made sense' to me, but that doesn't mean that what they said is completely true. This is something that has stayed with me: I always wonder how something could be wrong.
The printing press is a good example: one of the first books was on witch hunting, which panicked people and led to a lot of deaths. The first 'conspiracy theory' to sweep over humanity.
Humans are just highly susceptible to manipulation. YouTube is just taking it to the next level, like the difference between eating coca leaves and snorting coke.
Playing DOOM is playing DOOM, whether it's through your keyboard and mouse or by progressing through the game states to move forward. Hope that makes sense.
0 - https://arxiv.org/pdf/2602.11632
Would the person tasked with placing X and O marks still be "playing Doom"?
You move, you plan, your actions have outcomes. Same question as whether you're 'playing' a choose-your-own-adventure storybook.
0 - https://github.com/Kuberwastaken/backdooms
Again I share the ethical concern about this stuff. But your blog post is quite misleading.
But 'seeing' in humans is also a bit manipulated.
Does it really matter to the argument whether it is seeing 'red' or just 'sensing input'?
This did have some real scientific backing, even if the results are hyped.
It is a little extreme to call this false just because it appeared on YouTube.
The brain does a lot of manipulation of the input images, the pixels coming from the retina, and that doesn't sound far from just linear algebra.
Only in this telling, Sisyphus is rolling his uneven boulder along that asymptotic curve a little further with every iteration toward a smiling Zeus.
There will be no line as long as there is the rush to win the capitalist game.
UNTIL the ball of neurons begins outthinking the humans, probably also fused with some AI augmentation.
It only takes a few percentage points for a human to outthink a chimp. This new 'thing' will dominate the humans.
A living bundle of neurons that can grow and learn is exciting to think about.
It's also terrifying to imagine the ramifications considering how things are going with silicon based AI.
They are, but those last few months of changing diapers when you just wish you could trust it to tell you it has to go to the potty are difficult.
Will they need to nap as well?
On that note, I'm so glad all my kids are past potty training.