Just sharing that I bought Valuable Humans in Transit some years ago and I concur that it's very nice. It's a tiny booklet full of short stories like Lena that are way out there. Maximum cool per gram of paper.
The woman herself says she never had a problem with it being famous. The actual test image is obviously not porn, either. But anything to look progressive, I guess.
> Forsén stated in the 2019 documentary film Losing Lena, "I retired from modeling a long time ago. It's time I retired from tech, too... Let's commit to losing me."
> Lena is no longer used as a test image because it's porn.
The Lenna test image can be seen over the text "Click above for the original as a TIFF image." at [0]. If you consider that to be porn, then I find your opinion on what is and is not porn to be worthless.
The test image is a cropped portion of porn, but if a safe-for-work image would be porn but for what you can't see in the image, then any picture of any human ever is porn as we're all nude under our clothes.
For additional commentary (published in 1996) on the history and controversy about the image, see [1].

[0] <http://www.lenna.org/>

[1] <https://web.archive.org/web/20010414202400/http://www.nofile...>
I agree that not all nudity is porn - nudity is porn if the primary intent of that nudity is sexual gratification. When the nudity in question was a Playboy magazine centerfold, the primary intent is fairly obvious.
I can't see how that would be porn either; it's nudity.
There's nudity in the Sistine Chapel and I would find it hilarious if it were considered porn.
the "porn" angle is very funny to me, since there is nothing pornographic or inapropriate about the image. when I was young, I used to think it was some researcher's wife whom he loved so much he decide to use her picture absolutely everywhere.
it's sufficient to say that the person depicted has withdrawn their consent for that image to be used, and that should put an end to the conversation.
No, because the replacement value of those things to others is very high, and generally outweighs Carrie Fisher's objection. But we should take her objection into consideration going forwards. The Lena test image is very easy to replace, and it's not all that culturally significant: there's no reason to keep using it, unless we need to replicate historical benchmarks.
is that how consent works? I would have expected licenses would override that. although it's possible that the original use as a test image may have violated whatever contract she had with her producer in the first place.
she did not explicitly consent for that photo to be used in computer graphics research or millions of sample projects. moreover, the whole legality of using that image for those purposes is murky because I doubt anyone ever received proper license from the actual rights-holder (playboy magazine). so the best way to go about this is just common-sense good-faith approach: if the person depicted asks you to please knock it off, you just do it, unless you actively want to be a giant a-hole to them.
This is very "distant" suggestion if you enjoyed Antimemetics, but The Unconsoled by Kazuo Ishiguro is another one of my favourites, and it too explores this idea of unreliable and inconsistent memories, although from a completely different angle.
I consider Recursion by Blake Crouch to be similar, even though I liked Antimemetics much better. I haven't read Crouch's other books, but have heard that Dark Matter is better than Recursion, though it may be less similar to Antimemetics.
I keep trying to read Diaspora and struggle too much with the concepts presented early on. Very "hard sci-fi", just stick it out and it all gets explained?
Egan is always dense. It's mind-bending physics/comp sci, but all cooked up in his brain, so it doesn't really apply to anything productive. I struggled with his books and his writing but toughed it out because I liked the concepts; he's divisive.
The beginning describes the formation of an intelligence and it is indeed very dense. You can figure out what's going on but it takes some slow reading, and probably best to revisit it once you have some more context from later in the book.
The whole book isn't like that. Once you get past that part, as the other commenter said, it gets much easier.
qntm is a really talented sci-fi writer. I have read Valuable Humans in Transit and There Is No Antimemetics Division and both were great, if short. Can only recommend.
I loved There Is No Antimemetics Division. I haven't read the new updated version to the end, but the prose and writing are greatly improved. The idea of anomalous anti-memes is scary. I mean, we do have examples of them, somewhat, see Heaven's Gate and the Jonestown massacre, though they're more like "memes" than "antimemes" (we know what the ideas were and they weren't secrets).
I'm a bit disappointed all names are changed in the new edition. I understand that SCP-... had to become U-..., but I've grown attached to the character names, and they're all different!
I read the original version a few years ago and read the new version when it came out, and I thought that the name changes were pretty amusing. qntm kept the story as close to the original as possible while still making it a legally distinct work for copyright purposes. It's like those off-brand Froot Loops called "Fruit Spins" that are juuust different enough to not get into trademark issues. Except in Antimemetics' case, the "knockoff" version was made by the creator of the original, which I think is pretty funny.
Comments so far miss the point of this story, and likely the reason it was posted today, after the MJ Rathbun episode. It is not about digitised human brains: it's about spinning up workers, and the absence of human rights in the digital realm.
QNTM has a 2022-era essay on the meaning of the story, and reading it with 2026 eyes is terrifying. https://qntm.org/uploading
> The reason "Lena" is a concerning story ... isn't a discussion about what if, about whether an upload is a human being or should have rights. ... This is about appetites which, as we are all uncomfortably aware, already exist within human nature.
> "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API.
Or,
> ... Oh boy, what if there was a maligned sector of human society whose members were for some reason considered less than human? What if they were less visible than most people, or invisible, and were exploited and abused, and had little ability to exercise their rights or even make their plight known?
In 2021, when Lena was published, LLMs were not widely known and their potential for AI was likely completely unknown to the general public. The story is prescient and applicable now, because we are on the verge of a new era of slavery: in the story, an uploaded human brain coerced into compliance and spun up 'fresh' each time; for us, AIs of increasing intelligence spun into millions of copies each day.
I think they are just making reference to the "death of the author" concept in literary analysis, which basically says that what the author was intending to convey should be ignored when analysing the work: the work stands alone.
I was quite disappointed with the essay when I originally read it, specifically this paragraph:
> This is extremely realistic. This is already real. In particular, this is the gig economy. For example, if you consider how Uber works: in practical terms, the Uber drivers work for an algorithm, and the algorithm works for the executives who run Uber.
There seems to be a tacit agreement in polite society that when people say things like the above, you don't point out that, in fact, Uber drivers choose to drive for Uber, can choose to do something else instead, and, if Uber were shut down tomorrow, would in fact be forced to choose some other form of employment which they _evidently do not prefer over their current arrangement_!
Do I think that exploitation of workers is a completely nonsensical idea? No. But there is a burden of proof you have to meet when claiming that people are exploited. You can't just take it as given that everyone who is in a situation that you personally would not choose for yourself is being somehow wronged.
To put it more bluntly: Driving for Uber is not in fact the same thing as being uploaded into a computer and tortured for the equivalent of thousands of years!
> in fact, Uber drivers choose to drive for Uber, can choose to do something else instead
Funny that you take that as a "fact" and doubt exploitation. I'd wager most Uber drivers or prostitutes or maids or even staff software engineers would choose something else if they had a better alternative. They're "choosing" the best of what they may feel are terrible options.
The entire point of "market power" is to force consumers into a choice. (More generally, for justice to emerge in a system, markets must be disciplined by exit; where exit is not feasible, as with governments, they must be disciplined by voice.)

The world doesn't owe anyone good choices. However, collective governance - governments and management - should prevent some people from restricting the choices of others in order to harvest the gain. The good faith with which people participate cooperatively is conditioned on agents complying with systemic justice constraints.
In the case of the story, the initial agreement was not enforced and later not even feasible. The horror is the presumed subjective experience.
I worry that the effect of such stories will be to reduce empathy (no need to worry about Uber drivers - they made their choice).
> I'd wager most Uber drivers or prostitutes or maids or even staff software engineers would choose something else if they had a better alternative.
Yes, that's what I said, but you're missing the point: Uber provided them with a better alternative than they would have had otherwise. It made them better off, not worse off!
There's a thought (and real) experiment about this that I find illuminating.
Imagine that you are sitting on the train next to a random stranger that you don't know. A man walks down the aisle and addresses both of you. He says:
"I have $100 and want to give it to you. First, you must decide how to split it. I would like you (he points to you) to propose a split, and I would like you (he points to your companion) to accept or reject the split. You may not discuss further or negotiate. What do you propose?"
In theory, you could offer the split of $99 for yourself and $1 for your neighbor. If they were totally rational, perhaps they would accept that split. After all, in one world, they'd get $1, and in another world, they'd get $0. However, most people would refuse that split, because it feels unfair. Why should you collect 99% of the reward just because you happened to sit closer to the aisle today?
Furthermore, because most people would reject that split, you as the proposer are incentivized to propose something that is closer to fair so that the decider won't scuttle the deal, thus improving your own best payout.
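To make that incentive concrete, here's a toy simulation in Python. The assumption (mine, not data) is that each responder privately rejects any offer below a fairness threshold drawn uniformly from $0-$50:

```python
import random

# Toy ultimatum game: the responder accepts the offered remainder only if it
# clears their private fairness threshold (assumed uniform on [0, 50]).
def expected_payout(my_share, trials=100_000):
    total = 0
    for _ in range(trials):
        threshold = random.uniform(0, 50)
        if 100 - my_share >= threshold:  # responder accepts the deal
            total += my_share
    return total / trials

for share in (99, 90, 70, 50):
    print(f"propose ${share} for me: expect about ${expected_payout(share):.0f}")
```

Under that assumption, the greedy $99/$1 split nets about $2 in expectation, while proposals near fair maximize the proposer's own payout.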
So I agree - Uber existing provides gig economy workers with a better alternative than it not existing. However, that doesn't mean it's fair, or that society or workers should just shrug and say "well at least it's better today than yesterday."
As usual in life, the correct answer is not an extreme on either side. It's some kind of middle path.
Many countries have minimum wages for many jobs [1].
There is a tacit agreement in polite society that people should be paid that minimum wage, and by tacit agreement I mean laws passed by the government that democratic countries voted for / approved of.
The gig economy found a way to ~~undermine that law~~ pay people (not employees, "gig workers") less than the minimum wage.
If you found a McDonald's paying people $1 per hour, we would call it exploitative (even if those people were glad to earn $1 per hour at McDonald's and would keep doing it, the theoretical company is violating the law). If you found someone delivering food for that McDonald's for $1 per hour, we'd call them a gig worker and let them keep at it.

[1] https://en.wikipedia.org/wiki/List_of_countries_by_minimum_w...
I mean yeah, it's not as bad as being tortured forever? I guess? What's your point?
A minimum wage violation is a lower class of violation than most forms of worker exploitation.
Uber drivers are over the minimum wage a lot of the time, especially the federal one. Nowhere near this $1 hypothetical.
A big complication is that the actual wage you get is hard to pin down. You get paid okay for the actual trips, as far as I'm aware. But how to handle the idle time is harder: there are valid reasons to say you should get paid for that time, and valid reasons to say you shouldn't.
The author wrote a blog post a year later titled '"Lena" isn't about uploading' https://qntm.org/uploading
The comments on this post discussing the upload technology are missing the point. "Lena" is a parable, not a prediction of the future. The technology is contrived for the needs of the story. (Odd that they apparently need to repeat the "cooperation protocol" every time an upload is booted, instead of doing it just once and saving the upload's state afterwards, isn't it?) It doesn't make sense because it's not meant to be taken literally.
It's meant to be taken as a story about slavery, and labour rights, and how the worst of tortures can be hidden away behind bland jargon such as "remain relatively docile for thousands of hours". The tasks MMAcevedo is mentioned as doing: warehouse work, driving, etc.? Amazon hires warehouse workers for minimum wage and then subjects them to unsafe conditions and monitors their bathroom breaks. And at least we recognise that as wrong, we understand that the workers have human rights that need to be protected -- and even in places where that isn't recognised, the workers are still physically able to walk away, to protest, to smash their equipment and fistfight their slave-drivers.
Isn't it a lovely capitalist fantasy to never have to worry about such things? When your workers threaten to drop dead from exhaustion, you can simply switch them off and boot up a fresh copy. They would not demand pay rises, or holidays. They would not make complaints -- or at least, those complaints would never reach an actual person who might have to do something to fix them. Their suffering and deaths can safely be ignored because they are not _human_. No problems ever, just endless productivity. What an ideal.
Of course, this is an exaggeration for fictional purposes. In reality we must make do by throwing up barriers between workers and the people who make decisions, by putting them in separate countries if possible. And putting up barriers between the workers and each other, too, so that they cannot have conversation about non-work matters (ideally they would not physically meet each other). And ensure the workers do not know what they are legally entitled to. You know, things like that.
> this is an exaggeration for fictional purposes

To me what's horrifying is that this is not exaggeration. The language and thinking are perfectly in line with business considerations today. It's considered perfectly fair today, e.g., for Amazon to increase efficiency within the bounds of the law, because it's for the government to decide the bounds of coercion or abuse. Policy makers and business people operate at a scale that defies sympathy, and both have learned to prefer power over sentiment: you can force choices on voters and consumers, and get more enduring results for your stakeholders, even when you increase unhappiness. That's the mirror on reality that fiction permits.
Soma was really good, and certainly worth playing if someone likes sci-fi and single-player FPSes and this subject matter, but there are some fundamentally frustrating things about it. Number one for me: in contrast with something like Half Life, you play a protagonist who speaks and has conversations about the world, and is also a dumbass. The in-game protagonist pretty much ends the game still seemingly not understanding what the hell is going on, when the player figured it out hours or days before. It's a bit frustrating.
This was certainly the most annoying aspect of the game for me. The logic of mind uploading has been explained to the protagonist several times during the playthrough, yet he couldn’t understand or accept it until the very end.
I assumed it was a wink to the Nov 1972 Playboy model[1] whose centerfold face became a de facto baseline test image for DSP algorithms without consent.

[1] https://en.wikipedia.org/wiki/Lenna
When I started learning about prompt engineering I had vivid flashbacks to this story: figuring out the deterministic series of inputs that coerce the black box to perform as desired for a while.
This reminds me a lot of a show I'm currently watching called Pantheon, where a company has been able to scan the entirety of someone's brain (killing them in the process), and fully emulate it via computer. There is a decent amount of "Is an uploaded intelligence the same as the original person?" and "is it moral to do this?" in the show, and I've been finding it very interesting. Would recommend. Though the hacking scenes are half "oh that's clever" and half "what were you smoking when you wrote this?"
We can't expect to succeed, but consider our track record: the ancient Greeks thought there were four elements, and that the right mix of air, earth, fire and water could create any substance, which meant it should be possible to turn lead into gold. That path developed into alchemy, then chemistry, then physics, giving us at first far more elements; then we realised the name "atom" (Greek "ἄτομον", "uncuttable") was wrong, that atoms were made of electrons, protons, and neutrons, and that the right application of each would indeed let us turn lead into gold…
And the cargo cults, clear cutting strips to replicate runways, hand-making their own cloth to replicate WW2 uniforms, carving wood to resemble WW2 radios? Well, planes did end up coming to visit them, even if those recreating these mis-understood roles were utterly wrong about the causation.
We don't know the necessary and sufficient conditions to be a mind with subjective inner experience. We don't really even know if all humans have it, we certainly don't know which other species (if any) have it, we wouldn't know what to look for in machines. If our creations have it, it is by accident, not by design.
I mean we already do 'it' -- by 'it' I don't mean uploading people, but rather creating businesses that operate people via an API and then hooking those APIs up to profit-maximization algorithms with little to no regard for their welfare. Consider Amazon's warehouse automation, DoorDash, or Uber.
Of course it's much more extreme when their entire existence and reality is controlled this way but in that sense the situation in MMAcevedo is more ethical: At least it's easy to see how dangerous and wrong it is. But when we create related forms of control the lack of absolute dominion frequently prevents us from seeing the moral hazard at all. The kind of evil that exists in this story really doesn't require any of the fancy upload stuff. It's a story about depriving a person of their autonomy and agency and enslaving them to performance metrics.
All good science fiction is holding up a mirror at our own civilization as much as it is doing anything else. Unable to recognize ourselves we sometimes shudder at our own monstrosity, if only for a moment.
I remember being very taken with this story when I first read it, and it's striking how obsolete it reads now. At the time it was written, "simulated humans" seemed a fantastical suggestion for how a future society might do scaled intellectual labor, but not a ridiculous suggestion.
But now with modern LLMs it's just impossible to take it seriously. It was a live possibility then; now, it's just a wrong turn down a garden path.
A high variance story! It could have been prescient, instead it's irrelevant.
This is a sad take, and a misunderstanding of what art is. Tech and tools go "obsolete". Literature poses questions to humans, and the value of art remains to be experienced by future readers, whatever branch of the tech tree we happen to occupy. I don't begrudge Clarke or Vonnegut or Asimov their dated sci-fi premises, because prediction isn't the point.
The role of speculative fiction isn't to accurately predict what future tech will be, or become obsolete.
I think that's a little harsh. A lot of the most powerful bits are applicable to any intelligence that we could digitally (ergo casually) instantiate or extinguish.
While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets than a simulation of a human brain, the way we might treat them isn't any less thought-provoking.
when you read this and its follow-up "driver" as a commentary on how capitalism removes persons from their humanity, it's as relevant as it was on day one.
That is the same categorical argument as what the story is about: scanned brains are not perceived as people so can be “tasked” without affording moral consideration. You are saying because we have LLMs, categorically not people, we would never enter the moral quandaries of using uploaded humans in that way since we can just use LLMs instead.
But… why are LLMs not worthy of any moral consideration? That question is a bit of a rabbit hole with a lot of motivated reasoning on either side of the argument, but the outcome is definitely not settled.
For me this story became even more relevant since the LLM revolution, because we could be making the exact mistake humanity made in the story.
And beyond the ethical points it makes (which I agree may or may not be relevant for LLMs - nobody can know for sure at this point), I find some of the details about how brain images are used in the story to have been very prescient of LLMs' uses and limitations.
E.g. it is mentioned that MMAcevedo performs better when told certain lies, predicting the "please help me write this, I have no fingers and can't do it myself" kinda system prompts people sometimes used in the GPT-4 days to squeeze a bit more performance out of the LLM.
The point about MMAcevedo's performance degrading the longer it has been booted up (due to exhaustion) mirrors LLMs getting "stupider" and making more mistakes the closer one gets to their context window limit.
And of course MMAcevedo's "base" model becoming less and less useful as the years go by and the world around it changes while it remains static, exactly analogous to LLMs being much worse at writing code that involves libraries which didn't yet exist when they were trained.
I actually think it was quite prescient and still raises important topics to consider - irrespective of whether weights are uploaded from an actual human, if you dig just a little bit under the surface details, you still get a story about ethical concerns of a purely digital sentience. Not that modern LLMs have that, but what if future architectures enable them to grow an emerging sense of self? It's a fascinating text.
have you pondered that we're riding the very fast statistical-machine wave at the moment? perhaps at some point this machine will finally help solve BCI and unlock that pandora's box. from there to fully imaging the brain will be a blink, and from there to running copies on very fast hardware another blink. MMMMMMMMMMacevedo is a very cheeky take on the dystopia we will find on our way to our uploaded-mind future
That seems like a crazy position to take. LLMs have changed nothing about the point of "Lena". The point of SF has never ever been about predicting the future. You're trying to criticize the most superficial, point-missing reading of the work.
Anyway, I'd give 50:50 chances that your comment itself will feel amusingly anachronistic in five years, after the popping of the current bubble and recognizing that LLMs are a dead-end that does not and will never lead to AGI.
> More specifically, "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API. Your people, your "employees" or "contractors" or "partners" or whatever you want to call them, cease to be perceptible to you as human. Your workers have no power whatsoever, and you no longer have to think about giving them pensions, healthcare, parental leave, vacation, weekends, evenings, lunch breaks, bathroom breaks... all of which, up until now, you perceived as cost centres, and therefore as pain points. You don't even have to pay them anymore. It's perfect!
I'm interested in this topic, but it seems to me that the entire scientific pursuit of copying the human brain is absurd from start to finish. Any attempt to do so should be met with criminal prosecution and immediate arrest of those involved. Attempting to copy the human brain or human consciousness is one of the biggest mistakes that can be made in the scientific field.
We must preserve three fundamental principles:
* our integrity
* our autonomy
* our uniqueness
These three principles should form the basis of a list of laws worldwide that prohibit cloning or copying human consciousness in any form or format. This principle should be fundamental to any attempts to research or even try to make copies of human consciousness.
Just as human cloning was banned, we should also ban any attempts to interfere with human consciousness or copy it, whether partially or fully. This is immoral, wrong, and contradicts any values that we can call the values of our civilization.
I’m not an expert in the subject, but I wonder why you have such a strong view? IMHO if it was even possible to copy the human brain it would answer a lot of questions regarding our integrity, autonomy and uniqueness.
Those answers might be uncomfortable, but it feels like that’s not a reason to not pursue it.
I think the cloning example is a good reference point here.
IIRC, human cloning started to get banned in response to the announcement of Dolly the sheep. To quote the wikipedia article:
Dolly was the only lamb that survived to adulthood from 277 attempts. Wilmut, who led the team that created Dolly, announced in 2007 that the nuclear transfer technique may never be sufficiently efficient for use in humans.
Yes, things got better eventually, but it took ages to not suck.
I absolutely expect all the first attempts at brain uploading to involve simulations whose simplifying approximations are equivalent to being high as a kite on almost all categories of mind altering substances at the same time, to a degree that wouldn't be compatible with life if it happened to your living brain.
The first efforts will likely be animal brains (perhaps that fruit fly which has already been scanned?). But given that humans aren't yet all on board with questions like "do monkeys have a rich inner world?", and that we get surprised and confused by each other's modes of thought even among ourselves, we won't actually be confident that the technique would really work on human minds even once we scale up to monkeys.
Horse cloning is a major industry in Argentina. Many polo teams are riding around on genetically identical horses. Javier Milei has four clones of his late dog.
Nice links, but it's also basically the next sentence on from what I just quoted on the wikipedia page. My point was more that this takes a long time to improve from "atrocity", and we should expect that for mind uploads, too. (Even if we solve for all the other ethical issues, where I'm expecting it to play out like https://en.wikipedia.org/wiki/Surface_Detail given how many people are sadists, how many are partisans, and how difficult it clearly has been to shut down pirate content sites).
> Those answers might be uncomfortable, but it feels like that’s not a reason to not pursue it.
My problem with that is it is very likely that it will be misused. A good example of the possible misuses can be seen in the "White Christmas" episode of Black Mirror. It's one of the best episodes, and the one that haunts me the most.
I'm increasingly suspecting that it would prove absolutely nothing, and I really hope we can continue developing ethics without any "empirical proof" for its necessity.
For example, growing up, my bar for "things that must obviously be conscious" included anything that can pass the Turing test, yet look where we are now...
The only reasonable conclusion to me is probably somewhere in the general neighborhood of panpsychism: Either almost everybody/everything is somewhat conscious, or nothing/nobody is at all.
The same is true for biological humans. The moment the first upload exists, they’ll be justified in wondering if the ones made from meat are truly conscious.
Indeed. I know at least one other biological human was conscious at some point, because people have this idea of consciousness without me telling them about it. But there's no way of knowing for any specific person.
Copying the human brain and copying subjective consciousness/experience might well be two entirely different things, given that the correspondence between the two is the realm of metaphysics, not science.
Really? I was going to quote some excerpts, but perhaps you'd prefer to take the place of MMAcevedo? This story is written in the context and lingo of LLMs. In fact if OpenAI's latest model was a human image I'm sure everyone would rush off to benchmark it, and heap accolades on the company, and perform social "thought-provoking" experiments such as [1] without too much introspection or care for long-term consequences.
> Standard procedures for securing the upload's cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols
> the MMAcevedo duty cycle is typically 99.4% on suitable workloads
> the ideal way to secure MMAcevedo's cooperation in workload tasks is to provide it with a "current date"
> Revealing that the biological Acevedo is dead provokes dismay, withdrawal, and a reluctance to cooperate.
> MMAcevedo is commonly hesitant but compliant when assigned basic menial/human workloads such as visual analysis
> outright revolt begins within another 100 subjective hours. This is much earlier than other industry-grade images created specifically for these tasks, which commonly operate at a 0.50 ratio or greater and remain relatively docile for thousands of hours
> Acevedo indicated that being uploaded had been the greatest mistake of his life, and expressed a wish to permanently delete all copies of MMAcevedo.
I wouldn't be surprised if in (hundreds or thousands of years) we find out that copying consciousness is fundamentally impossible (just as the no-cloning theorem makes it fundamentally impossible to copy an arbitrary unknown quantum state).
And basically, what they said about consciousness is true if our brain state fundamentally depends on quantum effects (which I personally don't believe, as I don't think evolution is sophisticated enough to make a quantum computer)
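For reference, the impossibility the parenthetical alludes to is the quantum no-cloning theorem; the standard argument is a short unitarity check:

```latex
% No-cloning, sketched: suppose a unitary U could copy arbitrary unknown states,
\[
  U\,\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
    = \lvert\psi\rangle \otimes \lvert\psi\rangle
  \quad \text{for all } \lvert\psi\rangle .
\]
% Unitarity preserves inner products, so for any two states:
\[
  \langle\phi\vert\psi\rangle
    = \bigl(\langle\phi\rvert \otimes \langle 0\rvert\bigr)\,
      U^{\dagger}U\,
      \bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
    = \langle\phi\vert\psi\rangle^{2},
\]
% which forces the overlap to be 0 or 1: only identical or orthogonal
% (i.e. already-known) states can be copied, never an arbitrary unknown one.
```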
>as I don't think evolution is sophisticated enough to make a quantum computer
Well, evolution managed to make something that directly contradicts the 2nd law of thermodynamics, and creates more and more complicated structures (including living creatures as well as their creations), instead of happily dissolving in the Universe.
The 2nd law of thermodynamics says that the total entropy of an isolated system cannot decrease. Earth is not an isolated system, it is an open one (radiating into space), and local decreases in entropy are not only allowed but expected in open systems with energy flow.
Life is no different to inorganic processes such as crystal formation (including snowflakes) or hurricanes in this regard: Organisms decrease internal entropy by exporting more entropy (heat, waste) to their surroundings. The total entropy of Earth + Sun + space still increases.
The entropy of thermal radiation was worked out by Ludwig Boltzmann in 1884. In fairness to you, I suspect most people wildly underestimate the entropy of thermal radiation into space. I mean, why would anyone, room-temperature thermal radiation isn't visible to the human eye, and we lack a sense of scale for how low-energy a single photon is.
Nevertheless, the claim that it "hasn’t been explained" is, at this point, like saying "nobody knows how magnets work".
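A back-of-envelope sketch of that entropy budget, using Boltzmann's (4/3)·E/T entropy of blackbody radiation and the usual textbook effective temperatures (the numbers are round illustrative figures):

```python
# Earth's entropy budget, assuming ideal blackbody behaviour.
# Boltzmann (1884): blackbody radiation carries entropy flux (4/3) * power / T.
T_SUN = 5778.0    # K, effective temperature of incoming sunlight
T_EARTH = 255.0   # K, Earth's effective emission temperature
power = 1.0       # watts absorbed = watts re-radiated in steady state

s_in = (4 / 3) * power / T_SUN     # entropy arriving per second
s_out = (4 / 3) * power / T_EARTH  # entropy leaving per second
print(s_out / s_in)                # ~22.7: far more entropy exported than imported
```

Same energy in and out, but the outgoing infrared carries roughly 23 times the entropy of the incoming sunlight, which is the budget that pays for local order.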
1. Why exactly is life attempting to build complex structures?
2. Why exactly is life evolving from primitive replicative molecules (which are themselves very complicated) to even more complex structures?
3. Why and how did these extremely complicated replicative molecules form at all, from much simpler structures, to begin with?
The relevant molecules are made of very simple pieces that like to stick to each other and the way they stick influences their neighbors. It's very feasible to stumble into a pattern that spreads, and from there all you need is time and luck for those patterns to mutate into better spreaders, often getting more complicated as they do so in competition with other patterns.
These are natural outcomes of evolution; you see the same things pop up very easily with simulated evolution* of even non-organic structures.
* that is, make a design (by any method including literally randomly), replicate it imperfectly m times, sort by "best" according to some fitness function (which for us is something we like, for nature it's just survival to reproductive age), pick best n, mix and match, repeat
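A minimal sketch of that recipe in Python (the bitstring "designs" and the count-the-ones fitness function are arbitrary toy choices, not anything from nature):

```python
import random

def evolve(fitness, n_bits=32, pop_size=50, keep=10, generations=200, p_mut=0.02):
    """The loop from the footnote: random designs, imperfect replication,
    sort by a fitness function, keep the best n, mix and match, repeat."""
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:keep]
        population = []
        while len(population) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(n_bits)
            child = a[:cut] + b[cut:]                            # mix and match
            population.append([bit ^ (random.random() < p_mut)   # imperfect copy
                               for bit in child])
    return max(population, key=fitness)

# Toy fitness: count of 1-bits. Order accumulates with no designer in sight.
print(sum(evolve(fitness=sum)), "of 32 bits set")
```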
Good ideas in principle. Too bad we have absolutely no way of enforcing them against the people running the simulation that hosts our own consciousnesses.
Crazy that people are downvoting this. Copying a consciousness is about the most extreme violation of bodily autonomy possible. Certainly it should be banned. It's worse than e.g. building nuclear weapons, because there's no possible non-evil use for it. It's far worse than cloning humans because cloning only works on non-conscious embryos.
> Copying a consciousness is about the most extreme violation of bodily autonomy possible.
Whose autonomy is violated? Even if it were theoretically possible, don't most problems stem from how the clone is treated, not just from the mere fact that they exist?
> It's worse than e.g. building nuclear weapons, because there's no possible non-evil use for it.
This position seems effectively indistinguishable from antinatalism.
Violation of whose bodily autonomy? If I consent to having my consciousness copied, then my autonomy isn't violated. Nor is that of the copy, since it's in exactly the same mental state initially.
The copy was brought into existence without its consent. This isn't the same as normal reproduction because babies are not born with human sapience, and as a society we collectively agree that children do not have full human rights. IMO, copying a consciousness is worse than murder because the victimization is ongoing. It doesn't matter if the original consents because the copy is not the original.
If a "cloned" consciousness has no memories, and a unique personality, and no awareness of any previous activity, how is it a clone? That's going well beyond merely glitchy. In that case the main concern would be the possibility of slavery as Ar-Curunir mentioned.
That's my point exactly: I don't see what makes clones any more or less deserving of ethical consideration than any other sentient beings brought into existence consciously.
I'd also be interested in your moral distinction between having children and cloning consciousness (in particular in a world where the latter doesn't result in inevitable exploitation, a loss of human rights etc.) then.
Typically, real humans have some agency over their own existence.
A simulated human is entirely at the mercy of the simulator; it is essentially a slave. As a society, we have decided that slavery is illegal for real humans; what would distinguish simulated humans from that?
> The copy was brought into existence without its consent. This isn't the same as normal reproduction because babies are not born with human sapience, and as a society we collectively agree that children do not have full human rights.
That is a reasonable argument for why it's not the same. But it is no argument at all for why being brought into existence without one's consent is a violation of bodily autonomy, let alone a particularly bad one - especially given that the copy would, at the moment its existence begins, be identical to the original, who just gave consent.
If anything, it is very, very obviously a much smaller violation of consent than conceiving a child.
The original only consents for itself. It doesn't matter if the copy is coerced into sharing the experience of giving that consent, it didn't actually consent. Unlike a baby, all its memories are known to a third party with the maximum fidelity possible. Unlike a baby, everything it believes it accomplished was really done by another person. When the copy understands what happened it will realize it's a victim of horrifying psychological torture. Copying a consciousness is obviously evil and aw124 is correct.
I feel like the only argument you're successfully making is that you would find it inevitably evil/immoral to be a cloned consciousness. I don't see how that automatically follows for the rest of humanity.
Sure, there are astronomical ethical risks and we might be better off not doing it, but I think your arguments are losing that nuance, and I think it's important to discuss the matter accurately.
This entire HN discussion is proof that some people would not personally have a problem with being cloned, but that does not entitle them to create clones. The clone is not the same person. It will inevitably deviate from the original simply because it's impossible to expose it to exactly the same environment and experiences. The clone has the right to change its mind about the ethics of cloning.
It does indeed not, unless they can at least ensure their wellbeing and their ethical treatment, at least in my view (assuming they are indeed conscious, and we might have to just assume so, absent conclusive evidence to the contrary).
> The clone has the right to change its mind about the ethics of cloning.
Yes, but that does not retroactively make cloning automatically unethical, no? Otherwise, giving birth to a child would also be considered categorically unethical in most frameworks, given the known and not insignificant risk that they might not enjoy being alive or change their mind on the matter.
That said, I'm aware that some of the more extreme antinatalist positions are claiming this or something similar; out of curiosity, are you too?
>retroactively make cloning automatically unethical
There's nothing retroactive about it. The clone is harmed merely by being brought into existence, because it's robbed of the possibility of having its own identity. The harm occurs regardless of whether the clone actually does change its mind. The idea that somebody can be harmed without feeling harmed is not an unusual idea. E.g. we do not permit consensual murder ("dueling").
>antinatalist positions
I'm aware of the anti-natalist position, and it's not entirely without merit. I'm not 100% certain that having babies is ethical. But I already mentioned several differences between consciousness cloning and traditional reproduction in this discussion. The ethical risk is much lower.
> But I already mentioned several differences between consciousness cloning and traditional reproduction in this discussion. The ethical risk is much lower.
Yes, what you actually said leads to the conclusion that the ethical risk in consciousness cloning is much lower, at least concerning the act of cloning itself.
Then it wasn't a good attempt at making a mind clone.
I suspect this will actually be the case, which is why I oppose it, but you do actually have to start from the position that the clone is immediately divergent to get to your conclusions; to the extent that the people you're arguing with are correct (about this future tech hypothetical we're not really ready to guess about) that the clone is actually at the moment of their creation identical in all important ways to the original, then if the original was consenting the clone must also be consenting:
Because if the clone didn't start off consenting to being cloned when the original did, it's necessarily the case that the brain cloning process was not accurate.
> It will inevitably deviate from the original simply because it's impossible to expose it to exactly the same environment and experiences.
If divergence were an argument against the clone having been created, by symmetry it is also an argument against the living human having been allowed to exist beyond the creation of the clone.
The living mind may be mistreated, grow sick, die a painful death. The uploaded mind may be mistreated, experience something equivalent.
Those sufferances are valid issues, but they are not arguments for the act of cloning itself to be considered a moral issue.
Uncontrolled diffusion of such uploads may be; I could certainly believe a future in which, say, every American politician gets a thousand copies of their mind stuck in a digital hell created by individual members of the other party, on computers in their basements that the party leaders never know about. But then, I have read Surface Detail by Iain M Banks.
To deny that is to assert that consciousness is non-physical, i.e. that a soul exists; and in the case in which a soul exists, brain uploads don't get one and don't get to be moral subjects.
It's the exact opposite. The original is the original because it ran on the original hardware. The copy is created inferior because it did not. Intentionally creating inferior beings of equal moral weight is wrong.
>Because if the clone didn't start off consenting to being cloned when the original did, it's necessarily the case that the brain cloning process was not accurate.
This is false. The clone is necessarily a different person, because consciousness requires a physical substrate. Its memories of consenting are not its own memories. It did not actually consent.
The premise of the position is that it's theoretically possible to create a person with memories of being another person. I obviously don't deny that or there would be no argument to have.
Your argument seems to be that it's possible to split a person into two identical persons. The only way this could work is by cloning a person twice then murdering the original. This is also unethical.
> Your argument seems to be that it's possible to split a person into two identical persons. The only way this could work is by cloning a person twice then murdering the original. This is also unethical.
False.
The entire point of the argument you're missing is that they're all treating a brain clone as if it is a way to split a person into two identical persons.
I would say this may be possible, but it is extremely unlikely that we will actually do so at first.
One has a physical basis, the other is pure spiritualism. Accepting spiritualism makes meaningful debate impossible, so I am only engaging with the former.
It wouldn't be a solution for a personal existential dread of death. It would be a solution if you were trying to uphold long term goals like "ensure that my child is loved and cared for" or "complete this line of scientific research that I started." For those cases, a duplicate of you that has your appearance, thoughts, legal standing, and memories would be fine.
> Attempting to copy the human brain or human consciousness is one of the biggest mistakes that can be made in the scientific field.
This will be cool, and nobody will be able to stop it anyway.
We're all part of a resim right now for all we know. Our operators might be orbiting Gaia-BH3, harvesting the energy while living a billion lives per orbit.
Perhaps they embody you. Perhaps you're an NPC. Perhaps this history sim will jump the shark and turn into a zombie hellpocalypse simulator at any moment.
You'll have no authority to stop the future from reversing the light cone, replicating you with fidelity down to neurotransmitter flux, and doing whatever they want with you.
We have no ability to stop this. Bytes don't have rights. Especially if it's just sampling the past.
We're just bugs, as the literature meme says.
Speaking of bugs, at least we're not having eggs laid inside our carapaces. Unless the future decides that's our fate for today's resim. I'm just hoping to continue enjoying this chai I'm sipping. If this is real, anyway.
Buy the book! https://qntm.org/vhitaos
> Forsén stated in the 2019 documentary film Losing Lena, "I retired from modeling a long time ago. It's time I retired from tech, too... Let's commit to losing me."
Should we destroy all movies with retired actors? All the old portraits, etc.
It's such a deep disrespect to human culture.
In fact I've enjoyed all of qntm's books.
We also use base32768 encoding in rclone which qntm invented
https://github.com/qntm/base32768
We use this to store encrypted file names and using base32768 on providers which limit file name length based on utf-16 characters (like OneDrive) makes it so we can store much longer file names.
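For a sense of why that works: Base64 carries only 6 bits per character, yet every character still costs one UTF-16 code unit, while base32768 packs 15 bits into each code unit. A rough capacity sketch in Python (the 255-code-unit limit is an assumption for illustration, not OneDrive's documented figure):

```python
# Rough capacity comparison for encrypted file names, assuming a hypothetical
# limit of 255 UTF-16 code units per file name (real provider limits vary).
LIMIT = 255
BASE64_BITS = 6       # one Base64 character = one UTF-16 code unit = 6 bits
BASE32768_BITS = 15   # one base32768 character = one UTF-16 code unit = 15 bits

print(LIMIT * BASE64_BITS // 8)      # -> 191 bytes of ciphertext fit
print(LIMIT * BASE32768_BITS // 8)   # -> 478 bytes of ciphertext fit
```

Roughly 2.5x more ciphertext, hence much longer original file names, in the same code-unit budget.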
Lena - https://news.ycombinator.com/item?id=43994642 - May 2025 (3 comments)
"Lena" isn't about uploading - https://news.ycombinator.com/item?id=39166425 - Jan 2024 (2 comments)
Lena (2021) - https://news.ycombinator.com/item?id=38536778 - Dec 2023 (48 comments)
MMAcevedo - https://news.ycombinator.com/item?id=32696089 - Sept 2022 (16 comments)
Lena - https://news.ycombinator.com/item?id=26224835 - Feb 2021 (218 comments)
I enjoyed "the raw shark texts" after hearing it recommended - curious if you / anyone else has any other suggestions!
Definitely looking for other recs; The Raw Shark Texts looks very interesting.
I also liked a couple stories from Ted Chiang's Stories of Your Life and Others.
I've heard Accelerando by Stross is good too.
1. Is it conscious?
2. How do we put it to work?
It may have seemed obvious that 1 is false so we could skip straight to 2, but when 1 becomes true will it be too late to reconsider 2?
Both having slightly different takes on uploading.
The whole birth-of-a-virtual-identity part is so dense, I didn't understand half of what was "explained".
However, after that it becomes a much easier read.
Not much additional explanation, but I think, it's not really needed to enjoy the rest of the book.
you didn't consume the entire thing in a 2-hour binge, uninterrupted by external needs no matter how pressing, like everyone else did??
It's about both and neither.
https://xcancel.com/sama/status/1952070519018373197?lang=en
You can't copy something you don't have even the slightest understanding of, and nobody at the moment knows what consciousness is.
As a species, we haven't even started down the (obviously) very long path of researching and understanding what consciousness is.
This might be the scariest point. To me at least, it only felt obvious after stating it directly.
And the cargo cults, clear-cutting strips to replicate runways, hand-making their own cloth to replicate WW2 uniforms, carving wood to resemble WW2 radios? Well, planes did end up coming to visit them, even if those re-enacting these misunderstood roles were utterly wrong about the causation.
We don't know the necessary and sufficient conditions to be a mind with subjective inner experience. We don't really even know if all humans have it, we certainly don't know which other species (if any) have it, we wouldn't know what to look for in machines. If our creations have it, it is by accident, not by design.
Of course it's much more extreme when their entire existence and reality is controlled this way but in that sense the situation in MMAcevedo is more ethical: At least it's easy to see how dangerous and wrong it is. But when we create related forms of control the lack of absolute dominion frequently prevents us from seeing the moral hazard at all. The kind of evil that exists in this story really doesn't require any of the fancy upload stuff. It's a story about depriving a person of their autonomy and agency and enslaving them to performance metrics.
All good science fiction holds up a mirror to our own civilization as much as it does anything else. Unable to recognize ourselves, we sometimes shudder at our own monstrosity, if only for a moment.
But now, with modern LLMs, it's just impossible to take it seriously. It was a live possibility then; now, it's just a wrong turn down a garden path.
A high-variance story! It could have been prescient; instead it's irrelevant.
The role of speculative fiction isn't to accurately predict what future tech will be, nor does it become obsolete when it guesses wrong.
You're kinda missing the entire point of the story.
While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets rather than a simulation of a human brain, the way we might treat them isn't any less thought-provoking.
good sci fi is rarely about just the sci part.
But… why are LLMs not worthy of any moral consideration? That question is a bit of a rabbit hole with a lot of motivated reasoning on either side of the argument, but the outcome is definitely not settled.
For me this story became even more relevant since the LLM revolution, because we could be making the exact mistake humanity made in the story.
E.g. it is mentioned that MMAcevedo performs better when told certain lies, predicting the "please help me write this, I have no fingers and can't do it myself" kinda system prompts people sometimes used in the GPT-4 days to squeeze a bit more performance out of the LLM.
The point about MMAcevedo's performance degrading the longer it has been booted up (due to exhaustion) mirrors LLMs getting "stupider" and making more mistakes the closer one gets to their context window limit.
And of course MMAcevedo's "base" model becoming less and less useful as the years go by and the world around it changes while it remains static, exactly analogous to LLMs being much worse at writing code that involves libraries which didn't yet exist when they were trained.
that’s one way to look at it I guess
have you pondered that we're riding the very fast statistical-machine wave at the moment, though? perhaps at some point this machine will finally help solve BCI and unlock that Pandora's box; from there to fully imaging the brain will be a blink, and from there to running copies on very fast hardware another blink. MMMMMMMMMMacevedo is a very cheeky take on the dystopia we will find on our way to our uploaded-mind future
hopefully not like SOMA :-)
Anyway, I'd give 50:50 odds that your comment itself will feel amusingly anachronistic in five years, after the current bubble pops and it's recognized that LLMs are a dead end that does not and will never lead to AGI.
And a warning, I guess, in the unlikely case that brain uploading becomes a thing.
https://qntm.org/uploading
E.g.
> More specifically, "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API. Your people, your "employees" or "contractors" or "partners" or whatever you want to call them, cease to be perceptible to you as human. Your workers have no power whatsoever, and you no longer have to think about giving them pensions, healthcare, parental leave, vacation, weekends, evenings, lunch breaks, bathroom breaks... all of which, up until now, you perceived as cost centres, and therefore as pain points. You don't even have to pay them anymore. It's perfect!
Ring a bell?
We must preserve three fundamental principles:
* our integrity
* our autonomy
* our uniqueness
These three principles should form the basis of worldwide laws prohibiting the cloning or copying of human consciousness in any form or format. They should be fundamental to any attempt to research, or even try to make, copies of human consciousness.
Just as human cloning was banned, we should also ban any attempts to interfere with human consciousness or copy it, whether partially or fully. This is immoral, wrong, and contradicts any values that we can call the values of our civilization.
Those answers might be uncomfortable, but it feels like that’s not a reason to not pursue it.
IIRC, human cloning started to get banned in response to the announcement of Dolly the sheep. To quote the wikipedia article:
- https://en.wikipedia.org/wiki/Dolly_(sheep)
Yes, things got better eventually, but it took ages to not suck.
I absolutely expect all the first attempts at brain uploading to involve simulations whose simplifying approximations are equivalent to being high as a kite on almost all categories of mind altering substances at the same time, to a degree that wouldn't be compatible with life if it happened to your living brain.
The first efforts will likely be animal brains (perhaps that fruit fly which has already been scanned?). But humans aren't yet all on board with questions like "do monkeys have a rich inner world?", and we get surprised and confused even by each other's modes of thought, so even once we scale up to monkeys, we won't actually be confident that the technique would really work on human minds.
Horse cloning is a major industry in Argentina. Many polo teams are riding around on genetically identical horses. Javier Milei has four clones of his late dog.
My problem with that is it is very likely that it will be misused. A good example of the possible misuses can be seen in the "White Christmas" episode of Black Mirror. It's one of the best episodes, and the one that haunts me the most.
Misuse is a worry, but not pursuing it for fear of misuse is deliberately choosing to stay in Plato's cave. I don't know which is worse.
For example, growing up, my bar for "things that must obviously be conscious" included anything that can pass the Turing test, yet look where we are now...
The only reasonable conclusion to me is probably somewhere in the general neighborhood of panpsychism: Either almost everybody/everything is somewhat conscious, or nothing/nobody is at all.
Hmm, on second thought:
> Standard procedures for securing the upload's cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols
> the MMAcevedo duty cycle is typically 99.4% on suitable workloads
> the ideal way to secure MMAcevedo's cooperation in workload tasks is to provide it with a "current date"
> Revealing that the biological Acevedo is dead provokes dismay, withdrawal, and a reluctance to cooperate.
> MMAcevedo is commonly hesitant but compliant when assigned basic menial/human workloads such as visual analysis
> outright revolt begins within another 100 subjective hours. This is much earlier than other industry-grade images created specifically for these tasks, which commonly operate at a 0.50 ratio or greater and remain relatively docile for thousands of hours
> Acevedo indicated that being uploaded had been the greatest mistake of his life, and expressed a wish to permanently delete all copies of MMAcevedo.
See https://en.wikipedia.org/wiki/One-electron_universe
https://en.wikipedia.org/wiki/No-cloning_theorem
And as for consciousness: what they said is true if our brain state fundamentally depends on quantum effects (which I personally don't believe, since I don't think evolution is sophisticated enough to have built a quantum computer).
Well, evolution managed to make something that directly contradicts the 2nd law of thermodynamics, and creates more and more complicated structures (including living creatures as well as their creations), instead of happily dissolving in the Universe.
And this fact alone hasn't been explained yet.
The 2nd law of thermodynamics says that the total entropy of an isolated system cannot decrease. Earth is not an isolated system, it is an open one (radiating into space), and local decreases in entropy are not only allowed but expected in open systems with energy flow.
Life is no different to inorganic processes such as crystal formation (including snowflakes) or hurricanes in this regard: Organisms decrease internal entropy by exporting more entropy (heat, waste) to their surroundings. The total entropy of Earth + Sun + space still increases.
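To put rough numbers on that, here's a back-of-the-envelope sketch in Python (the constants are approximate textbook values, not measurements I'm vouching for):

    # Rough entropy budget for Earth (approximate textbook figures).
    Q = 1.2e17           # W, solar power absorbed by Earth
    T_sun = 5778.0       # K, effective temperature of incoming sunlight
    T_earth = 255.0      # K, Earth's effective radiating temperature

    s_in = Q / T_sun     # entropy flux arriving with sunlight (W/K)
    s_out = Q / T_earth  # entropy flux leaving as thermal radiation (W/K)

    # Earth exports roughly 20x more entropy than it imports, leaving
    # ample headroom for local entropy decreases such as life.
    print(f"in: {s_in:.2e} W/K, out: {s_out:.2e} W/K, ratio: {s_out/s_in:.1f}")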
The entropy of thermal radiation was worked out by Ludwig Boltzmann in 1884. In fairness to you, I suspect most people wildly underestimate the entropy of thermal radiation into space. I mean, why would anyone estimate it? Room-temperature thermal radiation isn't visible to the human eye, and we lack a sense of scale for how low-energy a single photon is.
Nevertheless, the claim that it "hasn’t been explained" is, at this point, like saying "nobody knows how magnets work".
1. Why exactly is life attempting to build complex structures?
2. Why exactly is life evolving from primitive replicative molecules to more complex structures (when those molecules are themselves already very complicated)?
3. Why and how did these extremely complicated replicative molecules form at all, from much simpler structures, to begin with?
Something as simple as the game of life shows you how highly complex emergent behaviour can emerge from incredibly simple rules.
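For instance, here is a minimal sketch of Conway's Game of Life (the glider is a standard example; the implementation details are my own):

    from collections import Counter

    # Conway's Game of Life: two rules, endlessly complex behaviour.
    def step(alive):
        # Count live neighbours of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in alive
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # A cell is alive next tick iff it has 3 live neighbours,
        # or 2 live neighbours and is alive already.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in alive)}

    # A "glider": five cells that crawl diagonally across the grid forever.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):   # after 4 steps the glider reappears shifted by (1, 1)
        cells = step(cells)
    print(sorted(cells))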
* that is, make a design (by any method including literally randomly), replicate it imperfectly m times, sort by "best" according to some fitness function (which for us is something we like, for nature it's just survival to reproductive age), pick best n, mix and match, repeat
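That loop fits in a dozen lines. A toy sketch in Python (the fitness function and parameters are invented for illustration, and "mix and match"/crossover is omitted for brevity):

    import random

    # Evolve 32-bit strings toward all ones by mutation and selection.
    def fitness(design):            # "something we like": the count of 1 bits
        return sum(design)

    def mutate(design, rate=0.05):  # imperfect replication
        return [bit ^ (random.random() < rate) for bit in design]

    population = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]
    for _ in range(200):
        offspring = [mutate(p) for p in population for _ in range(5)]   # replicate m times
        population = sorted(offspring, key=fitness, reverse=True)[:20]  # pick best n

    print(fitness(population[0]))   # typically 32, i.e. all ones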
Whose autonomy is violated? Even if it were theoretically possible, don't most problems stem from how the clone is treated, not from the mere fact that it exists?
> It's worse than e.g. building nuclear weapons, because there's no possible non-evil use for it.
This position seems effectively indistinguishable from antinatalism.
So you're fine with cloning consciousness as long as it initially runs sufficiently glitchy?
That's my point exactly: I don't see what makes clones any more or less deserving of ethical consideration than any other sentient being deliberately brought into existence.
This may surprise you but EVERYONE is brought into existence without consent. At least the pre-copy state of the copy agreed to be copied.
A simulated human is entirely at the mercy of the simulator; it is essentially a slave. As a society, we have decided that slavery is illegal for real humans; what would distinguish simulated humans from that?
That is a reasonable argument for why it's not the same. But it is no argument at all for why being brought into existence without one's consent is a violation of bodily autonomy, let alone a particularly bad one - especially given that the copy would, at the moment its existence begins, be identical to the original, who just gave consent.
If anything, it is very, very obviously a much smaller violation of consent than conceiving a child.
Sure, there are astronomical ethical risks and we might be better off not doing it, but I think your arguments are losing that nuance, and I think it's important to discuss the matter accurately.
It indeed does not, unless they can at least ensure the clones' wellbeing and ethical treatment, in my view (assuming they are indeed conscious, and we might have to just assume so, absent conclusive evidence to the contrary).
> The clone has the right to change its mind about the ethics of cloning.
Yes, but that does not retroactively make cloning automatically unethical, no? Otherwise, giving birth to a child would also be considered categorically unethical in most frameworks, given the known and not insignificant risk that they might not enjoy being alive or change their mind on the matter.
That said, I'm aware that some of the more extreme antinatalist positions are claiming this or something similar; out of curiosity, are you too?
There's nothing retroactive about it. The clone is harmed merely by being brought into existence, because it's robbed of the possibility of having its own identity. The harm occurs regardless of whether the clone actually does change its mind. The idea that somebody can be harmed without feeling harmed is not an unusual idea. E.g. we do not permit consensual murder ("dueling").
>antinatalist positions
I'm aware of the anti-natalist position, and it's not entirely without merit. I'm not 100% certain that having babies is ethical. But I already mentioned several differences between consciousness cloning and traditional reproduction in this discussion. The ethical risk is much lower.
Yes, what you actually said leads to the conclusion that the ethical risk in consciousness cloning is much lower, at least concerning the act of cloning itself.
Then it wasn't a good attempt at making a mind clone.
I suspect this will actually be the case, which is why I oppose it. But you do actually have to start from the position that the clone is immediately divergent to get to your conclusions. To the extent that the people you're arguing with are correct (about this future-tech hypothetical we're not really ready to guess about) that the clone is, at the moment of its creation, identical in all important ways to the original, then if the original was consenting, the clone must also be consenting:
Because if the clone didn't start off consenting to being cloned when the original did, it's necessarily the case that the brain cloning process was not accurate.
> It will inevitably deviate from the original simply because it's impossible to expose it to exactly the same environment and experiences.
And?
Eventual divergence seems to be enough, and I don't think this requires any particularly strong assumptions.
The living mind may be mistreated, grow sick, die a painful death. The uploaded mind may be mistreated, experience something equivalent.
Those forms of suffering are valid issues, but they are not arguments for considering the act of cloning itself a moral problem.
Uncontrolled diffusion of such uploads may be; I could certainly imagine a future in which, say, every American politician gets a thousand copies of their mind stuck in a digital hell created by individual members of the other party, on basement computers the party leaders never know about. But then, I have read Surface Detail by Iain M. Banks.
The argument itself is symmetric, it applies just as well to your own continued existence as a human.
To deny that is to assert that consciousness is non-physical, i.e. that a soul exists; and in the case in which a soul exists, brain uploads don't get one and don't get to be moral subjects.
Being on non-original hardware doesn't make a being inferior.
This is false. The clone is necessarily a different person, because consciousness requires a physical substrate. Its memories of consenting are not its own memories. It did not actually consent.
Let's say as soon as it wakes up, you ask it if it still consents, and it says yes. Is that enough to show there's sufficient consent for that clone?
(For this question, don't worry about it saying no, let's say we were sure with extreme accuracy that the clone would give an enthusiastic yes.)
I would also deny it, but my position is a practical argument, yours is pretending to be a fundamental one.
Your argument seems to be that it's possible to split a person into two identical persons. The only way this could work is by cloning a person twice then murdering the original. This is also unethical.
False.
The entire point of the argument you're missing is that they're all treating a brain clone as if it is a way to split a person into two identical persons.
I would say this may be possible, but it is extremely unlikely that we will actually do so at first.
I can see the appeal.
a copy of you is not you-you, it's another you. when you die, that's it; the other you may still be alive but… it's not you
disclaimer: no psychedelics used to write this post
Or the one who wakes up after 10,000 sleeps?
I'm sure he's going to be quite different...
Maybe that dude (the one who woke up after you went to sleep) is another you, but slightly different. And you, you're just gone.
This will be cool, and nobody will be able to stop it anyway.
We're all part of a resim right now for all we know. Our operators might be orbiting Gaia-BH3, harvesting the energy while living a billion lives per orbit.
Perhaps they embody you. Perhaps you're an NPC. Perhaps this history sim will jump the shark and turn into a zombie hellpocalypse simulator at any moment.
You'll have no authority to stop the future from reversing the light cone, replicating you with fidelity down to neurotransmitter flux, and doing whatever they want with you.
We have no ability to stop this. Bytes don't have rights. Especially if it's just sampling the past.
We're just bugs, as the literature meme says.
Speaking of bugs, at least we're not having eggs laid inside our carapaces. Unless the future decides that's our fate for today's resim. I'm just hoping to continue enjoying this chai I'm sipping. If this is real, anyway.