I'm fully willing to believe I just don't "get it," but I took a pretty deep dive into quantum computing and the underlying mechanics, and I kind of got the sense (with QC) that nobody really knows what they are talking about. I got this feeling so strongly that I stopped studying the topic altogether.
I’m probably way off base and I’m probably missing some insights that I could get by going to school or something, but that was just my experience with the subject.
However, it still doesn't really address the core question of when the collapse actually occurs. All it really seems to add is that the environment is an "observer" and that decoherence actually causes the collapse.
> I’m probably way off base and I’m probably missing some insights that I could get by going to school
A school would usually teach the "shut up (about philosophy) and calculate" approach. These philosophical problems about the meaning of quantum mechanics have been with us for 100 years, and mainstream physics sees them as too hard or even intractable, and thus as a waste of time.
These debates over the interpretation of Quantum Mechanics (i.e. what ultimately happens when a “measurement” takes place) are important but don’t bear on the effectiveness of quantum computing. Regardless of your favorite interpretation (almost) everyone agrees that quantum computers should work and be able to do things classical computers cannot.
"I think I can safely say that nobody understands quantum mechanics."
--Richard Feynman
You're far from alone. Quantum physics is tricky because it frequently doesn't agree with our physical intuition. Humans are used to dealing with macroscopic objects. They surround us for our whole lives. Matter behaves in surprisingly different ways at the level of single quanta. Seemingly impossible things flop out of the math and then clever experiments show that reality is consistent with the math, but we struggle to reach the point where that reality feels correct. When we try to translate the math into human language, we often wind up overloading words and concepts in a way that can be misleading or even false.
Perhaps we just haven't reached the point where things are sufficiently well explained and simplified, but it may be that quantum physics will always seem strange and counter-intuitive.
> Quantum physics is tricky because it frequently doesn't agree with our physical intuition.
Quantum physics is tricky for two separate reasons.
(i) The mathematical theory (Schrödinger equation, wave function, operators, probabilities) is solid and well-defined, but may feel unintuitive, as you say.
(ii) But quantum mechanics is also an incomplete theory. Even if you learn to be at peace with the unintuitive aspects of the mathematical theory, the measurement problem remains an unsolved problem.
"The Schrödinger equation describes quantum systems but does not describe their measurement."
"Quantum theory offers no dynamical description of the "collapse" of the wave function"
Feynman, a famous man from an older era who tried to inspire, remind, and spur people...
> macroscopic objects
It's not about scale at all though. It's just that small systems tend to be observed with this other, specific property that we associate with causing "quantum" like effects. Not only do those effects happen at mesoscopic scale but aside from gravity, quantum theory already can be and is used to describe things on large scales too. Classical computers and desks are still "quantum" systems. Recently theory and experiments have developed to connect with gravity in many ways. I'm more confused when people say something is mysterious. They're usually referring to apparent randomness but I think even that is explained already with partitions or even just wave math (complementarity).
The interpretations of what the math is saying happens are varied and sometimes contradictory.
We can predict what's going to happen extremely well, we just can't tell the story of what's happening. And there's been a century of trying to avoid the weirdness and failing. The problem might just be that our brains evolved in a world that behaves so differently that we can't understand it.
I often observe that humans are wired to create causal stories, whether we intend to or not, even in circumstances we know are false.
A great example involves flipping a coin. Even people who know it's basically an independent 50/50 chance every time get drawn into thinking about "hot streaks" and "overdue for the opposite."
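To make the coin example concrete, here's a quick simulation (illustrative only; `longest_streak` is my own toy helper) showing that long runs crop up naturally in perfectly fair, independent flips:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def longest_streak(n_flips: int) -> int:
    """Flip a fair coin n_flips times and return the longest run
    of identical outcomes."""
    best = run = 0
    prev = None
    for _ in range(n_flips):
        flip = random.choice("HT")
        run = run + 1 if flip == prev else 1
        best = max(best, run)
        prev = flip
    return best

# In 1000 fair flips, a run of 8 or more identical outcomes is common --
# no "hot hand" required, just independent 50/50 chances.
print(longest_streak(1000))
```

The streaks are real; it's the causal story we attach to them ("hot", "overdue") that's the illusion.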
It's arguably a superpower that has given us lots of agriculture and tools and technology and culture, but like hunger and obesity we can't just turn it off when it gets maladaptive.
Humans are very good at pattern matching and explanation, and that's what's given us success, but false-positive matches sometimes result and need to be corrected down a bit.
Do I get this right? Wave function collapse due to measurements is not real, the wave function evolves unitarily all the time. But as quantum states get amplified into the macroscopic world, superposition states are somehow amplified asymmetrically which makes it look like wavefunction collapse.
But isn’t it conceivable, because the original quantum state contains probabilities of different outcomes, that one imprint might correspond to “up” and another to “down,” [...] [Zurek’s theory] predicts that all the imprints must be identical.
Does this not imply that there is an asymmetry, one half of the state gets imprinted, the other half neglected? This however also raises the question about the basis, what is a superposition and what is not depends on the choice of basis. Is there a special basis just as pointer states are somehow special?
Maybe I am just out of my depth, but I don't understand what problem quantum Darwinism is solving. The Schrödinger equation already explains why observers seem to agree: the ones that don't are separated from each other.
This article is making some pilot-wave-like claim on top of quantum Darwinism: that while the Schrödinger equation is real, all the 'real realness' exists in some pointer to a specific location inside it. Why does it do this? Where does this claim come from? At least collapse theories allow that the thing the Schrödinger equation is modelling is actually real up until the part where God gets out his frustum culler.
Highly recommend looking at Jacob Barandes’ formulation of quantum mechanics as non-Markovian stochastic processes. It was the first introduction to quantum mechanics I could actually follow.
In a few sentences: the evolution of a physical system (quantum and classical) can very successfully be modeled as a stochastic process, and ...
1. the state of the system is a real-valued "vector" (possibly a vector with continuous indices), or to put it another way, a "point" in state space.
2. the system's evolution is described by a real-valued "matrix" (matrix in quotes because it too may have continuous indices), defined by the laws of physics as they apply to the system.
3. the evolution of the system is modeled by repeatedly applying the matrix to the state vector, possibly in infinitesimal steps.
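For intuition only (this is the textbook Markovian special case, not Barandes's construction itself), steps 1-3 can be sketched with a two-state toy system: the state is a real probability vector, and one evolution step is multiplication by a stochastic matrix:

```python
import numpy as np

# Toy two-state system evolved as a *Markovian* stochastic process:
# the state is a real probability vector, the dynamics a stochastic matrix.
p = np.array([1.0, 0.0])           # start certainly in state 0

T = np.array([[0.9, 0.2],          # T[i, j] = Prob(next = i | now = j)
              [0.1, 0.8]])         # columns sum to 1 (column-stochastic)

for _ in range(50):                # repeatedly apply the matrix (step 3)
    p = T @ p

print(p)  # converges toward the stationary distribution [2/3, 1/3]
```

The Markov property is precisely the requirement that the same kind of matrix bridges every pair of times; dropping that requirement is where the non-Markovian generalization starts.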
The major discovery Jacob made is that, historically, folks working on stochastic processes had restricted themselves to studying "Markovian" stochastic processes, where the transition matrix has specific mathematical properties, and that restriction makes it impossible to properly model QM.
Jacob removes the constraint that the matrix obey the Markov property and lands us in an area of math that's woefully underexplored: non-Markovian stochastic processes.
The net result though: you can model quantum mechanics with simple real-valued probabilities and do away entirely with the effing complex numbers.
The whole thing is way more intuitive than the traditional complex number based approach.
Jacob also apparently formally demonstrates that his approach is equivalent to the traditional approach.
This was a good discussion on the topics involved as well; between Jacob Barandes & Tim Maudlin. Though I don't recommend watching this without first getting some familiarity with Barandes's ideas... while there's some explanatory dialog in this video I'm posting, mostly is a discussion. It's nice to see the ideas (politely) challenged and answered.
It doesn’t. Decoherence is the technical step in the Everett picture defining what a “classical branch” even is and explaining how the state vector branches. Every claim that “decoherence” somehow offers an interpretation distinct from Everett’s is pure confusion.
The article asks the same question in the last part, wondering whether it's just randomly selected. MWI proponents have always argued decoherence leads to the entire world being put into superposition as decoherence just spreads entanglement to the environment. The math never says entanglement destroys superposition beyond a certain point of complexity (many different entangled systems forming the environment).
The author does say the approach is a combination of Copenhagen and MWI, removing the outlandish parts of both. Seems to preserve the randomness of the former though.
> MWI proponents have always argued decoherence leads to the entire world being put into superposition as decoherence just spreads entanglement to the environment.
Well, duh. It's not like classical objects actually exist, or the classical/quantum divide: everything is quantum, including the "observers". The "classical observer" is a crude approximation that breaks down under a pointy enough question. Just like shorting the perfect battery (with zero internal resistance) with a perfect wire (with zero resistance): this scenario is not an approximation of any possible real scenario, so its paradoxicality (infinite current!) is irrelevant.
Random is a very interesting concept. In relation to nature we seem to use "random" as anything we can't or are currently unable to model.
To call something random doesn't mean it's impossible to model; in fact, all sorts of natural phenomena seemed random right up until they were captured by a model. One very relatable example is the motion of the planets in the night sky, which seemed erratic for ages, until the Copernican revolution.
The fact we have access to random() function in programming seems to trip many people. random() is a particular model implementation of random, but stuff in nature isn't random().
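A concrete illustration of that point: the `random()` most languages ship is a pseudorandom generator, i.e. a deterministic model of randomness. Seed it the same way and the "random" output repeats exactly:

```python
import random

# random() is a model of randomness, not randomness itself: it's a
# deterministic algorithm (a PRNG) whose output merely *passes
# statistical tests* for randomness. Same seed, same "random" numbers.
random.seed(123)
first_run = [random.random() for _ in range(3)]

random.seed(123)
second_run = [random.random() for _ in range(3)]

print(first_run == second_run)  # True: fully reproducible
```

So `random()` is exactly the kind of thing the parent describes: a particular, fully modelled implementation of "random", not randomness found in nature.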
My point is, using "just random" to do work in any scientific explanation is a crutch.
In science randomness is usually used to abstract over a large number of possible paths that result in some outcome without having to reason individually about any specific path or all such paths.
It does not have to mean something inherently non-deterministic or something that can't be modelled, although it certainly is the case that if something is inherently non-deterministic then it would necessarily have to be modelled randomly. Modelling things as a random process is very useful even in cases where the underlying phenomenon has a fully understood and deterministic model; a simple example of this would be chess. It's an entirely deterministic game with perfect information that is fully understood, but nevertheless all the best chess engines model positions probabilistically and use randomness as part of their search.
There's disagreement on this. You seem to just be saying that brute facts or brute contingencies don't exist, but I suspect most scientists would disagree with that.
The use of "random" as explanation or characterization in science has certainly spanned everything from "we don't know", to "there is inherent indivisible physical randomness".
And I would agree, in the latter case it is a crutch. A postulate that something gets decided by no mechanisms whatsoever (randomness obeying a distribution still leaves the unexplained "choice").
It is remarkable that people still suggest the latter, when the theory, both in theory and experiment, doesn't require a physical choice at all (even if we experience a choice, that experience is explained without the universe making a choice).
It is not incomplete to say that something does not require explanation, nor is it saying it's "magic". It is a cost that your model might incur, that's it.
In this paper a plurality of physicists stated that they felt that the initial conditions of the universe are brute facts that warrant no further explanation. This is not "our model doesn't yet account for it", it's "there is no explanation to be given".
A model is incomplete if it doesn't explain something.
That doesn't make a model wrong. All models we have are partial explanations.
But that doesn't make it rational to claim that an incomplete model is complete. Or to treat unexplained specifics as inherently "just so", without cause or reason (i.e. magic), and we must just accept them as unexplainable instead of pursuing them with further inquiry.
> How, for example, are we supposed to think about the domain in which all possibilities still exist before decoherence? How “real” is it?
The wave function is the real object. The little balls we like to imagine particles as are just our perception of wave functions narrowed down by entanglement with macroscopic objects. The way we measure anything is through the entanglement between the measured entity and our macroscopic instruments.
To me, the fact that quantum mechanics is intrinsically "random" and unknowable beforehand, is what makes living bearable in this universe as a sentient being. If we, two legged viruses that we are, could reach a level of understanding that could show the universe to be fully deterministic and every future state to be knowable given that you know the current states, then this human condition would be impossible to stand. I love the fact that we just can't predict the future. It's what makes existing be a good thing instead of a bad one.
#1: You do not want randomness. You may believe you do, until the Titanic crashes into your front yard and your significant other vanishes into thin air. You want quite a lot of predictability, up to a degree where it might not even matter if things at the lowest level of existence are not perfectly deterministic.
#2: What's so bad about thinking about life as an exciting rollercoaster ride? The tracks are laid but the ride is still fun.
If everything is deterministic, i.e. determined, there's no free will, so you/I are just a NPC. I prefer to live in a universe where my conscious decisions matter, or at least can't be predicted beforehand.
Randomness doesn't imply free will. What if you/I are NPCs that just roll the dice before doing something? It's not you that chose the outcome, it's the dice, i.e. the laws of physics.
I don't know how free will could actually work with any kind of universe governed by a set of laws, whether they include randomness or not. So I don't believe in it, but of course in my day to day life I act as if it exists.
Yet I don't know how qualia or subjective experiences could actually work with any kind of universe governed by a set of laws, whether they include randomness or not. But I believe I have this subjective view of the world that doesn't seem to be explainable with a set of equations.
So it's weird. At least philosophy and science agree on that.
> None of the leading interpretations of quantum theory are very convincing. They ask us to believe, for example, that the world we experience is fundamentally divided from the subatomic realm it’s built from. Or that there is a wild proliferation of parallel universes, or that a mysterious process causes quantumness to spontaneously collapse.
Actually, the "many worlds" "interpretation", simply treats the highly successful equations as meaning what they say.
And it is misnamed. The field equations describe a highly interconnected "web universe" of "tangles" (what I call spans of entangled interactions) and "spangles" (my shorthand for superpositions, i.e. disjoint interactions of particles; think of all the alternate lines leading to and from two distinguishable states, like star patterns). Basically, a graph of union and intersection relations where all combinations, individually and en masse, are determined exactly by the laws of conservation.
That's an amazingly good property for a theory. And we have it.
By including all consistent versions, no external information is required by the theory. It is informationally complete. A successful objective explanation. With deep experimental support that entanglement and superposition actually exist, because their interactions are easily testable.
In fact, entanglement doesn't "violate" locality, it is the more general case which explains locality. Locality is just tightly coupled entanglement/interaction. Not a fundamental constraint on connections. There is no fundamental "distance", just loose and dense connections. Locality is just what we see wherever there are patterns of dense connections. They are an effect, not a constraint.
Even in the classical world of large (highly tangled) objects, we take it for granted that dependent objects can separate over arbitrarily vast dimensions of space and time and yet return together. If that isn't entanglement over vast distances, what is it? It is a basic property of classical physics. Quantum mechanics reveals more subtlety in those maintained connections, including interactions between connections, but it didn't originate them.
Forces disappear. They become passive in an interesting way. Histories where information cancels leave structured distribution patterns behind, which to us look like forces. Cancellation is just information being conserved. Not an active force. But the results appear active.
In a similar way to how the evolutionary umbrella seems very smart and creative, when really, it is just poorly adapted individual creatures independently cancelling themselves out blindly, leaving a distributional improvement behind.
There is no additional information needed to explain the effect of quantum "collapse" because it is already explained by the fast bifurcation of disjoint tangles when lots of particles interact in an unorganized manner. It is thermodynamics being thermodynamics.
Anyone attempting to invent a mechanism for "collapse" is like someone trying to explain why the spherical Earth appears "flat" by introducing additional speculative theories. Despite the spherical world theory already explaining why it looks flat locally.
And the only reason to not take the experimentally verified field equations as a plain reading, is the result is "too big" for someone's imagination.
Our everyday experience doesn't limit reality, despite humans having trouble with theories that reveal a bigger reality, over and over and over.
Bluntly: The total field equations preserve information - that is the plain implication and guarantee for having both unions (tangles) and intersections (spangles) of interactions.
Anything else requires a universal firehose of magically appearing information to choose collapses, i.e. particular interactions, in order to explain something already explained. In other words, dressed up voodoo. And by "re-complicating", uh, "re-explaining" the already explained, introduces a ridiculous new puzzle: Where does all that pervasively intrusive relentless injection of information (that determines every single extricable particle interaction!), come from? (Occam is spinning like a particle accelerator in his grave.)
Saying it "Just Happens" is like someone "explaining" their pet version of a creator with "Just Is". It is a psychological non-taulogy for "Don't Ask Questions".
The part that I have trouble wrapping my head around with the many worlds interpretation is how I as an observer end up in one of the many bifurcations. Any links you can share that will help me understand that are welcome!
The Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/entries/qm-manyworlds/) goes into this in some depth, and it seems like the right way to think about it is say that "I" in one branch is a different entity than the "I" in a different branch. I have somehow not been able to grok it yet.
And I agree about the naming. I really dislike the name "many worlds interpretation", which seems to imply that we have to postulate the existence of these additional worlds, whereas in fact they are branches of the wavefunction exactly predicted by standard quantum mechanics.
The problem with Many Worlds is that it doesn't place a bound on the number of worlds, so you can't derive the Born Rule from it.
That's quite a serious issue. And the arguments against it - like Self-Locating Uncertainty, or Zurek's Envariance - look suspiciously circular if you pull them apart.
There's also the issue that if you don't have a mechanism that constrains probability, you can't say anything about the common mechanism of any of the worlds you're in. Your world may be some kind of lottery-winning statistical freak world which happens to have very unusual properties, and generalising from them is absolutely misleading.
There's no way of testing that, so you end up with something unfalsifiable.
> The problem with Many Worlds is that it doesn't place a bound on the number of worlds, so you can't derive the Born Rule from it.
I have no idea what this means.
Is there a bound on anything in reality, in terms of scale? Beyond its own laws?
I am reminded of how often in history, too much time, or too much scale, were unsuccessful arguments against many theories we accept today. Those critiques died without any need for special arguments, because they don't have a logical basis.
Also, there are not a number of many "worlds". That is a reflection of poor naming. There is an interleaving of all interactions, so if you zoom out, you get a smeared landscape across all configurations, from the Planck scale up.
Because the connections involve both intersection (entanglement) and union (alternate paths), we get bifurcation of classical sized paths (dense entanglements), while the individual particles continue unconcerned by how they appear to create different classical histories at large scale.
And yes it is experimentally validated. This is the theory that everyone accepts in the lab, even as larger scales of experiment continue to progress.
But some people have difficulty believing/visualizing that it continues to work at larger scales. Despite no scale limitation in the theory, no scale-related violations ever suggested experimentally, and the strong likelihood that scale limitations would produce new physics in at-scale observations of our cosmos if they did exist.
> The part that I have trouble wrapping my head around with the many worlds interpretation is how I as an observer end up in one of the many bifurcations.
Pour water down a hill. Water clings to water, and we have hills that already have lots of correlations. We get streams that break up into multiple streams.
How did one stream end up where it is? It seems like a good question, but it is circular. The stream is defined by where it is. You are here (in some circumstance), because the version of you in this circumstance is you.
A transporter accident that creates several versions of you, on several planets with different colors, doesn't need to explain to each version how they ended up at a planet with their color. Even if, for a particular copy, it seems like there should be an answer why they showed up on a planet of a particular specific color. The "why" is just: all paths were taken.
What you said here makes sense. Forgive me, but I have trouble even articulating what it is that I don’t understand correctly.
Maybe what I meant was this: if I perform a quantum experiment where the spin measurement of an electron could be spin up or spin down, the future me would end up in one of two branches: I measure spin up, or I measure spin down. There wouldn’t be any possible world where I measure a superposition of spin up and spin down, because such a state is going to decohere rapidly. This makes sense. What I’m unable to grasp is that even though the wave function of the universe contains both branches, “I” somehow experience only one of the two branches.
The answer to that, I guess, is that if the two branches are nearly orthogonal they will merrily evolve independently of each other. But somehow “I” only experience one of them.
Sorry for the rambling. I’m not able to articulate what I don’t understand.
> The future me would end up in one of two branches: I measure spin up, or I measure spin down.
The future "you's" would each see spin up, and spin down, respectively.
We are just as quantum as what we measure. There isn't a scale where entanglement and superposition turn into something else. No classical vs. quantum atoms.
Just as an up-spin qubit touching an up/down qubit results in an up-up qubit pair in superposition with an up-down pair, conserving the qubit, when we touch a qubit we get "us"-up and "us"-down versions.
No information is created. None is destroyed. We experience a correlation = "collapse" (both versions of us), but the quantum information just continues on as before, qubit conserved.
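A minimal numerical sketch of this (using a CNOT gate as an illustrative stand-in for the "touch"; the labels and gate choice are mine, not from the comment above): an "up" probe interacting with an up/down superposition ends up entangled with it, with both branches intact and the norm conserved:

```python
import numpy as np

up = np.array([1, 0], dtype=complex)     # |up>
down = np.array([0, 1], dtype=complex)   # |down>
plus = (up + down) / np.sqrt(2)          # qubit in an up/down superposition

# Joint state: qubit (first factor) in superposition, probe (second) "up"
joint = np.kron(plus, up)

# Model the "touch" as a CNOT: flip the probe iff the qubit is down.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

after = CNOT @ joint
# Result: (|up,up> + |down,down>)/sqrt(2) -- an entangled pair.
# Each "branch" of the probe sees one definite qubit value, yet the
# global superposition is intact and the norm (information) is conserved.
print(np.round(after, 3))
print(np.isclose(np.linalg.norm(after), 1.0))
```

No branch is discarded and nothing non-unitary happens; the probe simply becomes correlated with each term of the superposition, which is the "experience of collapse" described above.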
"Hard problem" makes it out to be much more difficult than it actually is. To simplify things a little bit, if you combine a spatiotemporal sense (a sense of bounded being in space and time) with a general predictive ability (the ability to freely extrapolate in time and space from one's surroundings,) "consciousness" arises necessarily. It's what having such senses feels like from the inside; the first-person view. It's a matter of degree, of course.
The writing of Chalmers and its consequences have been a catastrophe for philosophy.
It's not hard at all when you acknowledge that such senses exist in the world, and that you (like others) possess them. As an aside it tends to foster a certain tendency towards empathy.
In essence, you're asking why there's an inside to being a self-modeling system. But "inside" isn't something extraneous, something additional -- rather, it's what "self-modeling" means.
Really the "hard problem" has a very easy answer, but it's a physical/functional answer, and dualists and obscurantists simply don't like it.
It's embarrassingly silly to say but I've frequently just boiled down the hard question to the question of "where is the experience of the color blue stored in the universe?" Even as a non-dualist, I still haven't found much of an answer that I like. I'm all ears if you've got a book recommendation.
The question presupposes that "the experience of the color blue" is a discrete object that needs a storage location. But that's the dualist picture in disguise. On a functionalist view, blueness isn't stored; it's what certain neural activity constitutively is when you're that system observing that blue.
As an aside, isn't it more weird that violet and purple look indistinguishable despite being physically so different? It's said that this is because our L-cones (red-sensitive) have a secondary sensitivity peak at short wavelengths. So violet light triggers S-cones + a bit of L-cone. Purple light (red + blue) also triggers S-cones + L-cones. Similar activation pattern = same quale. It's all functional/physical.
Read Tom Cuda "Against Neural Chauvinism." Also Daniel Dennett.
What is mysterious to me is why and how chemical reactions in a certain part of my brain creates an experience of blue.
Yes some chemical change happened there, but so what.
These are not very unusual chemical reactions; they happen, and are happening, everywhere. Do all the chemical reactions going on generate an experience for some experiencer?
This is where these questions take me. Since the experience is the only thing I can be certain of, I'm less drawn to "everything is physical" answers and more drawn to ideas from phenomenology and Bishop George Berkeley. And since I'm not super religious, I'm not really comfortable with those "answers" either.
The kneejerk response would be: Are you not conscious at this present moment? If we were to modulate your spatiotemporal senses with drugs or a lobotomy, do you doubt that you would be very differently conscious, or perhaps entirely unconscious?
I mean, there is a credible first-person answer to that question of yours, which each man can answer for himself.
But considered more seriously, the "hard problem" is an artifact of treating experience as a separate thing that needs to be generated. If you accept that self-modeling systems bounded in space and time exist, you've already accepted that experience exists -- because experience is what such a system is, from the inside. There's no second step where experience gets added. The question "why is there experience?" is exactly akin to "Why is there an interior to four walls and a roof?" The interior isn't a separate thing; it's necessarily constitutive.
I'm not a dualist or anything. I'm in the "it's weird and I have no idea what the answer is" camp. And yes, I've read Dennett. I'm trying to understand your views. Lots of questions follow, but don't feel like I'm barraging you unnecessarily. Just trying to figure out your view with what seem to me like interesting questions that I myself can't really answer.
I'm using "consciousness", "subjective experiences", "senses" and "qualia" as synonyms here, but if you see a difference, please mention it. Obviously "consciousness" has many definitions that have nothing to do with the "hard problem of consciousness", so I'm using it in this sense here. I'll use "qualia" as it's the word that relates most to the hard problem of consciousness. You can substitute it with "sense"/"senses" if you like.
1. Do you view qualia as an emergent property? Of what exactly? What is a self-modeling system? Is a human one? Where would the boundaries be; would they even be defined? The human body or the brain only or the nervous system? Or whatever neurons activate when a certain thing happens, like seeing blue or feeling pain? What about animals - pigs, dogs, rats, snails, ants, bacteria? What about AI, current and theoretical?
2. Could there be a set of minimal self-modelling systems in some abstract space that are the boundary of what has qualia and what doesn't? Like, these 1000000 neurons arranged like that qualify, but if you take 1 out, they don't? Or is it a fuzzy boundary somehow?
3. What kind of statements could be made about the qualia of yourself and of others? Not sure what kind of answer I'm looking for, but how objective or truthful would those statements be? Maybe "qualia is nothing really, we only have the set of equations that govern physics and everything else is an abstraction"? Like an apple isn't anything really, it's just a badly defined set of atoms and energy. There is no "apple" or "chair". Or is it something else?
4. What are your views on meta-ethics and ethics in general? Should we care about it at all?
We have a theory whose plain reading matches experiment at all scales.
Consciousness is something else. It is tempting for humans to pair mysteries up, pyramids and aliens, or whatever. But there isn't any factual basis for linking the experience of self-awareness with quantum mechanics.
Is there a factual reason to think digital minds, in which quantum effects have been isolated from the operations of mental activity, couldn't be conscious? That seems like a premature constraint to assume.
Yes, the MWI is falsifiable. It asserts that objective collapse does not occur, therefore any observation of objective collapse (such as predicted by GRW or Penrose-Diosi) would falsify it.
It touches you, and you are just as quantum as the bit.
So two entangled versions of you follow, one entangled with each state. (Actually, as many quantum versions of you as touched the qubit, times two.)
Which is what happens, as we know from experiment, when any one qubit interacts with another independent qubit. We get the product of entangled states, each now correlated. But the different entangled states are now in superposition with each other.
So correlation/entanglement happens and is experienced, despite no collapse of superposition. No information was destroyed or created.
Each of you thinks, wow now the qubit only has one state. But that is because there are two versions of you, correlated respectively with the two uncollapsed qubit states.
Complete conservation. That is the "experience" of collapse that needs no explanation, because it is a predicted experience not requiring an actual collapse. Just as spherical Earth models don't need a special explanation for the appearance of locally flat Earth, because spherical models predict a local flat Earth experience.
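The two-correlated-branches picture above can be sketched numerically. A minimal toy model in NumPy (purely illustrative, not a claim about any specific experiment): a CNOT-style unitary entangles a stand-in "observer" qubit with a qubit in superposition, and because the evolution is unitary, the norm is conserved; no information is created or destroyed, only correlation.

```python
import numpy as np

# Single-qubit basis states
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# System qubit in an equal superposition; "observer" qubit starts in |0>
system = (zero + one) / np.sqrt(2)
observer = zero
joint = np.kron(system, observer)  # product state, not yet entangled

# CNOT: flips the observer qubit iff the system qubit is |1>
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

entangled = cnot @ joint
# Result: (|0,0> + |1,1>)/sqrt(2) -- two correlated branches only
print(np.round(entangled.real, 3))  # [0.707 0. 0. 0.707]
# Unitary evolution conserves the norm: nothing created or destroyed
print(np.isclose(np.linalg.norm(joint), np.linalg.norm(entangled)))  # True
```

Note the final state has no |0,1> or |1,0> components: each "version" of the observer is correlated with exactly one qubit state, which is the predicted experience of a definite outcome without any collapse.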
I think you're right, the many worlds interpretation makes the most sense. Unfortunately our current technology is very far from delivering any experimental confirmation or refutation of any of the mainstream interpretations.
You are right, but I think there is a more positive viewpoint.
All experiments agree with the many worlds interpretation (again, better described as a quantum web interpretation), and it is the plain Occam's Razor interpretation.
No additional flourishes are needed. That is strong theoretical support. It is the default (plain reading) interpretation already.
And it is the interpretation that doesn't just conserve in one history (i.e. conservation of energy etc.), but conserves information universally.
So again, very strong specific theoretical support.
It is the conjectures about experimentally unmotivated elaborations, like "collapses", that would also break universal conservation of information, for no theoretically necessary reason, that need dramatic new evidence to prove themselves.
If I lack any optimism, it is for conjectured complications with no evidentiary support and weaker explanatory/conservation powers. In any other context, nobody would be entertaining the need for such conjectures.
The "Quantum Collapsers" are right up there with the "Flat Earthers", or solar system "Epicycle Theorists", for not being happy with accepting a working and successful theory as is. Even though their imagined shims introduce more questions than they answer, and would dispense with its unique advantages.
What if we create a situation in a lab that can be labelled as a collapse of the wave function by interaction with a macroscopic object. Except the macroscopic object is under our control and we can reverse the collapse.
Are the Mysteries of Quantum Mechanics Beginning to Dissolve? I don’t think so.
Zurek’s Decoherence and Quantum Darwinism is thought-provoking, but it’s still speculation without broad buy-in from researchers. We might need ASI to crack these mysteries — our brains weren’t built for this kind of problem.
I think the brains of our stone age ancestors were not built for relativity either. In the end, the normal sequence of generations (having children and then dying at some point) offers "re-training" of brains. So, besides waiting/hoping for artificial intelligence, we should continue to make (and train) children. Worked great so far.
What we need are tractable experiments to test these theories.
Maybe ASI can help design these. Until it can, it will just be another voice arguing for one position over another with pretty weak arguments. Right now my money would be more on human researchers finding those experiments, though even among those, few are even trying.
"Thus the wave function can’t tell us what the quantum system is like before we measure it. "
Nothing is a particle; all measured things are probabilities that we make into certainties when we measure them.
When you stop looking at things as things, and instead see them as probabilities, it will all make sense. My hand and the beer bottle I pick up are both probabilities. Since the mind cannot navigate the world based on probabilities, it turns them into certainties.
Physical science is the only way we can perceive quantum science. There is no "collapse" outside of our brain's perception.
Quite frankly, quantum computing is probably already known or solved by a nation state (probably the United States). Similar to AI, they will release it in a safe rollout (as they deem it).
Maybe, but the AI we see in the mainstream today -- generative image/video/text models and Large Language Model chatbots -- was built by non-governmental public and private companies, with a lot of the work hitting the scene loudly and somewhat prematurely. My understanding is that the amount and type of compute needed for quantum computing is pretty intense, so there'd be a huge footprint from its manufacturing to keep it hidden.
It would be interesting if most of our confusion with quantum mechanics came from treating probabilities as independent when they are actually highly correlated. I don’t really know any physics, but I’m familiar with probability and this type of problem seems to be the most common error in interpreting probabilities.
I don't have any skin in the game, but people should be aware of Induction vs Deduction.
Induction had the earth at the center of the solar system and had the best calculations to predict where Mars was. Copernicus said the sun was at the center; the equations were simpler, but were worse at predicting the locations of the planets (until we figured out they moved in ellipses).
When we say "All swans are white, because I've never seen a black swan," it's probabilistically true. That is induction. If we found swans didn't have the gene to make black feathers, that would be deduction.
Deduction is probably the most true, if its premises are true. (But when they aren't, it is often 100% wrong.)
Induction is always semi-true.
Quantum mechanics seems to be in the stage of induction. Particles are like the earth at the center of the solar system. We need a Copernican revolution.
https://arxiv.org/pdf/1811.09062
Perhaps we just haven't reached the point where things are sufficiently well explained and simplified, but it may be that quantum physics will always seem strange and counter-intuitive.
Quantum physics is tricky for two separate reasons.
(i) The mathematical theory (Schrödinger equation, wave function, operators, probabilities) is solid and well-defined, but may feel unintuitive, as you say.
(ii) But quantum mechanics is also an incomplete theory. Even if you learn to be at peace with the unintuitive aspects of the mathematical theory, the measurement problem remains an unsolved problem.
"The Schrödinger equation describes quantum systems but does not describe their measurement."
"Quantum theory offers no dynamical description of the "collapse" of the wave function"
https://en.wikipedia.org/wiki/Wave_function_collapse#The_mea...
I'm thinking that the nature of intuition is about training your neurons to approximate stuff without needing to detour through conscious calculation.
And QM is in too high of a complexity class for this to be a thing.
I always fell back on "Spooky action at a distance"; If Einstein found it weird, I shouldn't feel that bad if I can't quite make sense of it.
> macroscopic objects
It's not about scale at all though. It's just that small systems tend to be observed with this other, specific property that we associate with causing "quantum" like effects. Not only do those effects happen at mesoscopic scale but aside from gravity, quantum theory already can be and is used to describe things on large scales too. Classical computers and desks are still "quantum" systems. Recently theory and experiments have developed to connect with gravity in many ways. I'm more confused when people say something is mysterious. They're usually referring to apparent randomness but I think even that is explained already with partitions or even just wave math (complementarity).
Could you form a specific question that you're wondering about? (Have you looked at condensed matter physics yet?)
The mathematics of QM works extremely well.
The interpretations of what the math says is happening are varied and sometimes contradictory.
We can predict what's going to happen extremely well, we just can't tell the story of what's happening. And there's been a century of trying to avoid the weirdness and failing. The problem might just be that our brains evolved in a world that behaves so differently that we can't understand it.
A great example involves flipping a coin. Even people who know it's basically an independent 50/50 chance every time get drawn into thinking about "hot streaks" and "overdue for the opposite."
It's arguably a superpower that has given us lots of agriculture and tools and technology and culture, but like hunger and obesity we can't just turn it off when it gets maladaptive.
Humans are very good at pattern matching and explanation, and that's what's given us success, but it sometimes produces false-positive matches that need to be corrected down a bit.
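The independence claim behind the coin-flip example is easy to check by simulation. A quick sketch (purely illustrative, simulating a fair coin): the frequency of heads right after a "hot streak" of three heads is still about 50/50.

```python
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Frequency of heads immediately after a run of three heads in a row
after_streak = [flips[i] for i in range(3, len(flips))
                if flips[i - 3] and flips[i - 2] and flips[i - 1]]
p = sum(after_streak) / len(after_streak)
print(round(p, 3))  # ~0.5: the coin is never "due" for tails
```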
Does this not imply that there is an asymmetry: one half of the state gets imprinted, the other half neglected? This, however, also raises the question of the basis: what is a superposition and what is not depends on the choice of basis. Is there a special basis, just as pointer states are somehow special?
This article is making some pilot-wave-like claim on top of quantum Darwinism that while the Schrödinger equation is real, all the 'real realness' exists in some pointer to a specific location inside it. Why does it do this? Where does this claim come from? At least collapse theories allow that the thing the Schrödinger equation is modelling is actually real up until the part God gets out his frustum culler.
https://www.jacobbarandes.com/
seconded
>might make sense to link to the actual material you're referring to
https://www.youtube.com/watch?v=sshJyD0aWXg
In a few sentences: the evolution of a physical system (quantum or classical) can very successfully be modeled as a stochastic process, and ...
1. the state of the system is a real-valued "vector" (possibly a vector with continuous indices), or to put it another way, a "point" in state space.
2. system evolution is described by a real-valued "matrix" (matrix in quotes because it is possibly a matrix with continuous indices), defined by the laws of physics as they apply to the system
3. evolution of the system is modeled by repeatedly applying the matrix to the system (to the vector), possibly with infinitesimal steps.
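As a toy illustration of steps 1-3 (a standard Markovian stochastic process, not Barandes's non-Markovian construction): a real-valued transition matrix is repeatedly applied to a probability vector, and the system settles into a stationary distribution.

```python
import numpy as np

# A 2-state system. Columns of T sum to 1, so total probability is conserved.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])

state = np.array([1.0, 0.0])  # start certainly in state 0
for _ in range(100):
    state = T @ state          # one evolution step: apply the matrix

print(np.round(state, 6))  # converges to the stationary distribution [2/3, 1/3]
```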
The major discovery Jacob made is that, historically, folks working on stochastic processes had restricted themselves to studying "markovian" stochastic processes, where the transformation matrix has specific mathematical properties, and this fails to be able to properly model QM.
Jacob removes the constraint that the matrix should obey markovian constraints and lands us in an area of maths that's woefully unexplored: non markovian stochastic processes.
The net result though: you can model quantum mechanics with simple real-valued probabilities and do away entirely with the effing complex numbers.
The whole thing is way more intuitive than the traditional complex number based approach.
Jacob also apparently formally demonstrates that his approach is equivalent to the traditional approach.
Really worth a read/listen.
https://www.youtube.com/watch?v=8xPvxAdmhKM
The author does say the approach is a combination of Copenhagen and MWI, removing the outlandish parts of both. Seems to preserve the randomness of the former though.
Well, duh. It's not like classical objects actually exist, or the classical/quantum divide: everything is quantum, including the "observers". The "classical observer" is a crude approximation that breaks down under a pointed enough question. Just like shorting a perfect battery (with zero internal resistance) with a perfect wire (with zero resistance): this scenario is not an approximation of any possible real scenario, so its paradoxicality (infinite current!) is irrelevant.
To call something random doesn't mean it's impossible to model; in fact, all sorts of natural facts seemed random one day before being covered by a model. One very relatable example is the motion of the planets in the night sky, which seemed random for ages, until the Copernican revolution.
The fact that we have access to a random() function in programming seems to trip many people up. random() is a particular model implementation of randomness, but stuff in nature isn't random().
My point is, using "just random" to do work in any scientific explanation is a crutch.
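That random() is a model of randomness, not randomness itself, is easy to demonstrate: a pseudo-random generator with a fixed seed is fully deterministic. A quick sketch:

```python
import random

# Two independent generators seeded identically
rng1 = random.Random(42)
rng2 = random.Random(42)

a = [rng1.random() for _ in range(5)]
b = [rng2.random() for _ in range(5)]
print(a == b)  # True: same seed, same "random" stream -- fully deterministic
```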
It does not have to mean something inherently non-deterministic or something that can't be modelled, although it certainly is the case that if something is inherently non-deterministic then it would necessarily have to be modelled randomly. Modelling things as a random process is very useful even in cases where the underlying phenomenon has a fully understood and deterministic model; a simple example of this would be chess. It's an entirely deterministic game with perfect information that is fully understood, but nevertheless all the best chess engines model positions probabilistically and use randomness as part of their search.
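Using randomness to model something fully deterministic, in the same spirit as a chess engine's random playouts, can be shown with a tiny Monte Carlo estimate of a fixed, deterministic quantity (pi):

```python
import random

random.seed(1)
n = 1_000_000
# Count random points in the unit square that land inside the quarter circle
inside = sum(random.random() ** 2 + random.random() ** 2 <= 1
             for _ in range(n))
est = 4 * inside / n
print(round(est, 2))  # ~3.14: a random process converging on a deterministic value
```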
The use of "random" as explanation or characterization in science has certainly spanned everything from "we don't know", to "there is inherent indivisible physical randomness".
And I would agree, in the latter case it is a crutch. A postulate that something gets decided by no mechanisms whatsoever (randomness obeying a distribution still leaves the unexplained "choice").
It is remarkable that people still suggest the latter, when the theory, in both formalism and experiment, doesn't require a physical choice at all (even if we experience a choice, that experience is explained without the universe making a choice).
https://arxiv.org/abs/2503.15776
In this paper a plurality of physicists stated that they felt that the initial conditions of the universe are brute facts that warrant no further explanation. This is not "our model doesn't yet account for it", it's "there is no explanation to be given".
That doesn't make a model wrong. All models we have are partial explanations.
But that doesn't make it rational to claim that an incomplete model is complete. Or to treat unexplained specifics as inherently "just so", without cause or reason (i.e. magic), and we must just accept them as unexplainable instead of pursuing them with further inquiry.
https://www.cambridge.org/core/books/decoherence-and-quantum...
Hardly. Some philosophers say that. But I don't take much from philosophers reasoning about physics.
Born after 1 hour of prompting back and forth, learning all I can about quantum mechanics. Then came up with this idea:
https://github.com/zerocool26/Quantum-Observability-Contract...
The quantum function is the real object. The little balls we like to imagine the particles as are just perceptions of quantum functions, narrowed down by entanglement with macroscopic objects. The way we measure anything is through the entanglement between the measured entity and our macroscopic instruments.
#2: What's so bad about thinking about life as an exciting rollercoaster ride? The tracks are laid but the ride is still fun.
I don't know how free will could actually work with any kind of universe governed by a set of laws, whether they include randomness or not. So I don't believe in it, but of course in my day to day life I act as if it exists.
Yet I don't know how qualia or subjective experiences could actually work with any kind of universe governed by a set of laws, whether they include randomness or not. But I believe I have this subjective view of the world that doesn't seem to be explainable with a set of equations.
So it's weird. At least philosophy and science agree on that.
What you seem to prefer is libertarian free will.
Actually, the "many worlds" "interpretation", simply treats the highly successful equations as meaning what they say.
And it is misnamed. The field equations describe a highly interconnected "web universe" of "tangles" (what I call spans of entangled interactions) and "spangles". (My shorthand for superpositions, i.e. disjoint interactions of particles. Think of all the alternate lines leading from and to two distinguishable states, like star patterns.) Basically, a graph of union and intersection relations where all combinations, individually and en masse, are determined exactly by the laws of conservation.
That's an amazingly good property for a theory. And we have it.
By including all consistent versions, no external information is required by the theory. It is informationally complete. A successful objective explanation. With deep experimental support that entanglement and superposition actually exist, because their interactions are easily testable.
In fact, entanglement doesn't "violate" locality, it is the more general case which explains locality. Locality is just tightly coupled entanglement/interaction. Not a fundamental constraint on connections. There is no fundamental "distance", just loose and dense connections. Locality is just what we see wherever there are patterns of dense connections. They are an effect, not a constraint.
Even in the classical world of large (highly tangled) objects, we take it for granted that dependent objects can separate over arbitrarily vast dimensions of space and time and yet return together. If that isn't entanglement over vast distances, what is it? It is a basic property of classical physics. Quantum mechanics reveals more subtlety in those maintained connections, including interactions between connections, but it didn't originate them.
Forces disappear. They become passive in an interesting way. Histories where information cancels leave structured distribution patterns behind, which to us look like forces. Cancellation is just information being conserved. Not an active force. But the results appear active.
In a similar way to how the evolutionary process seems very smart and creative, when really it is just poorly adapted individual creatures independently cancelling themselves out blindly, leaving a distributional improvement behind.
There is no additional information needed to explain the effect of quantum "collapse" because it is already explained by the fast bifurcation of disjoint tangles when lots of particles interact in an unorganized manner. It is thermodynamics being thermodynamics.
Anyone attempting to invent a mechanism for "collapse" is like someone trying to explain why the spherical Earth appears "flat" by introducing additional speculative theories. Despite the spherical world theory already explaining why it looks flat locally.
And the only reason to not take the experimentally verified field equations as a plain reading, is the result is "too big" for someone's imagination.
Our everyday experience doesn't limit reality, despite humans having trouble with theories that reveal a bigger reality, over and over and over.
Bluntly: The total field equations preserve information - that is the plain implication and guarantee for having both unions (tangles) and intersections (spangles) of interactions.
Anything else requires a universal firehose of magically appearing information to choose collapses, i.e. particular interactions, in order to explain something already explained. In other words, dressed up voodoo. And by "re-complicating", uh, "re-explaining" the already explained, introduces a ridiculous new puzzle: Where does all that pervasively intrusive relentless injection of information (that determines every single extricable particle interaction!), come from? (Occam is spinning like a particle accelerator in his grave.)
Saying it "Just Happens" is like someone "explaining" their pet version of a creator with "Just Is". It is a psychological stand-in for "Don't Ask Questions".
The Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/entries/qm-manyworlds/) goes into this in some depth, and it seems like the right way to think about it is say that "I" in one branch is a different entity than the "I" in a different branch. I have somehow not been able to grok it yet.
And I agree about the naming. I really dislike the name "many worlds interpretation", which seems to imply that we have to postulate the existence of these additional worlds, whereas in fact they are branches of the wavefunction exactly predicted by standard quantum mechanics.
That's quite a serious issue. And the arguments against it - like Self-Locating Uncertainty, or Zurek's Envariance - look suspiciously circular if you pull them apart.
There's also the issue that if you don't have a mechanism that constrains probability, you can't say anything about how common the world you're in is. Your world may be some kind of lottery-winning statistical freak world which happens to have very unusual properties, and generalising from it is absolutely misleading.
There's no way of testing that, so you end up with something unfalsifiable.
I don’t claim to understand them though. I have tried.
I have no idea what this means.
Is there a bound on anything in reality, in terms of scale? Beyond its own laws?
I am reminded of how often in history, too much time, or too much scale, were unsuccessful arguments against many theories we accept today. Those critiques died without any need for special arguments, because they don't have a logical basis.
Also, there are not a number of many "worlds". That is a reflection of the poor naming. There is an interleaving of all interactions, so if you zoom out, a smeared landscape across all configurations, from the Planck scale up.
Because the connections involve both intersection (entanglement) and union (alternate paths), we get bifurcation of classical sized paths (dense entanglements), while the individual particles continue unconcerned by how they appear to create different classical histories at large scale.
And yes it is experimentally validated. This is the theory that everyone accepts in the lab, even as larger scales of experiment continue to progress.
But some people have difficulty believing/visualizing that it continues to work at larger scales. Despite no scale limitation in the theory, no scale-related violations ever suggested experimentally, and the strong likelihood that scale limitations, if they did exist, would produce new physics in at-scale observations of our cosmos.
Pour water down a hill. Water clings to water, and we have hills that already have lots of correlations. We get streams that break up into multiple streams.
How did one stream end up where it is? It seems like a good question, but it is circular. The stream is defined by where it is. You are here (in some circumstance), because the version of you in this circumstance is you.
A transporter accident that creates several versions of you, on several planets with different colors, doesn't need to explain to each version how they ended up at a planet with their color. Even if for a particular copy, it seems like there should be an answer why they showed up on a planet of a particular specific color. The "why" is just: all paths were taken.
Maybe what I meant was this: if I perform a quantum experiment where the spin measurement of an electron could be spin up or spin down, the future me would end up in one of two branches: I measure spin up, or I measure spin down. There wouldn’t be any possible world where I measure a superposition of spin up and spin down, because such a state is going to decohere rapidly. This makes sense. What I’m unable to grasp is that even though the wave function of the universe contains both branches, “I” somehow experience only one of the two branches.
The answer to that, I guess, is that if the two branches are nearly orthogonal, they will merrily evolve independently of each other. But somehow “I” experience only one of them.
Sorry for the rambling. I’m not able to articulate what I don’t understand.
> The future me would end up in one of two branches: I measure spin up, or I measure spin down.
The future "you's" would each see spin up, and spin down, respectively.
We are just as quantum as what we measure. There isn't a scale where entanglement and superposition turn into something else. No classical vs. quantum atoms.
Just as an up-spin qubit touching an up/down qubit results in an up-up qubit pair in superposition with an up-down superposition, conserving the qubit, when we touch a qubit we get "us"-up and "us"-down versions.
No information is created. None is destroyed. We experience a correlation = "collapse" (both versions of us), but the quantum information just continues on as before, qubit conserved.
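The "us-up and us-down" picture can also be sketched with a reduced density matrix (a toy NumPy calculation, not a full decoherence model): tracing out the qubit leaves the local observer with a 50/50 classical mixture, while the joint state remains pure, so the quantum information is conserved.

```python
import numpy as np

# Joint state after the interaction: (|up,up> + |down,down>)/sqrt(2)
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())  # density matrix of the pure joint state

# Partial trace over the qubit: the state "we" have access to locally
M = rho.reshape(2, 2, 2, 2)                # indices i, k, j, l for |i k><j l|
rho_local = M[:, 0, :, 0] + M[:, 1, :, 1]  # sum over the traced index

print(np.round(rho_local.real, 2))  # diag(0.5, 0.5): a 50/50 mixture locally
print(np.isclose(np.trace(rho @ rho).real, 1.0))  # joint state is still pure
```

Locally each branch looks like a definite, random outcome; globally nothing collapsed and the purity of the joint state is untouched.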
The writing of Chalmers and its consequences have been a catastrophe for philosophy.
The hard problem is that there is such a feeling at all.
In essence, you're asking why there's an inside to being a self-modeling system. But "inside" isn't something extraneous, something additional -- rather, it's what "self-modeling" means.
Really the "hard problem" has a very easy answer, but it's a physical/functional answer, and dualists and obscurantists simply don't like it.
As an aside, isn't it more weird that violet and purple look indistinguishable despite being physically so different? It's said that this is because our L-cones (red-sensitive) have a secondary sensitivity peak at short wavelengths. So violet light triggers S-cones + a bit of L-cone. Purple light (red + blue) also triggers S-cones + L-cones. Similar activation pattern = same quale. It's all functional/physical.
Read Tom Cuda "Against Neural Chauvinism." Also Daniel Dennett.
Yes some chemical change happened there, but so what.
These are not very unusual chemical reactions; they happen, and are happening, everywhere. Do all the chemical reactions going on generate an experience for some experiencer?
I mean, there is a credible first-person answer to that question of yours, which each man can answer for himself.
But considered more seriously, the "hard problem" is an artifact of treating experience as a separate thing that needs to be generated. If you accept that self-modeling systems bounded in space and time exist, you've already accepted that experience exists -- because experience is what such a system is, from the inside. There's no second step where experience gets added. The question "why is there experience?" is exactly akin to "Why is there an interior to four walls and a roof?" The interior isn't a separate thing; it's necessarily constitutive.
I'm using "consciousness", "subjective experiences", "senses" and "qualia" as synonyms here, but if you see a difference, please mention it. Obviously "consciousness" has many definitions that have nothing to do with the "hard problem of consciousness", so I'm using it in this sense here. I'll use "qualia" as it's the word that relates most to the hard problem of consciousness. You can substitute it with "sense"/"senses" if you like.
Is it falsifiable?
If you have a theory that seems unassailable by any logic, that's a good signal it is tautological and not very useful.
A quantum computer is such a macroscopic state.