
It is commonly held in modern physics that it is impossible to predict when a particular radioactive atom will decay. This is often taken to mean that there is no particular reason why the atom decays at time t rather than at some other time.

The Principle of Sufficient Reason (PSR) states that everything must have a reason. Thus, it appears that the standard interpretation of quantum mechanics violates this principle.

Now, when considering groups of atoms, we do manage to predict certain things about them. For example, we know that radon has a half-life of 3.8 days.

Is there a reason why the half-life of radon is 3.8 days as opposed to, say, 5 or 10? If this half-life is determined by historical frequencies of data and nothing else, does that mean there is no reason for this half-life either?
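To make the numbers concrete, here is a small illustrative calculation (assuming the standard exponential-decay model, under which a 3.8-day half-life fixes a definite per-atom decay probability per unit time):

```python
import math

# Illustrative only: under the exponential-decay model, a half-life fixes a
# per-atom decay probability per unit time.
half_life_days = 3.8                             # quoted half-life of radon
decay_constant = math.log(2) / half_life_days    # lambda, roughly 0.18 per day

# Probability that one given atom decays within the next day:
p_one_day = 1 - math.exp(-decay_constant)
print(f"decay constant: {decay_constant:.3f} per day")
print(f"P(a given atom decays within 1 day): {p_one_day:.3f}")  # ~0.167
```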

How does this extend to the macro world? Sure, at the subatomic level, certain things may be happening for no reason. But in the macro world, things do seem to be happening for a reason. For example, a field may have a crater because of a bomb.

If there is a reason for the half-life mentioned earlier, and for many other things that happen in the macro world, how does something that occurs for no reason (at the subatomic level) manage to “create” things in the macro world that do seem to occur for a reason?

In other words, if quantum mechanics is the fundamental fabric of reality, how do blind/random/“reasonless” processes create lawful/seemingly determined/“reasoned” processes?

  • Concerning the question of how the classical world emerges from quantum mechanics, see https://plato.stanford.edu/entries/qm-decoherence/. The mechanism is called "decoherence", but understanding it requires some background in quantum mechanics. I recommend also posting the question, in condensed form, on https://physics.stackexchange.com/questions – Jo Wehler Oct 20 '23 at 14:35
  • Once PSR fails at the atomic level it also fails macroscopically, in contraptions like Geiger counters, for example. That the macroworld seems "lawful", outside of strategically chosen characteristics in carefully constrained models, is a bold proposition. As for random processes creating some laws, it happens the same way as with the law of large numbers in coin tosses - due to coarse-grained averaging over large numbers of microscopic events. – Conifold Oct 20 '23 at 17:25
  • @Conifold But when modeling coins, we admit that we can predict individual coin tosses. And not just theoretically, but practically, as shown in experiments. We also have reason to think heads has a 50% probability before it is tossed due to the shape of the coin and knowing how gravity works with other variables remaining constant. We don’t have such a reason for figuring out probabilities of a radioactive atom decaying before it actually decays, to my knowledge. So the law of large numbers in the coin case resulting in ordered laws seems to make more sense than in the QM case. –  Oct 20 '23 at 17:37
  • Also, when you say it’s a bold proposition, are you saying that the world does not contain laws for the most part in the macroscopic world? Contraptions like the Geiger Counter seem to be exceptions rather than the ordinary in the macro world. –  Oct 20 '23 at 17:37
  • That we can predict carefully staged individual coin tosses makes no difference to the law of large numbers, you will get the same law running double slit experiments instead, which we cannot predict individually even in principle. Macroworld is generally pretty random in many ways, and much effort is put into containing that randomness for human benefit. Any macroscopic system sensitive to initial conditions, when exposed to quantum fluctuations, is a naturally occurring variant of the Geiger counter. That is how ionizing radiation induces mutations in organisms, for example. – Conifold Oct 20 '23 at 18:15
  • @Conifold The law of large numbers occurs because each individual random process is considered the same. If each coin toss is modeled as a random process, it is assumed that the process producing each coin toss has the exact same probabilistic “structure”. But isn’t the assumption of this constancy a law of sorts? Without this, I fail to see why any sort of pattern would emerge in the macro world. The question therefore is why or how this constancy exists when each individual atomic decay process, for example, is blind and occurring for “no reason”. –  Oct 20 '23 at 23:07
  • To get something you have to assume something, that is a triviality. But you will no more get PSR out of "patterns" than Descartes got the world out of cogito. The "law of sorts" is individually indeterministic, but it does not mean "blind". It has "constancy", "patterns" and even causes, just very insufficient ones. Some aspects are "for a reason", others "for no reason". And there is no more question why than why not. It doesn't have to be all the way one way or the other, and no special explanation is needed for that. Reasons are a human preoccupation; God or nature owe us none. – Conifold Oct 20 '23 at 23:35
  • @Conifold Interesting comment. I’ve never thought of it that way. We certainly are instilled with the notion of looking for reasons everywhere. When you say there is no more question why than why not, do you mean that when X happens, the question “why does X happen?” is just as valid as “why can’t it just happen for no reason?” if no reason can be found? –  Oct 21 '23 at 12:23
  • There's an implicit assumption baked into the language that, if you're delicate enough in your handling, you can take the measurement device out of the system without changing the system measured, so that "this happened" and "I measured this to happen" become synonymous. The assumption is false. – g s Oct 22 '23 at 23:11
  • How do you know that? @gs At best, you can probably argue that the assumption is untestable, not that it’s false. –  Oct 23 '23 at 01:55
  • @thinkingman see wiki: measurement – g s Oct 23 '23 at 15:48

2 Answers


I can think of two approaches to the question. Firstly, causes and effects tend to be chained, so that an effect becomes the cause of another effect and so on. So working back, we might, for example, have the following scenario...

Q Why did the employees at the nuclear power plant put on their hazmat suits?

A Because an alarm went off (sufficient reason).

Q Why did the alarm go off?

A Because it detected a signal from a sensor (sufficient reason).

Q Why did the sensor emit the signal?

A Because it detected radioactivity (sufficient reason).

Q Why did it detect radioactivity?

A Because a nucleus decayed (sufficient reason).

Q Why did the nucleus decay?

A No idea- it just did.

In that example, we have a chain of events that begins with the random, unpredictable decay of a nucleus, which seems to violate PSR. So you can either take the view that every step in the chain had a sufficient reason apart from the first, or you can consider the chain as a whole and say that the employees put their hazmat suits on for no sufficient reason. Most people would take the first view.

The other approach is to consider that while a given quantum event, such as the decay of a nucleus, doesn't seem to have a reason and seems undetermined, a large set of such events can still have an aggregate effect that is very predictable and determined.
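For illustration, here is a toy simulation of that second approach (a sketch only; the 1% per-step decay probability is an arbitrary choice): each nucleus decays independently and unpredictably, yet the time for half the population to decay comes out essentially the same on every run, close to the theoretical ln(2)/0.01 ≈ 69 steps.

```python
import random

# Toy model: every surviving nucleus decays with the same small probability
# per time step; individually random, collectively very regular.
random.seed(0)
n_atoms = 100_000
p_per_step = 0.01   # arbitrary, illustrative per-step decay probability

remaining = n_atoms
steps = 0
while remaining > n_atoms / 2:
    decays = sum(1 for _ in range(remaining) if random.random() < p_per_step)
    remaining -= decays
    steps += 1

print(f"steps until half the nuclei decayed: {steps}")  # ~69 on repeated runs
```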

Marco Ocram
  • Yeah I guess I just have a hard time wrapping my head around the concept of individual events being “fundamentally” random yet still resulting in predictable behavior as an aggregate. All other everyday examples like coins and dice have probabilities that theoretically make sense. Before tossing a die or coin, one can reasonably posit its probabilities given the shape of the die or coin, its sides, and what we know about how things move in the world. In the case of an atom, its particular half-life is determined from data, not posited in advance –  Oct 21 '23 at 22:55
  • @thinkingman I don't think that Marco intended to say that, but it's false. Decay is a barrier penetration (tunneling) event, whose probability, hence frequency per unit time, hence half-life, can be obtained theoretically, either by making a few simplifying assumptions or (hypothetically) by grinding out the whole wave function from first principles. See e.g. the Gamow model for alpha decay. – g s Oct 25 '23 at 16:37

The reason that indeterministic micro-processes produce deterministic macro-processes is simply the law of large numbers. When you flip a coin, the outcome is random. When you flip 6x10^23 coins (roughly the number of particles in a mole), the proportion of heads is practically guaranteed to be extremely close to 50-50.
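As a rough illustration (the sample sizes here are tiny compared with a mole, but the trend is the point): individual flips are unpredictable, yet the fraction of heads concentrates ever more tightly around 0.5 as the number of flips grows.

```python
import random

# Each flip is random, but the aggregate fraction of heads converges on 0.5.
random.seed(0)
for n in (10, 1_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} flips: fraction of heads = {heads / n:.4f}")
```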

Jumboman