I'm going to talk about quantum physics again. As readers know, I studied a little physics at university level in 2005 and, over the last little while, I have been trying to fill in some gaps in my knowledge about quantum mechanics by watching sometimes slightly misleading educational videos on Youtube, reading Wikipedia, and referring to the Stanford Encyclopaedia of Philosophy entry on Heisenberg's Uncertainty Principle. The Encyclopaedia entry is excellent but I suspect that the Wikipedia entry I looked at, concerning the Particle in a Box thought experiment, is at best obfuscatory and might even be just plain wrong. In talking about quantum physics I am going to attempt to describe it in a way that is reasonably clear to people who only have a high school understanding of physics but I also have a reasonably innovative idea, the seed of which occurred to me a long time ago and which I have been developing in this blog for some time, that may even interest people who have done proper degrees in physics. In this respect this essay will resemble an essay I wrote last year, "Quantum Physics for Dummies and a New Idea". It may be helpful to read this earlier essay first. What I intend to do in my main argument is to show that there is an apparent contradiction in the laws of physics which forces us to choose between the principle that indeterminacy is a necessary feature of quantum mechanics and the Law of Conservation of Momentum. I do not pretend to be an expert on physics and there is much I don't know about the mathematics and proposed interpretations of quantum mechanics but, as I said in the essay "Evolution, Ideas, and Hiveminds", what I can do is take what I know and think through it with some semblance of rationality. In the first part of the essay I will describe what I know about the Particle in a Box experiment and in the second part, the most important part, I will present my main argument.
Let's start with the core idea. A fundamental principle in quantum mechanics is that sometimes it is better to describe things like electrons and photons as particles and sometimes it is better to describe them in terms of waves. The simplest waves are sine waves and cosine waves but many other phenomena can be described as wave-like as long as they are periodic. We can get these other types of waves by adding together a lot of sine waves with different wavelengths, frequencies, and phase shifts. It is exactly the same as when we talk about musical notes not only having 'pitch' but also 'timbre'. If we add together an infinite number of waves that all have crests at or near a single point we can get something very localised in space – we call this a 'wave packet'. A wave packet is not periodic. Wave packets are very close to being particles, although, importantly, the waves that together contribute to the wave packet exist throughout all space and time and the speeds of these individual waves do not have to be the same as the speed of the wave packet. A great deal of quantum physics follows from this wave-particle duality.
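To make the wave-packet idea a little more concrete, here is a small Python sketch of my own (all the numbers in it are arbitrary choices): it adds together a few hundred cosine waves whose wavenumbers cluster around a central value and are weighted by a bell curve, and the sum turns out to be sharply localised around a single point even though every individual wave extends across the whole axis.

```python
import numpy as np

# Illustrative wave packet: a Gaussian-weighted sum of cosine waves.
# All numbers here are arbitrary choices made for the sake of the picture.
x = np.linspace(-50.0, 50.0, 2001)       # positions
k0, sigma_k = 2.0, 0.2                   # central wavenumber and spread
ks = np.linspace(k0 - 5 * sigma_k, k0 + 5 * sigma_k, 400)
weights = np.exp(-((ks - k0) ** 2) / (2 * sigma_k ** 2))

# Each cosine wave has a crest at x = 0, so the waves reinforce there
# and cancel almost everywhere else.
packet = sum(w * np.cos(k * x) for w, k in zip(weights, ks))
packet /= np.abs(packet).max()

print("amplitude at x = 0 :", packet[np.argmin(np.abs(x))])        # close to 1, the packet's centre
print("amplitude at x = 40:", packet[np.argmin(np.abs(x - 40))])   # essentially 0, far from the centre
```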
What we want to do now is to imagine situations in which an electron or photon is subject to boundary conditions. If these boundaries do not change with respect to time, we can use an equation known as the time-independent Schrodinger equation to find the wave function – I will describe what I mean by the wave function as we go along. The simplest such case is the Particle in a Box thought experiment, a thought experiment often pulled out in introductory courses in quantum mechanics. Imagine an electron in a one-dimensional box with impenetrable walls on either side. We make the walls impenetrable by stipulating that the electron could only escape the box if it had infinite energy, something that is impossible; we also assume that there are no varying electromagnetic fields inside the box, so that the potential energy associated with the electron is always constant inside the box. To find the wave function, we try to find solutions to the Schrodinger equation that are consistent with these boundary conditions. Because the electron cannot be found at or beyond the impenetrable walls, the wave function must be zero at both walls; this condition and the fact that the potential energy inside the box is constant lead us to conclude that the wave function must be a simple sinusoidal wave. (I won't go into the mathematical proof here but readers can find proofs on the Internet.) I can't draw pictures in this blog but visualise the first half of a sine wave going up from zero at the left hand wall, peaking in the middle of the box, and then descending back to zero at the right hand wall. In this solution the wavelength is exactly 2L, twice the width of the box. However this is not the only solution. We could imagine a sine wave that has exactly half the wavelength of the first we considered or a third or a quarter and so on; these solutions will all also work. There is an infinite family of solutions but all these solutions are discrete in that their wavenumbers are all n times the wavenumber of the simplest solution, where n is any natural number. (Wavenumber is simply 2𝝿 divided by the wavelength and the term we use for the solutions is eigenstates.) Mathematically, when the width of the box is L, the equation for the wave function associated with a given eigenstate is √(2/L) multiplied by sin(n𝝿x/L) where x is the coordinate on the x axis and has the value 0 at the left hand wall.
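For readers who want to check the formula for themselves, here is a small sketch (mine, using arbitrary units with L = 1) that builds these eigenstates and confirms that each one is zero at both walls and that the area under its square comes out as one.

```python
import numpy as np

L = 1.0                                   # width of the box (arbitrary units)
x = np.linspace(0.0, L, 10001)
dx = x[1] - x[0]

def psi(n, x):
    """Eigenstate n of the particle in a one-dimensional box."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

for n in (1, 2, 3):
    values = psi(n, x)
    total = np.sum(values ** 2) * dx      # area under the squared wave function
    print(f"n = {n}: value at walls {values[0]:.3f} and {values[-1]:.3f}, "
          f"total probability {total:.3f}")
```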
What is the physical significance of the wave function? The wave function by itself does not represent anything measurable but if we apply the right operations to it we can find all of the measurable properties of the associated particle such as its average expected position, momentum, and energy. Perhaps the most helpful way to make sense of what the wave function does is to talk about probabilities. If we want to find the probability that the electron is between two points in the box, a and b, we first square the wave function and then find the area under this new curve between the two points a and b. This area equals the probability of finding the electron there. The total area under the whole curve must equal 1 because we are assuming that exactly one electron exists in the box. What this means is that for the n = 1 case, the probability of finding the electron near the middle of the box is much greater than the probability of finding it near the walls. However there is a surprising twist. If n = 2, the wave function is zero in the middle of the box: this point is called a node. In general, for any eigenstate associated with the Particle in a Box thought experiment, there are always n – 1 nodes. The fact that the wave function can sometimes be zero means that if we choose an arbitrarily small region around a node, the probability of finding the electron there gets arbitrarily close to zero.
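Here is another little sketch of my own (again arbitrary units, L = 1) showing how these probabilities can be estimated numerically: for the n = 1 state most of the probability sits near the middle of the box, and for the n = 2 state the probability of finding the electron in a narrow strip around the central node is vanishingly small.

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 100001)
dx = x[1] - x[0]

def prob(n, a, b):
    """Probability that the particle in eigenstate n lies between a and b."""
    mask = (x >= a) & (x <= b)
    density = (2.0 / L) * np.sin(n * np.pi * x[mask] / L) ** 2   # squared wave function
    return np.sum(density) * dx

print("n = 1, middle fifth of the box:", round(prob(1, 0.4 * L, 0.6 * L), 3))
print("n = 1, fifth nearest the wall :", round(prob(1, 0.0, 0.2 * L), 3))
print("n = 2, narrow strip at centre :", round(prob(2, 0.49 * L, 0.51 * L), 5))
print("n = 1, whole box              :", round(prob(1, 0.0, L), 3))
```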
This raises a question that has often occupied me in the past. If the probability of finding an electron in the middle of the box is zero, how does the electron move from one side of the box to the other? This puzzle vexed me because it is tempting to think that behind the mathematics there is a real world in which the electron is flying from one side of the box to the other and back again, bouncing off the walls. This puzzle appears if we assume that the electron is only in one eigenstate. However what I didn't fully understand in the past but have a better understanding of now is the notion of superposition. By far the most helpful Youtube video I watched is "Superposition for the particle in a box – David Miller", a video which has criminally few views but which I recommend to readers particularly because of the visual animations he includes in the second part of the video. The idea here is that the total wave function can be a sum of different eigenstates properly 'normalised' so that the area under the square of the total wave function still equals one. We might suppose that the wave function is composed of the n=1 eigenstate and the n=2 eigenstate and then assign coefficients to each eigenstate, c1 and c2, such that c1 squared plus c2 squared equals one. If we suppose these two coefficients are equal, this is like saying that each of the two eigenstates is equally probable. These eigenstates need time components because this superposition changes with time and so we have to use the time-dependent Schrodinger equation. What we find now is that if we take the absolute square of the wave function to find the probabilities it no longer has a node and now alternates between swelling in the left side of the box and the right side of the box as time progresses. Although Miller does not say this, this leads me to make the following conjecture, which I hope experts might agree with. Let us suppose that n is very very large as we might expect with a macroscopic system and also that there is uncertainty about the value of n, that there are very very many other possible values of n. We might suppose that all the possible values of n lie on a bell curve with our preferred value at the crest and choose our coefficients accordingly. Our wave function is then the superposition of all of these very many eigenstates. We might then find that the wave function is now characterised by a very sharp spike or pulse, a wave packet, travelling from left to right and back again with constant velocity given by p/m as would be the case in Newtonian mechanics. If this conjecture is correct, it would suggest that it is waves that are really fundamental and that particles increasingly emerge from these waves as we increase the size of the system, increase the quantum numbers of the eigenstates and the number of eigenstates we are superposing.
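To see the sloshing for myself I put together the following sketch (arbitrary units with ħ = m = L = 1 and equal coefficients 1/√2 are my own choices): it evolves the equal superposition of the n = 1 and n = 2 eigenstates with the usual time factors and tracks the probability of finding the electron in the left half of the box, which swings back and forth as time progresses.

```python
import numpy as np

# Arbitrary units: hbar = m = L = 1, equal superposition of n = 1 and n = 2.
L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def psi_n(n):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def energy(n):
    return (n * np.pi) ** 2 / 2.0          # E_n = n^2 pi^2 hbar^2 / (2 m L^2) in these units

def prob_left_half(t):
    """Probability of finding the electron in the left half of the box at time t."""
    Psi = (psi_n(1) * np.exp(-1j * energy(1) * t)
           + psi_n(2) * np.exp(-1j * energy(2) * t)) / np.sqrt(2.0)
    density = np.abs(Psi) ** 2
    return np.sum(density[x <= L / 2]) * dx

for t in (0.0, 0.2, 0.4, 0.6):
    print(f"t = {t}: P(left half) = {prob_left_half(t):.3f}")
```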
I want now to talk about the momentum of the electron in the box. At high school we learn that momentum is mass times velocity, mv. This enables us to define kinetic energy as p²/2m where p is momentum. Then in the nineteenth century physicists discovered that light, which has no mass, can transport momentum through space as well, and with Einstein's special theory of relativity in 1905, we arrived at a new definition relating momentum to energy, E² = p²c² + (mc²)². Photons, the elementary particles Einstein postulated, are massless and so simply have a momentum p equal to E/c. In 1923, Louis de Broglie proposed yet another definition for momentum by arguing that particles have characteristic wavelengths and that the momentum of a particle is related to its wavelength by the simple equation: p = h/λ where p is the momentum, h is a very important constant called Planck's constant, and λ is its wavelength. Note that even though momentum is a vector, wavelength is a scalar. If we apply the de Broglie equation to the Particle in a Box thought experiment, we find that the magnitude of the momentum must be nh/2L inside the box and zero outside it because for each eigenstate there is a single sine wave. It is reasonable to suppose that the momentum of the particle, insofar as we can speak of a particle at all, if in a single eigenstate, is either nh/2L going left or nh/2L going right with equal probability and so the average expected momentum must be zero. In fact this is fairly easy to prove mathematically. We can then use the Newtonian definition relating energy to momentum, E = p²/2m, to find the allowed values for the energy of the electron. (Although the Einsteinian definition is superior, it is the Newtonian definition that is typically used.) If we have a superposition, time becomes involved again and so it is more complicated to work out the momentum.
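To give these formulas some physical scale, here is a short sketch (my own, using standard values for the constants and a made-up box width of one nanometre) that takes p = nh/2L and E = p²/2m and computes the first few allowed energies of an electron in such a box.

```python
h = 6.626e-34          # Planck's constant (J s)
m_e = 9.109e-31        # electron mass (kg)
eV = 1.602e-19         # one electronvolt in joules
L = 1e-9               # a one-nanometre box (made-up width for illustration)

for n in (1, 2, 3):
    p = n * h / (2 * L)                 # de Broglie momentum for wavelength 2L/n
    E = p ** 2 / (2 * m_e)              # Newtonian kinetic energy E = p^2 / 2m
    print(f"n = {n}: E = {E / eV:.2f} eV")
```

The levels come out at a fraction of an electronvolt and grow as n², which gives a sense of how far removed such a box is from anything in everyday experience.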
However we seem to have a problem with this simple formula. The problem relates to another key idea in quantum physics: the Heisenberg Uncertainty Principle, a Principle that is very important to my main argument and which I will come back to later. This principle says that the uncertainty in the momentum of a particle when measured times the uncertainty in the position of a particle when measured can never be less than ℏ/2, where ℏ is the reduced Planck constant or h/2𝝿. Schematically, ΔpΔx ≥ ℏ/2. Uncertainty here can be defined as the standard deviation in momentum or position if we could somehow repeat the same measurement on identically prepared particles many times; this is not the only way to define uncertainty (it works best with bell curves) but it is the one physicists usually accept. The problem is that if the momentum is known precisely then the uncertainty in position must be infinite but the uncertainty in position cannot be infinite because the particle must be somewhere inside the box. If the uncertainty in position is not infinite, then there must be uncertainty related to the momentum and so this seems to suggest that momentum cannot be the simple expression we arrived at above, the one given by the de Broglie equation.
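In fact the box eigenstates let us check this inequality directly. The sketch below (mine, arbitrary units with ħ = L = 1) computes the standard deviations of position and momentum for the ground state by numerical integration, using the usual expectation-value recipe, and the product comes out comfortably above ℏ/2.

```python
import numpy as np

hbar, L = 1.0, 1.0
x = np.linspace(0.0, L, 100001)
dx = x[1] - x[0]

n = 1
psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# Expectation values by numerical integration (psi is real here).
mean_x = np.sum(x * psi ** 2) * dx
mean_x2 = np.sum(x ** 2 * psi ** 2) * dx
dpsi = np.gradient(psi, dx)                        # numerical derivative of psi
mean_p2 = hbar ** 2 * np.sum(dpsi ** 2) * dx       # <p^2> = hbar^2 * integral of (dpsi/dx)^2
mean_p = 0.0                                       # <p> vanishes for these standing waves

dx_unc = np.sqrt(mean_x2 - mean_x ** 2)
dp_unc = np.sqrt(mean_p2 - mean_p ** 2)

print("delta x * delta p =", round(dx_unc * dp_unc, 3), "  vs  hbar/2 =", hbar / 2)
```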
Faced with problems such as this, quantum physicists have come up with yet another way of working out momentum. It involves something known as the momentum operator: we take the partial derivative of the wave function with respect to space and multiply it by −iℏ. Associated with this operator is a second function, a kind of momentum wave function. It does not give an exact value for the momentum but, similar to the operation we can carry out on the ordinary wave function, it seems that we should be able to find the probability that the momentum lies between two fixed values, p1 and p2, by taking the absolute square of this second wave function and finding the area under the curve between these values. A difficulty I have faced when thinking about the momentum wave function is that the variable in the first wave function is x (we are assuming time invariance) whereas the variable in the second function is p and I am unsure how we exchange variables or how we change the bounds of integration. I have gone through innumerable videos on Youtube and read a few conversations on the Physics Stack Exchange website and have yet to find a satisfactory answer to this question. It seems everyone is as confused about this as I am.
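For what it is worth, the most common answer I have come across is that the momentum wave function is the Fourier transform of the ordinary wave function. The sketch below (my own attempt, arbitrary units with ħ = L = 1, and I have picked the n = 5 eigenstate) computes this transform numerically; the resulting momentum distribution is spread out, with two broad humps near ±nh/2L rather than two sharp spikes, which seems relevant to the Wikipedia treatment I discuss next.

```python
import numpy as np

hbar, L, n = 1.0, 1.0, 5
x = np.linspace(0.0, L, 4001)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# Momentum wave function as a Fourier transform (my assumption about the standard recipe):
# phi(p) = (2*pi*hbar)^(-1/2) * integral of psi(x) * exp(-i p x / hbar) dx
p_values = np.linspace(-40.0, 40.0, 1601)
dp = p_values[1] - p_values[0]
phi = np.array([np.sum(psi * np.exp(-1j * p * x / hbar)) * dx
                for p in p_values]) / np.sqrt(2 * np.pi * hbar)

density = np.abs(phi) ** 2
print("total probability over this p range:", round(np.sum(density) * dp, 3))
print("most probable |p|:", round(abs(p_values[np.argmax(density)]), 1),
      "  (n*h/2L is", round(n * np.pi * hbar / L, 1), "in these units)")
```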
It may be useful to consider the Wikipedia entry on the Particle in a Box. The equation for the momentum function given in it, presumably determined using the momentum operator, is complex, perhaps unnecessarily complex. I'll make some important points about it. Everyone agrees that for any eigenstate there is a fixed energy associated with the particle which, if we define E as p²/2m, would lead us to suppose that the momentum is also fixed. But the Wikipedia entry quite clearly says the momentum can take any value at all before measurement. The writers argue that the Newtonian relation does not hold with respect to the Particle in a Box, even though it is supposed to hold in many other quantum experiments. It is important to also note that the Einsteinian definition of momentum is not used either – physicists often analyse quantum situations from a kind of Newtonian perspective even though Dirac uncovered a better version of the Schrodinger equation that is relativistic, perhaps because using the Dirac equation is just too unwieldy in most situations and so it is easier to use the older Newtonian way of defining momentum. Finally it is also worth noting that the Wikipedia article does not use the de Broglie relation either – although momentum is still defined in terms of a wavelength, this wavelength is not the same as the one we worked out when we applied the Schrodinger equation to the particle in the box originally. Our original treatment suggested that the wavelength of a particle in an n eigenstate is 2L/n but this is not the wavelength used in the article's momentum calculation. It is not clear to me how the physicists who contributed to this entry arrived at another value for the wavelength of the electron, a wavelength that does not seem to clearly follow from an analysis of the original situation, although, if the treatment is correct, it presumably follows from a Fourier transform of the original wave function. I admit I currently don't fully understand Fourier transforms.
There is one final point worth making about the Wikipedia entry. There is a discussion of the Uncertainty Principle and what I want to focus on is that the uncertainty in momentum, once we take the square root of the variance and swap the reduced Planck constant for the normal Planck constant, is nh/2L. If this is the uncertainty we could suppose from it that the momentum, although zero on average, is indeed either nh/2L to the left or nh/2L to the right as we seemed to conclude based on our original calculations, calculations we made assuming the de Broglie relation did indeed hold, and that the Newtonian definition E = p²/2m also did indeed hold. Although the Wikipedia writers claim that the momentum is continuous for any eigenstate, it seems worth noting that the uncertainty calculated is consistent with this other treatment of the thought experiment, the treatment I originally learned way back in 2005.
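This particular value is easy to check numerically. Here is a little sketch (mine, arbitrary units with ħ = L = 1) that computes the momentum uncertainty directly from the wave function for the first few eigenstates; it comes out as nπℏ/L each time, which is the same thing as nh/2L.

```python
import numpy as np

hbar, L = 1.0, 1.0
x = np.linspace(0.0, L, 200001)
dx = x[1] - x[0]

for n in (1, 2, 3):
    psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
    dpsi = np.gradient(psi, dx)
    # <p> = 0 for these standing waves, so delta p = sqrt(<p^2>).
    p2 = hbar ** 2 * np.sum(dpsi ** 2) * dx
    print(f"n = {n}: delta p = {np.sqrt(p2):.4f}   n*pi*hbar/L = {n * np.pi * hbar / L:.4f}")
```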
So I admit I am somewhat confused about quantum physics, even in the simple case of a Particle in a Box. However the main argument I wish to present does not rely on a sophisticated understanding of quantum physics. Rather it follows from some simple physical premises that physicists usually accept.
My main argument concerns measurements, the Uncertainty Principle, and Conservation of Momentum. Suppose we carry out the particle in a box experiment in the real world. The box we are considering is three-dimensional rather than one-dimensional and it may not be reasonable to suppose that the potential energy is infinite outside the box. Nevertheless the properties of the system should still be calculable in principle. What we do first is perform measurements on the box that will provide us with information about the electron. That is, we measure the width, height, and length of the box and, based on empirical data, we work out the potential energies both inside and outside the box. We also know that there is exactly one electron in the box and we know, of course, the mass of an electron. Although we do not know the eigenstate of the electron, or if the wave function is a superposition of multiple eigenstates, I claim that by applying the appropriate equations to the results of these measurements, we can set limits on the potential wave functions the electron must have and on its properties: its possible positions, momenta, and energies. Although the measurements of the box's dimensions occur at a particular moment, we assume that these dimensions are completely invariant with respect to time: this means that because the uncertainty in time is infinite, we can know with absolute precision the possible energy levels associated with the electron. (This is because the Uncertainty Principle applies to time and energy in the same way that it applies to spatial position and momentum.) Then, at some particular later time, we perform another measurement. We irradiate the box with photons at this time and from this determine where the electron is at this particular moment. Then a little later we perform yet another measurement: we again irradiate the box and again work out very precisely where the electron is.
What I am claiming is that with every measurement we must update our model of the wave function inside the box. The wave function must change with every additional measurement. Whereas the probabilities associated with position, momentum, and energy were originally based on our measurements of the box, probabilities given by a wave function somewhat comparable to the wave function of the Particle in a One-Dimensional Box, when we perform the measurement at t1, we must suppose now that the wave function has a very sharp spike around the point where we find the electron at this time and is almost zero everywhere else. This is because the probability of finding it very near this point must be almost a certainty. If there is great certainty about the particle's position, there must be great uncertainty in its momentum. As a result of this measurement the wave function has changed, our model has changed. This has consequences for our predictions not only concerning where the particle will be in the future but also where it was in the past. Because this measurement occurs at a particular time, we can no longer employ the time-independent Schrodinger equation; whatever equation we use to determine the particle's future and past positions must involve time as well as space. Then when we perform the next measurement at t2 we must change the model once again.
This proposal, a proposal I have hinted at in earlier essays, is that each new measurement changes the wave function. It raises some curly questions. It involves a view of wave functions, and probability distributions more generally, as being incompletely described models that can be improved upon by more measurements, more evidence; this might seem to suggest that if we performed enough measurements on a system, if we could gather all of the information about a given system or situation, all of our predictions concerning it would be certain. This seems to rub up against the idea that quantum physics is fundamentally probabilistic, indeterminate. To fully work out this proposal you would need a different theory of probability than the one often used by physicists. This is something I have thought about for a while; in the previous essay when I talked about Bayes' Theorem I should have been clearer about my own view of probability but I have still not worked out my own theory of it sufficiently well to clearly articulate here. The most important issue raised by this imagined experiment that I want to talk about in this essay however concerns Conservation of Momentum. Let us assume that we know that the particle's momentum has some certain range of values before the first irradiation; it seems that as a result of the measurement its momentum 'randomly' changes. There are two possible explanations. Either momentum is not absolutely conserved or the photons with which we have irradiated the electron have imparted momentum to it. Furthermore, by finding out very precisely where the electron is we have supposedly lost information about its momentum, but, supposing we found the electron to be at point A at t1 and at point B at t2, it seems that, if momentum is conserved, its velocity between these two measurements must be exactly (B – A)/(t2 – t1) and can thus be precisely calculated. Either momentum can randomly change between measurements, violating Conservation of Momentum, or, although the Heisenberg Uncertainty Principle might indeed apply to particular measurements, we can in a sense violate this principle, hack it, by taking into account multiple measurements.
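To put rough numbers on this worry, here is a back-of-the-envelope sketch of my own (the nanometre precision and the microsecond gap are made-up but not unreasonable values): the momentum uncertainty demanded by the Uncertainty Principle for a single position measurement comes out many orders of magnitude larger than the uncertainty in the momentum inferred from the two positions, which is exactly the tension I am pointing at.

```python
hbar = 1.055e-34      # reduced Planck constant (J s)
m_e = 9.109e-31       # electron mass (kg)

dx = 1e-9             # assumed precision of each position measurement (1 nm)
dt = 1e-6             # assumed time between the two measurements (1 microsecond)

# Single-measurement bound from the Uncertainty Principle.
dp_heisenberg = hbar / (2 * dx)

# If momentum is conserved between the measurements, the velocity is (B - A)/(t2 - t1);
# each position is uncertain by about dx, so the inferred momentum is uncertain
# by roughly m * 2 * dx / dt.
dp_inferred = m_e * 2 * dx / dt

print(f"Heisenberg bound on delta p  : {dp_heisenberg:.2e} kg m/s")
print(f"delta p inferred from A and B: {dp_inferred:.2e} kg m/s")
```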
This issue concerning multiple measurements is the reason I have named this essay "Quantum Mechanics and Multiple Measurements". If you have a look at the Stanford Encyclopaedia entry on the Uncertainty Principle, you'll find that Heisenberg himself conceded that it was possible his principle could theoretically be violated by, as it were, bringing multiple measurements of a particle or system made at different times into the calculations. Unfortunately the Encyclopaedia entry does not explain how physicists since Heisenberg have solved this apparent paradox or even if they have.
I want now to discuss these ideas in relation to another quantum experiment, the diffraction experiment, the example I used in "Quantum Physics for Dummies". In this experiment, going from left to right, we have an emitter, a screen with a very small aperture, and then some distance away another screen. We fire electrons from the emitter through the aperture and then record where these electrons land on the second screen. There is, as readers will remember, a very simple equation that can be used to work out the probabilities concerning where each electron will arrive at the second screen. Some electrons may go in a straight line from the emitter through the aperture to the second screen but some will be deflected upwards and others downwards. If we treat the electrons as particles it seems that they usually pick up either positive or negative vertical momentum either at the aperture or somewhere else along the way. Where does this momentum come from? It seems, again, either that momentum is not absolutely conserved or that momentum has been imparted to each electron somehow. The electron is, from a quantum mechanical perspective, not really localised in space and so one might reasonably suppose that random vibrations in the molecules surrounding the aperture might communicate momentum to it in an unpredictable manner. But it is difficult to reconcile this hypothesis with the simplicity of the equation that describes any diffraction experiment. The other worry is also relevant here. When the electron passes through the aperture this event can be considered a kind of measurement because we know with some degree of precision the particle's vertical position; consequently there is uncertainty about its vertical momentum. The diffraction pattern may be partly explainable in terms of the Uncertainty Principle. However if we observe, measure, where and when any individual particle lands on the second screen with great precision and assume that momentum is conserved, we can in the same way as discussed earlier calculate the momentum the particle must have had between the aperture and the screen – unless its momentum has randomly changed en route. It seems, again, either that momentum is not always conserved or that we can violate the Uncertainty Principle by combining the results of multiple measurements.
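I won't restate the exact equation from the earlier essay here, but the rough size of the effect can be estimated from the Uncertainty Principle alone: the vertical momentum picked up at the aperture is of the order of h divided by the aperture's width. The sketch below (mine, with made-up values for the electron energy and the aperture width) turns that estimate into a spread angle.

```python
import math

h = 6.626e-34         # Planck's constant (J s)
m_e = 9.109e-31       # electron mass (kg)
eV = 1.602e-19        # one electronvolt in joules

E = 100 * eV          # assumed electron kinetic energy (100 eV)
a = 50e-9             # assumed aperture width (50 nm)

p = math.sqrt(2 * m_e * E)          # forward momentum from E = p^2 / 2m
wavelength = h / p                  # de Broglie wavelength
dp_y = h / a                        # vertical momentum spread suggested by the Uncertainty Principle
theta = dp_y / p                    # rough angular half-width of the spread (radians)

print(f"de Broglie wavelength    : {wavelength * 1e9:.3f} nm")
print(f"vertical momentum spread : {dp_y:.2e} kg m/s")
print(f"rough spread angle       : {math.degrees(theta):.2f} degrees")
```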
I want now to elaborate on my conception of what the wave function is in its most general sense. The picture I want to draw may seem radical but I would argue it follows directly from a clear conception of quantum mechanics. Ordinarily we are supposed to talk about the wave function being associated with both a particle and its system but I shall talk simply about a particle's wave function. For every point in space-time, there is a value associated with that point which we can also associate with the particle; the totality of these values is the particle's wave function. If we integrate the absolute square of these values over some region in space-time, we find the probability of locating the particle in that region. This wave function conforms to some appropriate equation – this equation will probably not be the Schrodinger equation, because the Schrodinger equation is not relativistic, but perhaps rather the Dirac equation, an equation that, unlike Schrodinger's equation, is relativistic, or perhaps some even better set of equations yet to be discovered that are compatible with General Relativity. At any specific time, the integral of the absolute square of the wave function over all space equals one, meaning that this one particle must exist somewhere at this time. However to know any of the values associated with any of these points, we need information first, measurements; although this information will necessarily be incomplete it enables us to build a model of the wave function. In fact the wave function simply is this model. Every subsequent measurement may enable us to 'improve' the model – a new measurement may force us to change it, make a somewhat different model (although it may be possible for subsequent measurements to simply confirm the original model). Because the wave function extends throughout all space and time, we don't need to measure the particle directly to acquire knowledge about it. This view is compatible with a phenomenon that students weren't typically taught about in 2005 – quantum entanglement. The basic idea behind quantum entanglement is that if two particles are 'entangled', that is, if they interacted in the past in such a way that some of their information is shared, then a measurement on one at some later time will immediately affect the properties of the other even if the two particles are very far apart. This makes sense if measurements on one particle affect the wave function associated with the other particle. However it goes further than just entanglement. If the wave function extends through all space-time, then any measurement carried out anywhere or at any time can affect the wave function associated with the particle. We might be uncertain about the position of a particular particle and then carry out a measurement somewhere else that finds quite conclusively that the particle isn't there – this measurement will also affect the wave function associated with the particle. My solution to the measurement problem is not to say, as proponents of the Many-Worlds interpretation claim, that the wave function is real and that measurements aren't, that they never really happen (a view that ignores the fact that at least some measurements are necessary to establish what the wave function is at all), but rather that measurements are real and that the wave function is, in a sense, unreal, a model.
And, if the wave function associated with a particle extends throughout all space and time, a measurement made anywhere and anytime will influence the wave function associated with that particle.
This is not the most radical part of my conception of quantum physics. My most radical insight involves also recognising that measurements are subjective – where by 'subjective' I mean that measurements are carried out by particular individuals. Because measurements are subjective, the wave function can be different for different people. I don't know if readers remember but I first started writing about quantum physics in, I think, 2018, in the post "Probability and Schrodinger's Cat" and its sequel, "Probability and Schrodinger's Cat Part 2". The basic idea behind these posts, which I didn't express clearly back then because I hadn't thought it all the way through, is that two different observers might have made or be aware of two different sets of measurements associated with a system; they will consequently be led to construct two different models of the wave function associated with it. This is why 'wave-function collapse' can occur in the world of one observer but not in the world of another, why a cat in a box can be both alive and dead for one observer and definitely either alive or dead for another. This raises two important questions. First, if we suppose that the wave function is indeed a model, does that mean it exists in the mind of a person? Second, do we all live in the same world or does each person live in a different world? I approached these questions obliquely in "The Meaning of Meaning" and may come back to them in a later essay.
The proposal I have just made may seem extraordinary but the point of this essay is not to present this proposal but to go one step further. There are two different directions we can go. The first is to take a perspective we can call epistemological realism. In the examples I gave earlier, it seemed that, although the Uncertainty Principle applies with respect to any particular measurement, if we take multiple measurements and assume that the Law of Conservation of Momentum is a hard law, we can reduce the uncertainty associated with a system below the limit imposed by the Uncertainty Principle. This would imply that the more measurements we perform at different times on a system the more confidence we can have that we know exactly where all the particles in it are and how fast they are moving at any particular time. In the limit, as the number of measurements approaches infinity, if we could have perfectly complete information about a system, our model would conform exactly with reality. The map will have become the territory. This perspective aligns with ideas of determinism and reductionism because we are assuming that everything, even complex human-level phenomena like Hollywood films and Jordan Peterson, can ultimately be explained by simple laws acting on fundamental particles, one of these laws being Conservation of Momentum.
The second perspective is this. We could suppose that no matter how many measurements a person makes on a system, there is always residual uncertainty. Something like the Uncertainty Principle is indeed a hard law. This seems to me to imply that the Law of Conservation of Momentum is not a hard law: it would be emergent like the Second Law of Thermodynamics. The Law of Conservation of Momentum would apply almost absolutely to macroscopic phenomena but would apply much less so at the level of particles. This is because the motion of particles can change randomly between measurements. The implication of this is that 'randomness' or 'non-determinism' is a fundamental feature of the universe where by 'random' and 'non-deterministic', I mean that we cannot explain such changes of momentum in terms of the types of the reductive physical laws discovered by physicists.
Almost from the time I learned about quantum mechanics, I simply assumed that it involved elements of irreducible chance, non-determinism, without realising that this would then imply violations of the Law of Conservation of Momentum. I am not alone. In Determined, Robert Sapolsky, himself a determinist and a reductionist, when discussing quantum physics seems to concede that quantum phenomena can be random. His argument, if you recall from my essay discussing his book, is that these random quantum fluctuations, random changes of motion, do not percolate up to the level of human behaviour. The notion that randomness and non-determinism are necessary features of quantum mechanics seems to be the mainstream view even among physicists. I did not fully realise that true randomness and the Law of Conservation of Momentum are irreconcilable until I watched Sabine Hossenfelder's video "So You Think You Understand Quantum Physics?" a little while ago, a video which greatly influenced this essay although I am not entirely sure if her analysis in this video is totally correct because she assumes both locality and Conservation of Momentum. We seem to need to make a choice between the two laws, between the two perspectives, the first involving the idea that Conservation of Momentum is absolute and the second involving the idea that there is ineliminable uncertainty involved in subatomic processes. Of these two perspectives I prefer the second because it seems to fit more neatly with the view of the world I have developed over the course of my life.
In opting for the second interpretation, I am endorsing a view that readers may think is woo-woo or new agey. I'll explain why I say this. My use of the word 'randomness' is idiosyncratic because what I mean by it is that we cannot explain such changes in momentum naturalistically but rather must invoke something supernatural. (I am aware that to make my position clearer I would need to define what I mean by 'natural' and 'supernatural' but it would take me too far afield to do so here in this essay.) People today assume that if something is 'random' it is 'causeless' but I am suggesting that phenomena can be construed as 'random' within a materialist framework but as 'deterministic' when interpreted within a more mystical paradigm, that they might have causes that can not be explained in terms of simple reductive physical laws but might be explainable in other ways. This second way of understanding the world might provide some comfort to believers in free will, although I do not believe in free will myself because I find the concept incoherent. It might also give comfort to some religious or spiritual people although I do not want to endorse the worldview of Evangelicals who reject Evolution entirely, claim climate change is a hoax, endorse Old Testament attitudes towards homosexuals and women's rights, or support Israel's indiscriminate bombing of civilians in Gaza. My view, that I presented in the essay about Sapolsky's book "Determinism, Quantum Physics, and Free Will" and in the essay "Evolution, Ideas, and Hiveminds" is that there is top-down causation and spooky action at a distance. I also believe that living creatures have minds or souls in some sense separate from their bodies and that psychic phenomena such as clairvoyance, synchronicity, and precognition, although probably not direct telepathy, can genuinely occur. I am permitted to take this position based on reflection on my own life and on the world.
In wading so deeply into quantum physics, as readers will appreciate, I am perhaps moving outside my area of competence. Nevertheless my main argument is based on premises that physicists generally accept. One such premise is that wave functions extend throughout all space and time (a premise that quantum physicists have yet to fully reconcile with either Special or General Relativity). Another is that quantum phenomena are 'random' or 'non-deterministic' although as I said I am using these terms in a different way than physicists often do. Physicists seldom publicise the fact that this second premise implies that Conservation of Momentum must sometimes be violated. Where I depart from the mainstream view of physicists is that I regard measurements as fundamental and wave functions as models that somehow exist in the minds of conscious beings, although I admit that I am unsure what this would mean; physicists tend to regard the wave function as real and some believe that it is measurements that are unreal. In taking a kind of mystical position, I am aware that I am vulnerable to charges of being anti-science because my view might seem to suggest that some phenomena cannot be scientifically explained, and I suppose I should just bite the bullet and accept that this criticism would be fair. It may be that scientific endeavour, in order to continue, requires its practitioners to at least pretend that the natural world can be explained by simple reductive laws, because the alternative would be to 'explain' phenomena by simply saying something like "God did it", which is of course no real explanation at all.
I'll finish this essay with an addendum to the previous essay's discussion of Bayes' Theorem, not because I said anything incorrect in it (I think) but because readers may want a better understanding of how Bayes' Theorem should actually be used. Readers may recall that I pointed out that in order to use Bayes' Theorem, when working out P(E | ~H) you still need a hypothesis related to it. Suppose now that Jones, the visitor I described in the previous essay, knows that Smith has fifty animals on his farm. If Jones's main hypothesis is that every animal on Smith's farm is a sheep, it may be that his best alternative hypothesis is to suppose that Smith has forty-nine sheep and one cow. This would make P(E | ~H), given that he sees a sheep, equal to 49/50. This is not the only alternative hypothesis (the probability will be higher if more alternative hypotheses are included in the calculation) but, if Jones treats this alternative hypothesis as a basis for his calculation, it will enable him to set a lower bound on P(H | E). The more sheep Jones sees without ever seeing a cow, the more P(H | E) will approach one, but it will approach one much more slowly than in the example I gave in the previous essay. It is this kind of reasoning that was actually employed by the mathematicians who initially embraced Bayes' Theorem in the eighteenth and nineteenth centuries. However, to reiterate the point I made at the end of the previous essay, statisticians do not use this kind of method when working out the probabilities associated with the null hypothesis and many statistical errors may spring from false assumptions baked into the versions of the null hypothesis researchers use. I am skeptical of much population statistics but my skepticism should be the topic of another essay.
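To make the updating concrete, here is a small sketch of my own (the 50/50 prior and the assumption that each sighting is an independent look at a randomly chosen animal are mine, not anything from the previous essay): it shows how slowly P(H | E) climbs towards one when the rival hypothesis is forty-nine sheep and one cow.

```python
# Hypothetical numbers: Jones starts with a 50/50 prior between
#   H  = "all fifty animals are sheep"     (P(sheep sighting | H)  = 1)
#   H' = "forty-nine sheep and one cow"    (P(sheep sighting | H') = 49/50)
prior_H = 0.5
p_sheep_given_H = 1.0
p_sheep_given_alt = 49.0 / 50.0

posterior = prior_H
for k in range(1, 101):
    # Bayes' Theorem applied after each sighting of a sheep (sightings assumed independent).
    numerator = posterior * p_sheep_given_H
    denominator = numerator + (1.0 - posterior) * p_sheep_given_alt
    posterior = numerator / denominator
    if k in (1, 10, 50, 100):
        print(f"after {k} sheep sightings: P(H | E) = {posterior:.3f}")
```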