One advantage of writing a blog like this is that I can talk about anything I want. If I have readers at all, it may be that they are interested in my life, the condition with which I was diagnosed and from which I recovered, and other anecdotes drawn from my own lived experience. They may be interested in religion or narrative theory or Meinongianism or any of the other various topics I have covered from time to time. I may have readers who know something about quantum physics and are interested in speculation concerning it; I may have readers who know little about quantum physics but read the essays I have written about it because I have endeavoured to describe this recondite theory in clear, accessible language. My vain conceit is that readers may come back to this blog because I am a fairly interesting person. Because I have little sense of who my readers are and because I earn no money from this blog there is no possibility of 'audience capture' – I am not like Bret Weinstein and Heather Heying, who have built up an audience of vaccine skeptics and have to pander to that audience for an income. I am not like Alex O'Connor, who specialises in discussing arguments for and against the existence of God because that is part of his brand. I can talk about anything that has taken my interest.
Often in the interval between writing posts I consider various subjects upon which to expatiate and am tempted to talk about all of them in a single essay. When considering what to write about in this essay, I thought about discussing what left-wing politics means to me and also considered writing about the relationship between religion and morality. I have decided not to pack everything into the same post but instead to concentrate on a single topic, one that has surfaced several times during this blog: probability. I want to talk briefly about the Monty Hall problem and then go into more depth on the Two Envelope Paradox. I want to write about the second because I became obsessed with it for a couple of weeks, and when I tried to investigate it on the internet I found that every proposed solution to it, on YouTube and elsewhere, was wrong. Although I have not come up with the definitive solution, as I hope you will see, I have made major inroads towards one. So, I warn the reader, this essay involves mathematics. The mathematics involved is just simple algebra, but I know from experience that arguments involving mathematics are hard to read unless you have specialised in mathematics or math-heavy subjects at a university level. If maths is not your thing you can skip this post and wait for the next essay, which will concern the history of religion, how morality got involved, and whether or not it might be possible to have religion without morality.
For a warm-up, let us consider the Monty Hall problem. I imagine my readers will be familiar with it because it is famous. The pitch is that we have a gameshow in which a contestant is faced with three doors. Behind one door is a new car, which the contestant wants, and behind the other two are goats, which the contestant definitely doesn't want because, perhaps, she lives in a small apartment in Manhattan. The contestant doesn't know which door conceals the car. She chooses one of the doors and tells the gameshow host which one she has chosen. The gameshow host knows which door the car is behind. If she has guessed incorrectly, the gameshow host opens another door behind which is a goat, leaving two doors: the one she picked and one other, which he knows conceals the car. If she has guessed correctly, the gameshow host opens one of the other two doors at random: that is, with a conditional probability of 1/2 for each door. (The probabilities are conditional because they depend on the contestant's having picked the correct door.) The gameshow host then asks the contestant whether she would like to stick with her original guess or switch. After she has decided, the door will be opened and she will find out whether she has won a car or a goat. The problem is this. What should the contestant do?
The answer turns out to be simple. The contestant should always switch. The probability of winning the car by sticking is 1/3 and the probability of winning it by switching is 2/3. You can confirm that this must be the case by simply repeating the 'experiment' many times. Everyone who discusses the Monty Hall problem nowadays accepts that you should always switch, but this wasn't the case in 1975 when the problem was first posed, or in 1990 when it became famous after appearing in a column in the magazine Parade. The columnist's solution provoked an enormous backlash, sometimes involving eminent mathematicians, who couldn't believe that the solution Marilyn vos Savant had advanced was right. Surely it should make no difference whether you switch or not? Surely the odds are 1/2 for both sticking and switching because there are only two doors left? One reason for this confusion is that the Monty Hall problem strikes right at the heart of what we mean by probability. Nowadays, yes, we all know the correct answer, but the philosophical justification, the reason why the correct answer is correct, remains disputed.
It is not the purpose of this essay to discuss the Monty Hall problem in depth but I want to present a variant on the Monty Hall problem that makes salient a counterintuitive implication. Suppose we imagine a gameshow identical to the one discussed above in every way except that the gameshow host doesn't know which door the car is behind. After the contestant randomly chooses a door, he then randomly opens any one of the three doors. Hopefully my readers know enough mathematics to realise that there are nine possible outcomes; hopefully, too, my readers will realise that each outcome is equally probable. We are not interested in outcomes in which the gameshow host opens the door the contestant has picked or outcomes in which he opens the door concealing the car. So we throw these outcomes out. This leaves four possible outcomes, two in which she has correctly chosen the car and two in which she hasn't. These outcomes are still equally probable. So, in the variant Monty Hall problem, the probability of winning the car by sticking and the probability of winning it by switching are both the same, 1/2.
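Both versions are easy to check by brute force. The following Python sketch (my own illustration; the function name and structure are simply choices for demonstration) simulates the classic game and the ignorant-host variant, discarding the spoiled rounds in the variant exactly as described above.

```python
import random

def play(trials, host_knows):
    """Return (stick win rate, switch win rate) over many games.

    host_knows=True is the classic game: the host deliberately opens a
    goat door.  host_knows=False is the variant: the host opens a door
    at random and we throw out the spoiled rounds.
    """
    stick_wins = switch_wins = valid = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        if host_knows:
            # Host opens a door that is neither the pick nor the car
            opened = random.choice([d for d in range(3) if d not in (pick, car)])
        else:
            opened = random.randrange(3)
            if opened == pick or opened == car:
                continue  # discard this round, as in the variant
        valid += 1
        stick_wins += (pick == car)
        # Switching means taking the one remaining closed door
        switched = next(d for d in range(3) if d not in (pick, opened))
        switch_wins += (switched == car)
    return stick_wins / valid, switch_wins / valid
```

With a large number of trials, `play(n, True)` converges on win rates of about 1/3 for sticking and 2/3 for switching, while `play(n, False)` gives roughly 1/2 for both, just as the analysis above predicts.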
Suppose an alien accidentally catches the gameshow on TV. He sees the initial choice, the one door being opened, the contestant making her decision whether to switch or not, and then the next door being opened. If the alien doesn't know whether or not the gameshow host himself knows which door the car is behind, he has no way of knowing if he is watching Monty Hall Classic or the variant I have just presented. What is the difference between the two? In the first, it is rational for the contestant to switch if she knows that the host knows which door the car is behind and is deliberately opening another door. In the second, there is no rational reason for her to prefer either sticking or switching because she knows that the host doesn't himself know which door conceals the car. Somehow her prior knowledge of the host's knowledge affects the probabilities she assigns to either sticking or switching. Furthermore these probabilities seem to be, in a sense, objective, empirically grounded, because we can determine them either through rational analysis or through repeated experimentation. It is common to say that probability estimates depend on information, but this discussion leads us to wonder how exactly information is being communicated to the contestant and what 'information' actually is. In probability, there are deep mysteries indeed.
The main topic of this essay is, however, not the Monty Hall problem but the Two Envelope Paradox. It can also be set out simply. Suppose someone presents you with two envelopes and tells you that one envelope contains twice as much money as the other but doesn't tell you which. You pick one. Before you open it, you are given a choice. You can either stick with the original envelope or you can switch. Intuitively, because the situation is symmetrical, it should make no difference whether you switch or not. I shall call this the Intuitive Common Sense argument or ICS argument. However it is possible to argue that you should always switch, that this is the best strategy. According to Decision Theory, we should multiply the utilities of outcomes by their probabilities, thus forming something known as Expected Utilities. Because we don't know the amount of money in the envelope we picked, we call it x. The other envelope contains either x/2 or 2x. Because the probability of having chosen the higher amount is 1/2, the Expected Utility of switching and thus getting the lower amount is (1/2)(x/2). Because the probability of having picked the lower amount is 1/2, the Expected Utility of switching and getting the larger amount is (1/2)(2x). The Total Expected Utility of switching is therefore the sum of the two, (1/2)(x/2) + (1/2)(2x), or 5x/4. Because there is a higher Expected Utility associated with switching than with sticking – the total Expected Utility gained by switching is 5x/4 while the total Expected Utility of sticking is x – we should always switch. But then suppose you are asked again if you want to switch or not. By the same reasoning, you should switch once more. And continue switching forever. Obviously there must be something wrong with this argument somewhere.
There are two ways to conceptualise this paradox: the Intuitive Common Sense perspective, which I am calling the ICS argument, and the argument involving Expected Utility, which depends on the equation written above. Let's call this equation the TEP (the Two Envelope Paradox equation) because I will come back to it repeatedly. According to the TEP equation, EU = (1/2)(x/2) + (1/2)(2x), where EU stands for the Expected Utility of switching and x stands for the value in the envelope chosen first. Everyone assumes that the ICS argument, which suggests that it makes no difference whether you switch or not, is correct (which it is) – the solutions to the paradox always involve trying to find some flaw in the reasoning of the TEP argument. As I've said, all the solutions that I've found are incorrect. One attempted solution I found relies on the exponential function – obviously an attempt to solve this problem by invoking exponents must have gone wrong somewhere. According to Jade Tan-Holmes, who has a YouTube channel called "Up and Atom", the problem is that the TEP is using two different values of x. The first time x appears in the TEP it is the higher of the two amounts and the second time the lower. If we correct this error, Jade argues, the equation simply becomes EU = x and the paradox apparently evaporates. The problem here is that Jade is wrong. There is a mistake in the TEP but this is not the mistake. x simply stands for the amount in the first envelope picked: it is the other envelope that contains either x/2 or 2x. This is clearer if we give x a value. Let's suppose the picked envelope contains $10 – in that case the second envelope contains either $5 or $20 and, assuming the TEP is correct, switching has an Expected Utility of $12.5. Therefore, if we get a $10 (or any other value), we supposedly should always switch. So this solution, a solution which Jade presumably got from supposed experts in probability, fails.
Another solution I found attempts to show that the TEP doesn't apply when the envelopes are unopened but does apply when one is opened. If you open an envelope and find $10, they say, you should assume that the other envelope contains on average $12.5 and always switch. This article claims the strategy would be borne out by multiple experiments. I won't name and shame this site, partly because I can't remember its name, but this is also wrong, especially the claim about multiple experiments, as I will now show.
Having thought so much about the Two Envelope Paradox recently, I found it very helpful to think in terms of what I shall call studies, experiments and scenarios. I shall define scenario in a moment. I believe that if we think about the paradox in these terms, we can understand it better and at least approach a solution. A study consists of, say, a hundred experiments, and each experiment consists of a person choosing between two envelopes, each person knowing nothing of the other subjects. Let us suppose that in each experiment one envelope contains z dollars and the other 2z dollars, where z does not mean the amount in the envelope first picked (which throughout this essay I denote with the letter x) but rather the lower of the two amounts. On average, half of the people will randomly select the z envelope and half will pick the 2z envelope. Let us suppose that they are all following the switch strategy. Consequently everyone who picked the z amount switches to the 2z and everyone who picked the 2z amount switches to z. The total amount of money they will accrue collectively is 150z and on average each person will win 3z/2. Now suppose they all stick. Everyone who picked the z amount first keeps it and everyone who picked the 2z amount first keeps it. They will still all make the same amount of money collectively and on average. If they all switch, the number who gain the higher amount by switching is exactly balanced by the number who end up with the lower amount by switching. It seems that neither strategy, switching or sticking, is better than the other. This little thought experiment shows that the ICS conception, that it makes no difference whether you stick or switch, is the correct one.
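This bird's-eye argument can be checked directly. Here is a minimal Python simulation of such a study (my own sketch, with names of my choosing), in which every experiment contains z and 2z:

```python
import random

def run_study(n_subjects, z):
    """Each subject faces one envelope containing z and one containing 2z.
    Returns average winnings under the all-stick and all-switch strategies."""
    stick_total = switch_total = 0
    for _ in range(n_subjects):
        pair = [z, 2 * z]
        random.shuffle(pair)
        picked, other = pair
        stick_total += picked   # stickers keep what they picked
        switch_total += other   # switchers take the other envelope
    return stick_total / n_subjects, switch_total / n_subjects
```

Both averages converge on 3z/2, confirming that neither strategy beats the other.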
I shall now define a scenario. It will be helpful to introduce some genuine numbers and, as I shall show, it is possible to introduce genuine numbers without distorting the logic. A scenario is a description of the types of experiment involved in a study. In Scenario A, all hundred experiments involve a $10 and a $20. As I have shown in the previous paragraph, it makes no difference whether subjects all stick or all switch – either way each will make on average $15. Let us imagine Scenario B, in which all hundred experiments involve a $5 and a $10. Once again it makes no difference whether subjects all stick or all switch – either way each will make on average $7.5. Now consider Scenario C. Suppose we have a study in which fifty experiments involve a $5 and a $10 and fifty involve a $10 and a $20. On average 25 people will pick a $5, fifty people will pick a $10, and 25 will pick a $20. Let us suppose that they all have been persuaded by the TEP argument and so all switch. Everyone who picked a $5 first will get a $10, but 25 of the people who picked a $10 will get a $5. 25 of the people who picked a $10 will get a $20, but everyone who picked a $20 will get a $10. Again it makes no difference whether everyone follows a switch strategy or everyone follows a stick strategy; on average each subject will receive the same amount of money either way. The total amount of money they will collectively receive is $1,125 and on average they will receive $11.25. Importantly, however, if Scenario C obtains, not only everyone who initially picks a $5 but everyone who initially picks a $10 benefits from switching – the EV is $12.5 for those who initially pick a $10. The problem is that the losses suffered by those who initially picked a $20, not knowing they'd picked the highest amount, cancel out the profits made by the other subjects.
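Scenario C can be simulated in the same spirit (again a sketch of my own, assuming each experiment is equally likely to use either pair of amounts):

```python
import random

def scenario_c(n_experiments):
    """Half the experiments use a ($5, $10) pair, half a ($10, $20) pair.
    Returns average winnings for the all-stick and all-switch strategies."""
    stick_total = switch_total = 0
    for _ in range(n_experiments):
        pair = [5, 10] if random.random() < 0.5 else [10, 20]
        random.shuffle(pair)
        picked, other = pair
        stick_total += picked
        switch_total += other
    return stick_total / n_experiments, switch_total / n_experiments
```

Both strategies converge on an average of $11.25 per subject, matching the calculation above.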
Furthermore, as should be clear by now, it makes no difference whether or not the envelope is opened before the stick-or-switch decision is made, contrary to what the site I mentioned above says. Suppose all the envelopes contained either $10 or $20. If an individual opens an envelope containing $10, not knowing whether this is the high or the low amount, then rationally she should assume that the other envelope contains either $5 or $20. If a person opens an envelope containing $20, then rationally she should assume that the other envelope contains either $10 or $40. The ICS argument leads us to the same conclusion whether or not one of the envelopes is opened before the choice to stick or switch is made. Some of the mathematicians I read argue that we should bring in other considerations. We might say that the university carrying out the study is cash-strapped and so $10 is more likely than $40. I don't believe that bringing in such considerations is justified by the paradox as usually presented. Considerations such as these are irrelevant.
What I have argued so far is that the ICS argument must be correct because when studies involving a hundred experiments (or any large number of experiments) are actually carried out, it is evident that neither the switching nor the sticking strategy is better than the other. The TEP argument must be wrong, and this requires some explanation. Let us now imagine the following. A subject in a study, whom we will call Jane, is presented with a pair of envelopes and opens one, finding that it contains $10. The other envelope must contain either $5 or $20. As I have argued above, whether or not the picked envelope is opened makes no difference. Let us suppose that in every experiment in a study, one envelope contains $10 and the other $20. In other words Scenario A obtains. If the subject knows she is in Scenario A, she should switch. However it could be that she is in Scenario B, in which every experiment contains either $10 or $5. If she knows she is in Scenario B, she should stick. But she doesn't know whether she is in Scenario A or Scenario B. It could be that she should assign probabilities to each Scenario – she could apply the Principle of Indifference to Scenarios, presuming that the probability of Scenario A is 1/2 and the probability of Scenario B is 1/2. If she does this, the TEP equation actually holds: the Expected Value of switching is (1/2)($20) + (1/2)($5), or $12.5.
Scenarios A and B are not, however, the only possible Scenarios. Consider Scenario C again. This scenario is equivalent to saying that Scenarios A and B are equally probable and mutually exhaustive and exclusive. In this scenario, fifty experiments involve a $5 and a $10, and fifty involve a $10 and a $20. As I have shown, it still makes no difference whether all hundred subjects stick or switch from a bird's eye perspective – they will still win the same amount of money on average. But we are now considering a subject, Jane, who opens an envelope and finds a $10. In this case, as above, the TEP actually holds – the Expected Value of switching is $12.5. So she should switch. But she should only switch if she knows that she is in Scenario A or Scenario C (rather than Scenario B) and knows therefore that her chosen envelope contains either the intermediate or the smallest amount.
Although I have so far mentioned only three scenarios, if the study contains a hundred experiments and the subject knows the money amount in her envelope, which we are here assuming to be $10, then there are in fact a hundred and one possible scenarios (the ordering of experiments is irrelevant). It might be that all experiments involve a $5 and a $10, that they all involve a $10 and a $20, that it is split 50-50, that 16 experiments contain a $5 and a $10 while 84 contain a $10 and a $20, and so on. It is at this point that I introduce my clever idea. It seems we need to introduce a new variable, a. This new variable is the number of experiments containing a $10 and a $20 in the study divided by the number of experiments containing a $10. Suppose a subject opens an envelope and finds a $10. The probability of getting a $20 by switching is a and the probability of getting a $5 is (1 - a). The Expected Value of switching is 20a + 5(1 - a), or 5 + 15a. If we are in Scenario A, then a = 1 and the Expected Value of switching is $20, as we would expect. We should always switch in this case. If we are in Scenario B, then a = 0, and the Expected Value of switching is $5, as we would expect. We should always stick in this case. If we are in Scenario C, then a = 1/2 and the Expected Value is $12.5, as we would expect if the TEP actually holds. In this case we should always switch. More generally, if the amount in the picked envelope is x, then regardless of whether or not we open the envelope, the Expected Value of switching is a(2x) + (1 - a)(x/2) or, when simplified, x/2 + 3ax/2. If a = 0, the EV = x/2 and we should always stick; if a = 1, the EV = 2x and we should always switch. I shall call this new equation the New and Improved TEP equation because it involves this new variable, a. This variable is a ratio, the number of experiments that contain a $10 and a $20 divided by the total number of experiments containing a $10.
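The New and Improved TEP equation is easy to express as a function (a sketch of my own; the name is mine):

```python
def ev_of_switching(x, a):
    """New and Improved TEP equation: EV = a(2x) + (1 - a)(x/2) = x/2 + 3ax/2,
    where x is the amount in the picked envelope and a is the probability
    that the other envelope contains the larger amount."""
    return a * (2 * x) + (1 - a) * (x / 2)
```

For x = $10 this gives $5 in Scenario B (a = 0), $12.5 in Scenario C (a = 1/2) and $20 in Scenario A (a = 1), just as the paragraph above works out by hand.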
The reader may ask if the studies we are imagining cover all possible circumstances. We might have a study in which some experiments contain a $1 and a $2, some a $3 and a $6, some a $5 and a $10, some a $10 and a $20, and so on. We might have a study in which every experiment is different (although one envelope in each always contains twice as much as the other). It might seem that if the study consists of a hundred different experiments we need a different distribution variable for each experiment, not just the single variable a. However, recall that we are assuming that the subject picks a $10 first. The only possible values in the other envelope are $5 and $20. To reiterate the point, because it is important, this variable should be defined as the number of experiments containing a $10 and a $20 divided by the number of experiments containing a $10, where we are assuming a very large number of experiments.
We are now in a position to make a first pass at what is wrong with the reasoning involving the TEP that leads to the conclusion that we should always switch. The TEP argument involves two assumptions. Firstly, it assumes that a = 1/2, that we are in Scenario C. If we replace a in the New and Improved TEP equation with 1/2, we arrive back at the original TEP equation. Secondly, it assumes that the amount in the first envelope picked is the intermediate amount. What do I mean by intermediate amount? In the studies we imagined above, some experiments contain a $5 and a $10 and others contain a $10 and a $20. The $10 appears in all experiments and, quite evidently, is an intermediate value between the $5 and the $20. The problem with the reasoning that leads to the paradox seems to be that neither assumption is justified by the information given to us. We have no idea what the value of a is and cannot justify the assumption that we have initially picked the intermediate amount. It could be argued that, because we cannot justify either assumption, the whole edifice that led to the conclusion that we should always switch has no foundation. We simply lack the necessary information.
When first thinking about the Two Envelope Paradox it occurred to me that we could imagine another type of experiment that might be equivalent to it. Suppose the experimenter gives the subject $10 and then says, "If you want, you can bet this ten dollars on a gamble. I'll toss a coin and if heads turns up, I'll give you your $10 back with another $10. If tails comes up, I'll only give you $5 back." In the same way that, if the TEP is correct, it would always be rational to switch, it seems you should always take this bet. What I subsequently realised is that this new experiment is not really equivalent to the Two Envelope Paradox because, in this situation, a has a definite value, 1/2, and it is evident that $10 is the intermediate amount.
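The disanalogy shows up in simulation. In this coin-toss gamble a genuinely equals 1/2 by construction, so the bet returns more than the stake on average (again my own sketch):

```python
import random

def coin_gamble(trials):
    """Stake $10; heads returns $20, tails returns $5.  Because the
    probability of doubling is 1/2 by construction, the expected return
    is (1/2)(20) + (1/2)(5) = $12.5, so taking the bet is rational."""
    total = 0
    for _ in range(trials):
        total += 20 if random.random() < 0.5 else 5
    return total / trials
```

The average return converges on $12.5, comfortably above the $10 stake – unlike the envelope situation, where the corresponding value of a is unknown.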
In our first pass we seem to have found that the TEP argument fails because we lack the necessary information. Yet we have not fully come to grips with the paradox. The intuition implicit in the ICS argument, that it makes no difference whether we switch or stick, does seem reasonable, justifiable, and is borne out by the thought experiments we have considered. I have just suggested that the reasoning that led to the switch strategy fails because we lack the information required to justify it. However, we do seem to have enough information to justify the position that it makes no difference whether we switch or not. So we need to explore the paradox a little further.
We have imagined studies that each consist of a hundred experiments, and I have argued that there could be many different studies, each involving a different scenario. I have suggested that, because we don't know which scenario obtains, the value of a is indeterminate. However, it might be possible to reply to this argument in the following way. We could apply the Principle of Indifference to scenarios. We could say that, because we lack the information required to know which scenario obtains, we should assume that each scenario is equally probable. This is the fundamental idea behind the Principle of Indifference. Or we could say that a falls on a roughly normal distribution with Scenario C as the mean scenario and Scenarios A and B as extreme outliers. Either way the distribution is symmetric around Scenario C. Some philosophers might argue that, because of the Principle of Indifference, we are justified in supposing that a = 1/2. They might argue that this value of a applies to all experiments in a study. In other words, when faced with a choice between two envelopes, one of which contains twice as much as the other, we are justified in imagining that we are participating in a study in which Scenario C obtains and that we have chosen the intermediate amount, and so should always switch.
However, even if we do try to set a as being equal to 1/2, we still cannot reconcile the TEP argument with the ICS argument. Consider Scenario C. Everyone who got a $10 should always switch, presuming she knows that $10 is the intermediate amount. But now consider a subject, call him Bob, who gets a $20. He doesn't know that he is part of the particular study described above, doesn't know that $20 is the highest amount, and so rationally should assume that the other envelope contains either $10 or $40. If he accepts the TEP equation, he should switch. If he accepts the New and Improved TEP equation and assumes further that a is equal to 1/2, he should also switch. But he would be wrong to switch. Why? Earlier I defined a as the number of experiments containing a $10 and a $20 divided by the number of experiments containing a $10. I did this in the context of the study I was discussing. In Bob's case, the correct value of a is the number of experiments containing a $20 and a $40 divided by the number of experiments containing a $20. This number is of course equal to 0 because there are no experiments involving a $40 prize. If we substitute 0 for a in the New and Improved TEP equation, we find that the Expected Value of switching is $10. If Bob knew that he had the highest amount, he would stick; the problem for him is that he doesn't know this.
We now arrive at a fundamental point. In introducing the variable a, I implied that this variable is the same for all experiments in a study – but this is not the case. The studies we have considered usually involve three different monetary prizes. When a subject picks an intermediate value, the value of a depends on the number of experiments involving the highest value divided by the number involving the intermediate value. However, in experiments in which the person picks the highest value, a should be 0. In experiments in which the subject picks the lowest value, a should be 1. The idea that we can simply set a as being equal to 1/2 because of the Principle of Indifference doesn't work because subjects cannot assume they have the intermediate prize: they do not know whether the money in the envelope they have picked is the highest, lowest, or intermediate amount. The value of a in a given experiment depends both on the scenario and on which of the three values has been picked. So we have arrived back at the claim I made earlier, that the loophole in the TEP argument is that we don't know the value of a. Simply stipulating that it should be 1/2 fails.
I'll restate the argument I have made so far. Suppose Jane picks an envelope. She might not open it, or she might open it and find that it contains $5, $10, $20 or some other amount. The EV of switching is x/2 + 3ax/2, where x is the amount in her envelope. a can now be simply defined as the probability that Jane has picked the lower of the two amounts, a probability that she neither knows on the basis of the information she possesses nor can justifiably guess at as being, for instance, 1/2. Because she does not know a, she cannot work out the EV.
This would suggest that it is impossible to work out whether you should stick or switch. Yet the ICS argument, which says that both strategies are equally good, seems not only Intuitive Common Sense but must be true based on the arguments I made above. Is there a way to reconcile the two approaches? Let us propose the following argument. 1.) The ICS argument is correct and so both the stick and switch strategies are equally good. 2.) If so, the EV of switching must be equal to the EV of sticking. 3.) The New and Improved TEP equation says that EV = x/2 + 3ax/2. If these premises are true, then we can set EV as being equal to x and formulate the following equation: x = x/2 + 3ax/2. If we now solve this equation for a, we find a must equal 1/3. In other words, if both the New and Improved TEP argument and the ICS argument are sound, then the probability of Jane being in an experiment in which she has chosen the lower amount and should switch is 1/3, while the probability that she has picked the higher amount and should stick is 2/3.
This makes sense mathematically but does not make sense intuitively. There are at least three problems with it. First, if you or Jane or anyone else actually is given a choice between two envelopes, one of which contains twice as much as the other, I don't believe anyone would assume that there is only a one-third probability that he or she has initially picked the lower amount. Everyone assumes that the odds are fifty-fifty. Secondly, the same type of logic can lead us to believe that it is the other envelope that has the one-third probability of containing the lower amount. The third problem again involves a little algebra. Suppose we go right back to the beginning. Suppose someone presents you with two envelopes and tells you that one envelope contains ten times as much as the other. Intuitively we would still believe that both sticking and switching are equally good strategies, and we would be right. However, the New and Improved TEP equation is now EV = x/10 + 99ax/10. If we again set EV as being equal to x, we find that the probability that Jane has chosen the lower amount and should switch is 1/11. More generally, suppose that someone presents you with two envelopes and says that one envelope contains n times as much as the other. It seems, according to this line of reasoning, that the probability that one will initially pick the lower amount and should switch is 1/(n+1). In other words, it seems that the probability distribution is determined by the ratio between the amount of money in one envelope and the amount of money in the other. This seems deeply unintuitive.
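The general result can be checked with exact arithmetic. For a ratio of n, the EV of switching is x/n + a(n² - 1)x/n; setting this equal to x and solving for a, the x cancels and we are left with (n - 1)/(n² - 1), which reduces to 1/(n + 1). A quick sketch using Python's fractions module:

```python
from fractions import Fraction

def equalising_a(n):
    """The value of a for which the EV of switching, x/n + a(n^2 - 1)x/n,
    equals x (the EV of sticking); x cancels, leaving (n - 1)/(n^2 - 1),
    which Fraction automatically reduces to 1/(n + 1)."""
    return Fraction(n - 1, n * n - 1)
```

Here `equalising_a(2)` gives 1/3 and `equalising_a(10)` gives 1/11, matching the values derived above.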
I have spent a couple of weeks thinking about this problem. I think the New and Improved TEP equation involving the variable a is a very important step in the right direction but is only helpful in actual situations when one knows the value of a. For about a week I have been trying to find some way to reconcile the New and Improved TEP argument with the ICS argument in situations where we don't know the value of a. It seemed to me that we might find an equation in which all occurrences of this variable cancel each other out and in which the EV of switching is equal to the amount of money in the picked envelope (whether opened or not). I have not been successful. I could spend another couple of weeks thinking about it but feel it would be fruitless. I know that I have strung the reader along, in that you may have hoped I would reveal the solution, and I am sorry that I have fallen at the last hurdle.
For what it's worth, I suspect that this paradox lies right at the heart of Decision Theory. This might be why it seems so insoluble. In a post a few months ago, "Rationality and Irrationality", I criticised Decision Theory on the grounds that I don't believe anyone makes decisions in the way Decision Theorists prescribe, or if they do, only occasionally, when gambling. It may be the case that Decision Theory is itself incoherent. Either there is a serious problem at the heart of Decision Theory that this paradox reveals or, for some reason people have not, so far as I know, clearly articulated, Decision Theory does not apply to the Two Envelope Paradox. Mathematical psychologists like Kahneman often bemoan the fact that the majority of people seem to them irrational in that they don't understand statistics, but the most eminent and rational of mathematicians have failed to solve this particular paradox.
I shall leave my discussion of the Two Envelopes Paradox here.
Often in this blog I make corrections to previous posts. Usually I do this at the beginning of an essay but this time I am including them at the end. I actually thought the previous essay was quite good but there were a couple of points in it where I could have presented my argument better. In the essay I discussed Sapolsky's claim that desert cultures tend to produce monotheistic religions while rainforest cultures are much more likely to be polytheistic. I raised the objection that you could not be certain of a direct causal connection between the two, something he claimed. I said that the fact that most monotheistic religions can be traced back to the Middle East could be described as an "historical coincidence" but, of course, there are good historical reasons for the emergence of Judaism, Christianity, Islam, and the Baha'i faith, among others, in the Middle East. What I should have said was that any attempt to say that desert cultures somehow naturally engender monotheistic religions is far too simplistic to be true. I also said that the only way we could know whether desert cultures naturally engender monotheistic religions would be to perform an experiment on human history in which we run through it on a number of different Earths. This was dumb – there is a simpler and more realistic alternative. The hypothesis that desert climates produce monotheistic religions could be strongly supported (if not fully proved) by showing that many monotheistic religions arose independently of each other in different desert environments. The problem with Sapolsky's claim, as I implied but did not express clearly, is that, so far as I know, all monotheistic religions are interrelated and usually sprang from one another. Furthermore there are counterexamples. Native Americans in North America often lived in arid conditions but did not devise a monotheistic religion.
Similarly, many Australian Aboriginals lived (and live) in desert environments but did not develop any kind of monotheism. Sapolsky's claim was stupid and I wish I had replied to it more clearly. There were some other things in the previous essay I wanted to clarify or correct but I shall not get into them here. It was the detail concerning monotheism that had bothered me the most in recent weeks.
I shall discuss religion more fully in the next essay, an essay which won't concern mathematics and should be easier to read.