Thursday, 5 October 2023

Rationality and Irrationality

We generally trace Western philosophy back to the ancient Greeks, particularly the writings of Plato and Aristotle, and from Aristotle we get (although he never states this explicitly) one of the first and most influential definitions of the essence of mankind: "man" is "the rational animal". The ideal of rationality has been at the heart of philosophy ever since. Many philosophers have embraced the optimistic conviction that reason can arrive at truth, even concerning issues of morality. Kant, for instance, argued that ethical principles can be derived from pure reason. Perhaps the most important ethicist of the twentieth century, at least in the Anglo-American tradition, John Rawls, said, "Human rights are not the consequence of a particular philosophy, nor of one way among others of looking at the world. They are not tied to the cultural tradition of the West alone, even if it was within this tradition that they were formulated for the first time. They just follow from the definition of justice." (I am here quoting from a very interesting YouTube clip by Jonas Ceika, "The Problem With Human Rights.") In his important and interesting book Reasons and Persons, the ethicist Derek Parfit uses the term "rational" repeatedly but, annoyingly, never defines it, even though it is often crucial to the arguments he is making. The implication seems to be that the term "rational" is transparent to everyone and so requires no definition. Of course there has always been an intellectual counterculture that has questioned the doctrine that rationality is at the heart of what it means to be human. Hume famously suggested that we cannot derive an 'ought' from an 'is' and, if we take this seriously, it can lead us to wonder whether any ethical system can have a rational justification at all. Another subversive, although not a philosopher, Sigmund Freud, argued that our actions often arise from unconscious (i.e. irrational) motivations, and a couple of much later psychologists with an entirely different theoretical bent from Freud, Amos Tversky and Daniel Kahneman, became famous for studies that seemed to show that people are often irrational.

Richard Rorty distinguished between two different types of philosophical endeavour, the constructive and the destructive, or alternatively the systematising and the therapeutic. It is useful, if perhaps a little simplistic, to suggest that the first tradition is united by a faith in human rationality while the second takes a far more sceptical view of whether humans are really rational. Famous exemplars of this second tradition include Nietzsche, Wittgenstein in the later part of his career, Derrida, Foucault, and Rorty himself. Naturally we cannot have a destructive form of philosophy without other philosophies to destroy, to critique or deconstruct. If the second tradition won out we might be left with no philosophy at all, and so, arguably, we need both. Another way of characterising these two traditions is to note that the first, which assumes that humans are rational and that reason can provide a foundation for morality and an understanding of human nature, is concerned, I would suggest, with 'norms', while the second, which questions these assumptions, is concerned with 'descriptions'. The problem is that philosophers of the first tradition are not always clear as to whether they are proposing norms that people ought to adhere to or descriptions of actual people and actual practices. This problem has been there from the beginning. It is unclear whether Aristotle's definition is descriptive or normative – are all human beings rational all the time, or is rationality a potentiality that humans have and other animals lack, one that humans ought to exercise but might not? This aporia or equivocation seems to me a serious defect in much systematising philosophical discourse. The question of whether 'rationality' describes all humans all the time or is a norm humans ought to adhere to but might not will be a significant theme of this essay.

In one of my recent posts, "Probability, Time, and Bad Science Part 2", I suggested that Postmodernism, a form of philosophy very much in the destructive tradition, had died a couple of decades ago. Of course I have not surveyed every philosophy department in the world, nor conducted research asking journalists and other influencers what philosophy books they agree with. My impression nevertheless is that the first tradition is once again in vogue, that a new kind of 'rationality' has become dominant and, having filtered out of academia, colours much of what passes today for popular culture. A new kind of systematising has displaced the therapeutic tradition we associate with Nietzsche. I put the term 'rationality' in inverted commas because, as I will hopefully show, I do not believe it to be genuinely rational at all (although adherents to what I have termed the New Scientism, such as Steven Pinker, may believe that they have a monopoly on rationality). As my readers will know, early last year I embarked on a Master of Philosophy. I have, perhaps temporarily, dropped out. I'm not going to go into all the reasons for this here, but a part of the difficulty I faced is that, now being forty-four, I have settled opinions on a number of subjects. I don't believe in free will. I don't believe in Neo-Darwinism. I suspect that there may be some kind of supernatural dimension to the world but currently don't believe in an afterlife because I can't imagine what it could be like. I don't draw a clear distinction in my mind between philosophy, science, literature, and literary interpretation, which is a problem because the boundaries between disciplines are very important in modern academia, even though arguably literary interpretation is closely related to philosophy of language, and quantum physics and the second law of thermodynamics are very relevant to any discussion of time and free will. 
I have an issue with Ethics that I will discuss later in this essay. Most importantly, I tend to be most sympathetic to philosophers in the therapeutic tradition, Nietzsche and the French Postmodernists, the philosophers I was exposed to many years ago when I was studying English; the philosophers I studied over the last year and a half, however, tended to be systematisers. Of course, systematising philosophy has been the mainstream form of philosophising since the Greeks, and so it is unsurprising that it has reasserted itself. But, although the tide has currently turned and swept the Postmodernists out to sea, at some point I'm sure the second tradition will return.

Another difficulty I found with some of the philosophy I was exposed to concerns methodology. Philosophers often make claims without evidence. The way it seems to work is that philosophical arguments often depend on shared intuitions: that is, a philosopher will make a claim hoping that the reader will simply nod his head and say to himself, "Well, that seems right to me." The problem with this is that intuitions can be incorrect. Human beings do not have some kind of magical ability to just know that something is true (even though Kant and Rawls seemed to believe this); rather intuitions are shaped by culture and ideology and errors can creep in. How can we show that a commonly held belief is false? Sometimes during the course I backed up my claims by drawing on personal experience, something I do all the time in this blog, but this seems to be bad philosophy, even though it seems to me that when making a claim, a person should find something to support it. The final problem I had with philosophy, as I have indicated above, is that often it was impossible to know if a philosopher was making a descriptive claim or a normative claim, something that bedevils systematisers.

I can illustrate these methodological issues with an example. One of the most influential philosophers of language, Paul Grice, proposed that 'successful' speech obeys something known as the Cooperative Principle, and more specifically that it should follow four maxims. The first, the Maxim of Quantity, is that a speaker should say no more and no less than is required to be informative. The second, the Maxim of Quality, is that the speaker should say only what she believes to be true and has evidence for. The third, the Maxim of Relation (or Relevance), is that the speaker's utterances should be relevant to the purposes of the current exchange. The final maxim, the Maxim of Manner, is that the speaker should be perspicuous. The first issue with Grice's theory is that it is difficult to know whether Grice is describing universal characteristics of speech or rather how one ought to conduct oneself during a conversational exchange; a second issue is that Grice supports his argument with simple invented examples of speech use, such as "There's a bull loose in the field!" We are supposed to simply nod our heads and agree that the Cooperative Principle is credible, either as a descriptive or a normative account of something that underlies language use, because the invented examples seem intuitively plausible. Grice's success seems to be based on the historical accident that the philosophers who read him and followed after did indeed nod their heads and say, "Well, that seems right to me." However, if you consider the language use people actually engage in, not just those functional speech acts intended to alert another person that a dangerous bull is about or to find out where in the South of France someone is living so the speaker can go and visit her, it becomes much more difficult to view the maxims as universal descriptions or prescriptions. Bob might start a conversation with a friend by saying, apropos of nothing, "I read something interesting about Barack Obama last night." 
Bob says this for no other reason than that he wants to talk about it and hopes that his addressee will find it interesting. The addressee may listen because she does indeed find it interesting, or just to be polite. It may make her think of something she has heard or read about someone else, and she will then take a turn telling a story. And so it goes on, each topic suggesting a subsequent topic, the conversation being something a little analogous to free association. In my view, a person maintains a kind of interior monologue most of the time, and most real-world conversations can be considered, and I know this sounds paradoxical, shared internal monologues. Such conversations, and I claim conversations like this form the bulk of actual human language use, violate the Maxim of Relevance, because the exchange has no purposes to which to be relevant, as well as the Maxim of Quantity. Additionally, we all know fabulists who often violate the second maxim, as does all literary fiction. It seems, then, that if we consider how we and others actually use language in the real world, Grice's maxims are not terribly successful at capturing the nature of real conversation. We might then say that Grice's maxims are not intended to describe actual conversation but rather are normative rules that should guide rational language use – and of course we all ought to be rational. But if almost all actual language use violates at least one of the maxims, and if we want to justify the maxims by saying that they undergird rational language use, then we are forced to say that most people most of the time are using language irrationally. Such an argument would, I think, be self-defeating: it seems self-contradictory to say that the Cooperative Principle is the rational principle behind all language use whilst conceding that most people most of the time do not obey it.

This is not to say that Grice's theory isn't important or that it might not have paved the way for theories closer to the truth. In some ways the Cooperative Principle does seem somewhat plausible to me. In the remainder of this essay, however, I wish to discuss four topics within modern philosophy, particularly as it comes out of American universities, with which I seriously disagree. The first is the attitude of modern philosophers to 'schizophrenia', the second is the credence many philosophers give to evolutionary psychology and evolutionary biology, the third concerns 'Decision Theory', and the fourth concerns the modern impasse in ethics. What unites these modern tendencies, or schools, within philosophy is a faith in rationality: they either assume that people are or ought to be rational, or they assume that when people are seemingly irrational there is a rational explanation. (The latter assumption crops up when philosophers discuss schizophrenia or invoke evolutionary psychology.) They all arise directly or indirectly from the systematising tradition.

I have noticed that Anglophone philosophers sometimes refer to schizophrenia although, in what I have read, they don't have the foggiest idea what it actually is. About three years ago a friend of mine emailed me a philosophical essay concerning Ian Hacking's theory of 'looping' (although not an essay written by Hacking himself) that used schizophrenia as an example. (I shall not define 'looping' here because the reader can look it up on the Internet.) In this essay, the authors imagine a young man, call him Charles, who one day, for no reason at all, starts hearing voices. He sees a psychiatrist who diagnoses him schizophrenic. Subsequently Charles learns that schizophrenics supposedly have trouble looking after themselves and starts overcompensating, going to such obsessive lengths to keep his home and clothes clean that he alienates his friends and acquaintances. I thought this essay was utterly idiotic, and after reading it I ended up in an argument with my friend via email that ultimately led to us falling out entirely. The essay was an attempt to 'rationally' explain looping, grounded in intuitions about schizophrenia that seem plausible only to people who have never had a real conversation with anyone unlucky enough to be diagnosed schizophrenic. For one thing, it makes the serious error of supposing that people start experiencing psychotic symptoms for no reason at all; I believe, in contrast, that there are always proximate causes for the emergence of psychosis. Furthermore, if the argument of the essay were correct, we would expect many people who have been diagnosed schizophrenic to start exhibiting behaviours that are the opposite of the textbook signs and symptoms of schizophrenia that psychiatrists look for. This does not happen. It seems to me, rather, that patients often pick up on the idea that they supposedly have difficulty with self-care and, against their will, begin exhibiting a corresponding pattern of behaviour. 
The essay writers, I suspect, had completely misunderstood Hacking's theory. (I note here that, although I have not yet read any of Hacking's books, Hacking himself seems to be a credible philosopher. He was influenced by Foucault and for a time held an important position at the Collège de France. He seems to belong more to the therapeutic than the systematising tradition in philosophy.)

If some philosophers have an interest in schizophrenia, it might be because modern systematising philosophy, particularly as it comes out of Anglo-American universities, is concerned with how people ought rationally to behave, and schizophrenia is shorthand for irrationality. If man is the "rational animal", schizophrenics, because they are supposedly irrational, are in a sense not human. There is thus, among some philosophers, a fascination with schizophrenia. However, as I suggested above, most philosophers simply know nothing about it. I would like to suggest now that psychosis may often result not from irrational beliefs but rather from reason misapplied, that schizophrenia is philosophy seen in a funhouse mirror, and that perhaps the reason philosophers are at once fascinated and repelled by schizophrenia is that there is in reality a kinship between the philosopher and the schizophrenic. To support this contention I will say something again about my first psychotic episode.

As I mentioned many years ago in the post "My First Psychotic Episode", in 2007 I decided that the world was ruled by a conspiracy of closet homosexuals and held onto this delusion for about seven months. This was not an entirely irrational delusion but rather resulted from a deductive argument I came up with in April of that year which I spelt out in the other post and will spell out again.

P1. The world is full of closet homosexuals.
P2. Closet homosexuals can recognise each other but heterosexuals usually cannot tell that someone is a closet homosexual.
P3. Like the Freemasons, closet homosexuals want to help each other up the social ladder.
C. Therefore, closet homosexuals percolate to the top or, in other words, the world is ruled by a conspiracy of closet homosexuals.

This argument is valid. But it is unsound. It is very likely that all three premises are wrong. In particular, the notion that the world is full of closet homosexuals is very hard to prove or disprove because closet homosexuals obviously don't disclose their homosexuality on census forms. A little later in the same year I formulated another argument that was, I think, mostly valid but also unsound. I decided that heterosexuality was being systematically weeded out of the gene pool. I can present the argument schematically as follows:

P1. There are two types of people in the world, heterosexuals and homosexuals.
P2. The reason homosexuals are homosexual is that they carry the gay gene.
P3. Selfish-gene theory, as first proposed by Richard Dawkins, is true, meaning that people who carry the gay gene want to outcompete and outproduce the people who do not carry the gay gene. 
P4. The previous argument is also correct, meaning that closet homosexuals have power.
C. Therefore, closet homosexuals are conspiring to have more children than heterosexuals and so weed heterosexuality out of the gene pool.

This argument is not quite valid because it seems to imply that homosexuals are deliberately conspiring to weed heterosexuality out of the gene pool, whereas evolutionary psychologists tend to argue that the biological imperatives supposedly given to us by evolution, of wanting to survive and produce as many offspring as possible, are deeply unconscious. However, this argument is not quite irrational either, because many of its premises are held to be true by many people. They are assumptions or shared intuitions, widely accepted even by philosophers, that should, I believe, be rejected. The first premise may seem plausible to many people – in 2007, and still today, it has been widely believed that there is a sharp dividing line between gay people and straight people. This shared intuition is quite wrong. There are in reality many bisexuals in the world, as I pointed out in the post "Childhood Trauma and Bisexuality". The second premise also seems plausible until one thinks about it critically. Many people still believe in a gay gene: in 2012, Nature actually published an article claiming that the gay gene had been discovered, a study that was later debunked but which (as I know from a YouTube clip I saw years ago) at least briefly swept up Richard Dawkins himself. In fact the idea of a gay gene is, on reflection, quite irrational, and more recent research has fairly conclusively shown that no such gene exists. Finally, Neo-Darwinism is accepted quite uncritically by many people but, as I have argued in a number of posts, does not stand up to serious scrutiny as a way of explaining human nature and the human mind. Yes, my conclusion that heterosexuality was being systematically weeded out of the gene pool seems quite insane, but the premises of the argument I formed are ones still widely accepted today because few people bother to think through all their implications. 
In order to repudiate the logic that led to me becoming intensely paranoid way back in 2007, I have had to reject these premises, these shared intuitions, something I did a long time ago.

This discussion leads me to another criticism I have of modern philosophy: the credence given to evolutionary psychology and evolutionary biology. My current position is, I think, fairly clear. Although I believe in evolution, I do not believe in evolutionary psychology and have argued against it in other posts such as "Threading the Needle". Although the people who criticise Neo-Darwinism are often Fundamentalist Christians, I am not a Young Earth Creationist myself; I am not even a Christian. I do not subscribe to any religion at all. It is unfortunate that people assume that if a person opposes Neo-Darwinism this must be motivated reasoning resulting from prior religious commitments. There are in fact some philosophers who oppose Neo-Darwinism but who do not subscribe to any religion, one example being the late Jerry Fodor. In What Darwin Got Wrong, published in 2010 and co-written with Massimo Piattelli-Palmarini, Fodor argued against natural selection on the grounds that too many phenotypic traits have no survival value and so could not have been selected for. Fodor's argument against the Modern Synthesis is different from mine, but there may be some truth to it – it is difficult for me to say because I have not read Fodor's book. Interestingly (and here I am quoting a New Yorker piece), Fodor said, "Neo-Darwinism is taken as axiomatic. It goes literally unquestioned. A view that looks to contradict it, either directly or by implication, is ipso facto rejected, however plausible it may otherwise seem." I would put it differently. Many philosophers assume that Neo-Darwinism is the most rational explanation for, it seems, just about everything; to oppose Neo-Darwinism is to be (ipso facto) irrational. It seems, at least according to the New Yorker piece, that Fodor was more or less ostracised by the philosophical community because he dared to question the Neo-Darwinist orthodoxy.

I can provide a couple of examples of the way evolutionary psychology and evolutionary biology have seeped into modern philosophy. In Reasons and Persons, published in 1984, Parfit discusses the bias towards the future that humans possess – we apparently care more about future pains and pleasures than past pains and pleasures. Parfit says that this bias seems to him irrational but that we cannot do anything about it because it was "given to us by evolution". It is a throwaway remark. Parfit, although an interesting writer, is not particularly adept at dealing with scientific issues: the problem with his saying that the bias towards the future was given to us by evolution is that no mechanism through which this occurs is proposed. How can a sequence of nucleotides on a chromosome bring about a bias towards the future in the minds of humans (and perhaps in some animals)? Until the scientists can explain how a bias towards the future can be genetically coded for, we are entitled to doubt Parfit's claim. In Faces in the Clouds, published in 1995, Stewart Guthrie argues that animism is genetic. His argument is that, in prehistory, the safest survival strategy was to interpret environmental stimuli, such as the rustling of branches, as the result of intentional activity. The rustling could always be an indicator that a sabre-toothed tiger was about and it is better to be safe than sorry. The problem is again one of mechanism. How can a string of nucleotides on a length of DNA cause people to believe that there are spirits in the rivers and mountains? There is a second problem with Guthrie's theory. I am not an animist, nor is anyone I know. If animism is indeed genetic, if it is coded into all our DNA, why are there so few animists left in the developed world? If animism is a genetic adaptation, a gene or collection of genes selected for by environmental pressures during prehistory, we should still all be animists today.

Another way evolutionary biology has seeped into modern philosophy concerns the definition of 'disease', something I encountered in a course last semester. Many philosophers of biology, such as Christopher Boorse, want to define disease objectively and naturalistically, as a failure of biological organs and systems to perform the functions given to them by evolution – that is, a disease is something that impedes a person's ability to survive and procreate optimally. One problem with this absolute faith in the explanatory power of evolution is that it becomes very difficult to explain why organisms undergo senescence and eventually die of old age, although some evolutionary biologists such as Bret Weinstein have made valiant efforts to account for senescence and inevitable biological failure. (I note here that although I disagree with Bret's view on covid vaccinations, he and his wife Heather Heying are still worth viewing on YouTube.) If we also consider animals like black widow spiders, whose males court females despite the risk of being cannibalised after mating, I think we are forced to conclude that if we have deeply unconscious motivations given to us by evolution, it must simply be to procreate, with survival merely being a means to this end. If this is so, if 'health' is simply the capability to produce as many offspring as possible and 'disease' is something that disrupts this telos, then we would be forced to define homosexuality, understood as most people understand it, as a disease, a mental illness. Moreover, we would have to define shyness, a willingness to use contraception, and the inclination to become a Catholic priest as all being symptomatic of mental illness. So this naturalistic definition fails because it encompasses many dispositions that most of us would not consider diseases. Consider also the fact that Elon Musk, who has described himself as having Autism Spectrum Disorder, has ten children. Autism Spectrum Disorder is often considered a disease, but Musk's 'mental illness' has not prevented him procreating far more than the average person. Thus the naturalistic definition of 'disease' is neither sufficient nor necessary and should be rejected.

One reason the naturalistic definition of disease became popular is that it arises out of the 'rational' systematising tradition within philosophy. Modern Anglo-American philosophers want to believe that the term 'disease' has a definite meaning and are arguing with each other about what this real meaning actually is. I believe, however, that Wittgenstein's 'family resemblance' theory of the meaning of words is more appropriate for the term 'disease'. The concept denoted by the word is loose, and its boundaries are constantly changing as society changes. (Recall that Wittgenstein's later work was more in the therapeutic than the systematising tradition.) The appeal of the naturalistic definition of 'disease' also stems from the blind faith philosophers and society generally have in 'evolution'. Many modern philosophers place the term 'evolution' at the centre of the system in the same way that medieval theologians placed 'God' at the centre of the system. 'Evolution' has become what Derrida called a 'transcendental signified' and is the heart of a new religion. To reject Neo-Darwinism today is to be a heretic. Furthermore, it is to be irrational. Yes, evolutionary psychologists say, people may sometimes behave irrationally, but they are obeying a deeper rationality, the imperatives "given to us by evolution".

I won't again spell out my own views on Neo-Darwinism here, but it is possible that science itself will end up refuting the claims of Richard Dawkins and his disciples. I can recommend several YouTube videos by Anton Petrov: "DNA Mutation and Evolution Are Not As Random As We Thought", "Mind Blowing Experiment Evolved Multicellular Life In Just 600 Days" and "Evidence for Unusual Chemistry and DNA Mutation Due to Quantum Effects". In particular, if we consider the last video (which describes how mutations usually seem to result from quantum tunnelling) in light of the idea that the supposed randomness of quantum mechanics may not really be random at all, it may lead us toward a better, though far spookier, theory of evolution than the Neo-Darwinism currently in vogue: evolution may result from what Rupert Sheldrake calls "top-down causation" rather than simply natural selection acting on chance mutation.

The next popular subfield within philosophy that I wish to critique is 'Decision Theory'. I am here drawing on the Stanford Encyclopedia of Philosophy article 'Decision Theory'. (I note that this article is complex, that the subject has mainly been explored by mathematicians and sophisticated logicians, and that the version of decision theory I shall present here is perhaps an oversimplification.) The basic idea behind decision theory, as I shall present it, is that we make decisions by choosing the options that maximise Expected Utility. When faced with a decision about whether to opt for A or B, I choose A over B because I believe that choosing A will make me happier, satisfy my desires more, or just generally be better than B. (In the theory the term 'utility' is left vague.) One issue that complicates the theory is that one cannot be absolutely confident that one's decision will result in one's preferred outcome, and so we need to bring in probability: when a person has more than one option, she should consider not only the outcomes that could result from her choice but the probabilities of those outcomes. Suppose Sue bets a dollar that Bruce will throw a head with a fair coin, in the expectation that if she wins Bruce will give her three dollars: her decision is rational because the expected utility is (1/2 × 3) − 1 = 1/2, a positive number. In scenarios such as this, scenarios involving gambling in which utility can be measured monetarily, the outcomes are easy to quantify, but it is much more difficult to quantify utility in most other situations; nevertheless, modern decision theorists seem to believe that we are somehow weighing up utilities and probabilities whenever we make decisions, that we assign cardinal rather than ordinal values to the different predicted outcomes. 
The logic seems to be as follows: humans are rational, Decision Theory is the most rational way to make decisions, so humans must be employing Decision Theory whenever we are choosing between different options. 
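The arithmetic behind Expected Utility is simple enough to sketch in a few lines of code. The following Python fragment is my own illustration, not anything from the Stanford article, and the lottery odds and prize in it are invented round numbers, not any official figures:

```python
def expected_utility(outcomes, cost):
    """Expected utility of a gamble: the sum of probability-weighted
    payoffs, minus the stake. `outcomes` is a list of
    (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes) - cost

# Sue's coin bet: stake $1, win $3 on heads (p = 1/2), nothing on tails.
coin_bet = expected_utility([(0.5, 3.0), (0.5, 0.0)], cost=1.0)  # 0.5

# An illustrative lottery ticket: a $1,000,000 prize at odds of one in
# ten million, for a $1 stake. The numbers are invented, but any real
# lottery has the same shape: the commission pays out less than it
# takes in, so the expected utility is negative.
lotto = expected_utility([(1e-7, 1_000_000.0)], cost=1.0)  # about -0.9
```

On this calculus the coin bet is 'rational' (positive expected utility) and the lottery ticket 'irrational' (negative), which is precisely the verdict I will question below.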

The issue with Decision Theory, as with the other doctrines I have discussed, is that it seems plausible only so long as we do not consider the real world and how real people actually behave. Our credence in it is again based on reasonable-seeming shared assumptions that are probably wrong. Consider gambling. Every week my mother buys a Lotto ticket hoping to win millions of dollars. However, the Lottery Commission makes more money from ticket sales than it doles out in prizes. The sum of each prize multiplied by the probability of winning it is always less than the cost of a ticket, so the Expected Utility is always negative; it is therefore irrational, on this theory, for my mother and hundreds of thousands of other New Zealanders to buy Lotto tickets. The same argument applies to casinos: in the long run the house always wins, so the expected utility of a night's gambling, measured monetarily, is always negative. It could be argued that buying a Lotto ticket enables a person to daydream about winning millions of dollars in the period before the draw, and that this is also a form of utility that should be entered as a positive in the ledger, making the ticket purchase a rational decision after all. Similarly, it could be argued that the excitement of gambling at a casino makes up for the fact that one will almost certainly leave with less money than one came in with. However, bringing in such considerations as a way to show that people are rational and employ Decision Theory when making such decisions seems suspiciously ad hoc.

Decision Theory has its roots in the theories of probability introduced by Pascal (famous for his eponymous wager), theories originally concerned with gambling. I have just shown that Decision Theory has trouble with the types of real-world gambling real people engage in. It fails completely when we consider all the real-world decisions we make where money is not involved, where the Expected Utilities are unquantifiable, where some of the outcomes are unknowable, and where we cannot even guess at the probabilities of those outcomes we can envisage. In 2006, I rocked up to bFM with the vague hope of resuscitating an old friendship and perhaps helping out at the station. I had no way of knowing what working at the station would be like, and not the slightest inkling that it would result in a serious psychotic episode and the derailing of my entire life. I did not contemplate this outcome at all, let alone assign a probabilistic value to it. In 2013, I reentered the Taylor Centre voluntarily with the intention of getting it on the record that I am straight and always have been. I did not envisage that early the next year I would be put under a Compulsory Treatment Order that I am still under today. When I look back on this pivotal decision, I feel that I was driven to take this step, that I had no other choice but to go back to the Taylor Centre and ask to see any psychiatrist other than Antony Fernando. I did not then understand psychiatry and psychiatrists as well as I do today, and even today I do not fully understand what psychiatric 'best practice' supposedly is. It feels as though when I made my 2013 decision I was being driven by Fate or some deeply subconscious impetus, what Georg Groddeck called "the it". Either way, my decision was not 'rational' and especially not an exercise in Decision Theory: I didn't imagine all the possible outcomes, determine each outcome's Expected Utility, and assign to each a probabilistic value. 
Drawing on my own experience in this way may seem like bad philosophy, but I would ask you, dear reader, to look critically at significant decisions you've made in your own life. Suppose you have a fairly satisfying job. How can you be sure that, by holding out a little longer and applying for other jobs, you couldn't have found one with better pay, fewer hours, and a more congenial work culture? Even when we consider the mundane decisions we make every day, it does not seem that we fully envisage all the potential outcomes of our actions and assign probabilities to them – even supposing it were possible for mere humans to assign numerical probabilities to all potential outcomes. If Decision Theory does not actually capture the way real people make decisions in the real world, then it seems to be an abstract project that, while interesting, is useless, devoid of any real-world application (although economists like to imagine that homo economicus does make decisions in this 'rational' way). 

Academic philosophers who like Decision Theory may do so because their lives have been unhindered ascents up the academic ladder, from undergraduate philosophy degrees to positions teaching philosophy at Harvard and Princeton; it may be that such people, people who have never suffered serious misfortunes, believe that they have achieved success as the result of a series of rationally calculated decisions, that they correctly estimated all the Expected Utilities and probabilities involved on every occasion, that there were never any potential Black Swan Events to derail them, and that this is why they find Decision Theory so attractive. I would however reiterate my opinion, an empirical claim based on my own life and I suspect yours as well, that most of our decisions are not rational in the way that Decision Theorists claim they are. Decision Theorists may seek to answer this objection by saying that their theory does not describe how people actually make decisions but rather how they ought to make decisions. We here find ourselves wading into the same aporia or ambiguity that I have suggested bedevils Grice's maxims: is Decision Theory descriptive or normative? A conundrum for Decision Theorists is the Allais paradox (which I will not synopsise here because the reader can look it up online). The Allais paradox strongly suggests either that people are not fully rational or that at least one of the axioms of traditional Decision Theory is wrong. Many philosophers, rather than abandon the Aristotelian notion that rationality is essential to human-ness, have sought to amend Decision Theory to save the idea that people are rational: because people are essentially rational, they argue, Decision Theory should be reformulated. I would suggest that they should simply bite the bullet and admit that people are not rational. 
And if we recognise that almost all our decisions are irrational, the claim that people ought to make decisions with respect to Expected Utility, ought to be rational, becomes harder to maintain. (It is also worth noting that we could ask those advocates of Decision Theory that are also fans of Evolutionary Psychology how we could expect human beings who evolved to live in caves and hunt megafauna to 'rationally' assess the lotteries involved in the Allais Paradox.)

It seems that modern philosophy, as it is inculcated into students by philosophy departments in anglophone universities, is principally concerned with how people ought rationally to behave. It is unsurprising then that although some parts of philosophy have been encroached upon by other disciplines, a core topic which remains firmly within philosophy is ethics, and a core sub-discipline is normative ethics (as opposed to practical ethics and meta-ethics, the latter of which I shall come back to in a moment). This is the next subfield within philosophy that I wish to discuss. The three main contenders within normative ethics are consequentialism (the descendant of classical utilitarianism), deontology, and virtue ethics. I won't define these three theories here because the reader can look them up online. The most popular of these three theories today, it seems to me, is consequentialism, the idea that we should act in such a way as to maximise positive outcomes. The question then arises: what do we mean by positive outcomes? Consequentialism divides into three subdivisions: the proposal that we should try to maximise happiness (known as Hedonism), the proposal that we should try to maximally satisfy desires (sometimes called Subjectivism), and the proposal that there are objectively valuable things that we ought to try to maximise (known as Objective List Theories). A famous example of a modern consequentialist is Sam Harris (although Harris, because his doctorate was in neuroscience rather than philosophy and because he is not an academic affiliated with Harvard or Princeton, is probably not taken seriously by professional philosophers). In The Moral Landscape, Harris argues that we should try to maximise 'well-being' but, as I understand it, never satisfactorily defines 'well-being', perhaps because he thinks the meaning of this term is obvious in the same way Parfit thought the meaning of the term "rational" was obvious.

One of the most important early consequentialists was Jeremy Bentham. Bentham's "fundamental axiom", first proposed in 1776, is that "it is the greatest happiness of the greatest number that is the measure of right and wrong." Note Bentham's use of the word "axiom": an axiom can be defined as "a statement that is so evident or well-established that it is accepted without controversy or question". The problem here is that both before Bentham and since there have also been deontologists and virtue ethicists, and so his "axiom" simply cannot be described as self-evident or as something that is accepted without controversy or question. Rather it is an intuition that fans of Bentham share and that opponents reject. Once again we are supposed to just nod our heads and agree, but not everyone has or does. Even within consequentialism there is insurmountable disagreement between Hedonists, Subjectivists, and Objective List Theorists, and no clear way to resolve this dispute.

If consequentialism is indeed, as I suspect, currently the most popular major theory within normative ethics, it may be because it dovetails so neatly with Decision Theory, and both fit so well with late capitalist consumerist society in which, increasingly, everything has a monetary value. Decision Theory says that we rationally ought to act in such a way as to maximise Expected Utility, and consequentialism simply adds to this the idea that we should try to maximise utility (say, for instance, happiness) in a general impersonal way. Parfit was a consequentialist although, at least in Reasons and Persons, he leaves open the question of which of the three types of utility we should seek to maximise. In Reasons and Persons, he advances a number of arguments in favour of impersonality and even argues, in the most important part of his book, that we should discard the idea of 'personal identity' altogether. The implication of this book is that we should simply seek to increase the happiness or desire satisfaction of all beings regardless of who these beings are or when during their existence they formed particular desires. Parfit's argument naturally leads us to consider whether we should also take into account the happiness or desires of non-human animals, although Parfit does not address this issue himself, perhaps because he believes in the Aristotelian definition and sees humans and non-human animals as quite distinct in that humans are (potentially) rational and animals not. It is not the purpose of this essay to discuss Reasons and Persons in detail, except that I will say that there must be a problem with Parfit's ostensibly rational argument if it leads to the conclusion that there are no such things as persons. It is an argument based entirely on 'reason' and sometimes outlandish thought experiments rather than on an understanding of real people and how real people actually behave in the real world.

How do adherents of different schools debate each other? How do they seek to bolster their own theories and debunk those of others? Normative ethics often proceeds through the use of thought experiments, a famous example being the Trolley Problem. The fact that most people would prefer to divert a trolley onto a track that has one person tied to it rather than allow it to run over five is taken by consequentialists as evidence that consequentialism is correct and deontology false, even though we can modify the details of the trolley problem, or invent alternative thought experiments, to elicit responses that seem to support deontology instead. The methodology is interesting. The use of thought experiments seems to be justified by an implicit assumption that there is a universal moral instinct, a moral instinct that is rational, unified, and coherent, which everyone shares, and which can be revealed through our responses to such thought experiments; the aim of normative ethicists is to show that this universal moral instinct aligns best with one of the three major ethical schools and, among consequentialists, with their preferred subdivision of consequentialism. This is similar to the way some philosophers have sought to establish the 'correct' definition of the term 'disease' as though its meaning exists objectively, independent of the way we have tended to use the word. However, the fact that no consensus in normative ethics has been reached after centuries of disagreement, that we seem still to be stuck in the same stalemate, strongly suggests that no rational universal moral sense exists.

This is not to say that all ethics is a waste of time. Meta-ethics is that part of ethics that assumes that people have a moral sense and seeks to establish what it fundamentally is. It asks: what does it mean to say some actions or people are 'good' and others 'bad'? It is concerned with what the term 'good' signifies; it is descriptive rather than normative. Among meta-ethicists there are some who believe that morality is real, objective, and universal; such meta-ethical theories provide a good foundation for the dialogue among normative ethicists described above, because such theories suppose either that there is a universal moral instinct or that somehow morality can be derived from reason or empirical facts. However there are other meta-ethical theories on which morality is subjective and relative. An example of an anti-realist meta-ethical theory is emotivism, a theory originally proposed by A.J. Ayer and elaborated by Charles Stevenson. According to this theory, ethical sentences express emotions. For instance, the sentence "Murder is wrong" simply expresses the speaker's distaste for or abhorrence of murder. If emotivism is true, this suggests that moral discourse need not have a rational basis. I might have been conditioned as a child not to lie, and so whenever I consider lying today I feel nauseous, even in situations where I will never be found out and where lying would have net positive consequences. If emotivism is true, the moral instinct not only might not be rational but might be different for every single person. Another example of an anti-realist meta-ethical theory is universal prescriptivism, a theory originally proposed by R.M. Hare. According to this theory, the sentence "Murder is wrong" should really be understood as an imperative: "Do not murder people!" If prescriptivism is true, it suggests that morality is basically a form of obedience to authority. 
The reason we recycle our plastic waste is not deontological – we do not rationally consider this action in the light of Kant's Categorical Imperative. Nor is it consequentialist – we do not rationally compare all the consequences of putting our plastic waste in the blue bin with all the consequences of not doing so. A Virtue Ethicist might say that we recycle because we want to view ourselves as conscientious environmentalists, and there is perhaps a little truth to this. But really we recycle because many of us have been brought up by our families to recycle and because we feel that we are expected to recycle by governments, family, and peers; if we fail to conform to the behaviours expected of us, the behaviours we 'ought' to exhibit, we feel, at the very least, a nagging sense of guilt.

It might be possible to sketch out a theory of what morality is rather than what it ought to be. The cells of a human body all have functions and all participate in the proper operation of the body. Similarly, perhaps, human society can be considered a type of meta-organism in which all humans participate in its proper functioning. Moral discourse in all its forms, whether exhortatory, legalistic, or in the form of gossip, works to encourage people to conform to others' expectations about what is appropriate for the particular positions within society they occupy and for their roles as generic citizens of the state. We obey the rules, rules often established by authorities but also sometimes emerging from our peers, because we are subunits of a larger superorganism – consider, for instance, the way Twitter mobs descend on people for minor infractions of political correctness. In this sense morality is indeed prescriptive. However we also internalise these rules, so that when we are exposed to someone who has broken a rule, or when we consider breaking a rule ourselves, we feel an emotional reaction. This sketch is imperfect because it seems to suggest that morality ultimately has no higher warrant than the interests of society itself, treated as something unitary, and so does not enable us to adjudicate between the moral claims of different societies, to explain how moral disagreement can arise among members of a particular society, or to account for the ways moral norms change over time. However it is not the purpose of this essay to explore this theory in detail. My intent here is simply to say that it is more useful and interesting to try to describe what morality is than to wrangle endlessly about what it should be.

In this essay, I have sought to show some problems with modern philosophy. I have diagnosed this condition as being the result of a resurgence of 'rational', 'systematising' philosophy. Such philosophy is defective because it does not clearly distinguish between norms and descriptions, and because it relies on shared intuitions that are often wrong. Grice's cooperative principle, although plausible-seeming, does not capture the nature of real language use in the real world. The descriptions of schizophrenia presented by some philosophers may seem plausible to people who know nothing about it or who rely wholly on the DSM, people who have never spoken to anyone diagnosed schizophrenic, but are usually complete fictions. Evolutionary psychology seems plausible because it arises from a simple theory, Neo-Darwinism, that philosophers assume to be axiomatic, even though experience of the real world, and increasingly science itself, show that evolutionary biology needs to be completely rethought. Decision Theory seems plausible because it just seems the most rational way to make decisions, even though real people in the real world never make decisions in the way Decision Theory prescribes. And finally, Normative Ethics is problematic because although each of the three main theories seems rational to some people, there is no way to adjudicate between the three, and none of the theories really captures the way people conduct themselves morally in the real world. The common theme of my criticisms is that these supposedly 'rational' theories are untethered from reality. We might consider the conspiracy theories I entertained for a period way back in 2007: those theories were in a sense 'rational' but were also untethered from reality. 

In this essay, I have perhaps made the same mistake Parfit made – I have used the word 'rational' repeatedly without defining it. I cannot propose a good definition here except to say that insofar as philosophical claims follow deductively, inductively, or abductively from shared intuitions, they are only rational if those shared intuitions are correct and grounded in reality, i.e. in a person's experience of the world and in those psychological, moral, and metaphysical facts that can be gathered from science, history, biography, and literature. Yes, even the most informed person can make mistakes, but the errors I have mentioned in this essay seem so flagrant that, if we say that rationality requires empirical support, these theories must in this sense be regarded as actually irrational.

I'll finish this essay by relating modern philosophy to the wider world. We seem to be living through a crisis of meaning-making. Conspiracy theories at least as crazy as the ones I entertained in 2007 are embraced by huge groups of people. Climate change denialism runs through society from the bottom to the top. Here in New Zealand the major party of the Left has rejected a capital gains tax even though such a tax is both sensible and necessary. There is a possibility that Trump could be reelected and even a possibility that he might end up running the United States while under house arrest in Mar-a-Lago. And in philosophy departments across the anglophone world, students are taught Decision Theory and consequentialism and theories of human nature derived from evolutionary psychology as if these theories were as factually grounded as chemistry, as if these theories were indeed all rational. Is this a sign of a disconnect between academic philosophy and the wider world, or is there somehow a connection? Perhaps people sense that what we accepted as true twenty years ago is now in doubt. Perhaps there is a sense that the modern return to systematising philosophy is hollow at its heart. People are scrambling to find alternative mental schemas through which to interpret the world, and there is a splintering associated with this crisis of meaning-making. I concede that in this blog I have often tried to show that many commonly shared intuitions are incorrect, and so this blog may be part of the problem rather than the solution. It is, for instance, conceivable that to argue that there is no such thing as free will is harmful even if it is true. It may be, though, that what will arise from the current crisis in meaning-making will be a new shared paradigm that is more true than what preceded it, more pragmatic, or both. There is a question about what role academic philosophy will play in the future. 
What I would like to see is a philosophy that builds on Postmodernism rather than rejecting it and retreating to older theories that have no relevance to the modern world. I want to see a philosophy that is concerned with the truth, whatever that is, rather than with vacuous argumentation. What I want to see is a philosophy that is genuinely rational rather than just 'rational'.

[A house-keeping note. Readers may have noticed that I have unpublished the previous post. This is not because I said anything untrue in it but because I was stepping on someone else's toes. An interesting problem related to autobiography is how a person can tell his or her life story without mentioning other people. This is something that may be worth discussing in a later post.]