Rational Irrationality and the 'Paradox' of Climate Change

What if Global Temperatures Rose by 4 Degrees Celsius?
Mathew Humphrey


In 2006 Dale Jamieson published a paper in Climatic Change entitled “An American Paradox.” The paradox in question concerned the attitudes of the American public to global warming. Most Americans, it seemed, self-reported as environmentalists, believed that climate change was an environmental bad, and said that they were willing to pay to mitigate it. When, however, specific policies with definite costs were placed before them, their support for mitigation faded away.1


I agree with Jamieson that this is, on the surface, a paradox. The propositions that citizens are (a) willing to pay to mitigate climate change and (b) not willing to pay for any specific measure to mitigate climate change cannot both be true simultaneously. The paradox may be resolvable if citizens favor some other, unspoken policy to tackle climate change that has not been presented to them in survey questions. For the reasons we will see below, however, this seems unlikely.

As Jamieson notes, this paradox is not confined to America; similar findings have been reported in the UK,2 with one commentator describing the phenomenon as “cheap talk environmentalism.”3 Leiserowitz reports that Americans seem concerned about global warming but, “paradoxically,” also view it as less important than almost any other issue. Sterman and Sweeney likewise find a “contradiction” in people’s attitudes to climate change: they express concern but then oppose policies designed to mitigate its effects. Pidgeon et al. report a UK survey in which 69 percent of respondents thought climate change best managed through behavioral change, but in which only 8 percent thought that responsibility for this change lies with “individuals and families.” Lorenzoni et al. report wide awareness of climate change in the UK, although the public gives it a low priority, and Bazerman notes that whilst climate change is almost impossible to ignore, given its media profile, both politicians and the public are failing to respond.4

A common thread connects the views being put forward here. Climate change now has a high public profile, being featured in many media stories, and highlighted by films such as Al Gore’s An Inconvenient Truth and Hollywood’s The Day After Tomorrow. Populations report awareness of it, and tend to express concern about it in the abstract. Nonetheless, in terms of behavioral change, or in their willingness to pay for specific mitigating policies, most people are reluctant to make concrete commitments.

How can we explain this? The existing literature offers a number of sophisticated explanations that stress the kinds of cognitive and judgemental errors people make when they rely on rough-and-ready heuristics to reach decisions. There are what Bazerman calls “predictable errors” in the way that people process information.5 People make mistakes, and what renders climate change particularly difficult to deal with is that its problem-structure tends to bring multiple sources of error into play simultaneously. The systematic biases people exhibit all but ensure that they will demand the wrong amount of climate change mitigation. The existing literature focuses on the associative and affective side of human motivation, as distinguished from our analytical/rational abilities. The latter are to some extent channelled by our initial affective response when we make calculations about climate change.

In this paper I will explore the argument that people make errors about climate change mitigation because of irrationality and systematic bias. The next section offers an account of the systematic bias explanation. Rather than seek to challenge that explanation, my argument will be that it can be placed upon a rational foundation. Ignorance about climate change, coupled with strong views that “something should be done” and a contrary unwillingness to pay for mitigation, is in fact a good example of rational irrationality.6 That is, if we can accept the notion that irrationality can itself have a rational basis, we can at least partially explain the “paradox” of climate change using a rational actor model.

The Associative/Affective Account

Explanations of public attitudes to climate change tend to focus on the way in which people process information, and the “down and dirty” heuristics that we tend to use in order to save time and mental effort. Our responses to phenomena are often intuitive, and Jonathan Baron makes the point that “people’s intuitions about matters of public policy often lead them to favor policies that are sub-optimal due to systematic biases in turn caused by our common heuristics.”7 These include “naturalism” (we will tolerate bad outcomes from natural causes but not from human causes, even when the costs are the same), “the polluter pays” (people prefer that a company clean up its own waste, even if the same resources could be spent cleaning up more dangerous waste from a now defunct company), “undoing” (a related but different principle: undo damage done rather than spend those resources on doing something else, even where that “something else” brings more benefit), and “parochialism” (the tendency to favor a group that people see as including themselves over helping strangers). Experimental results, on Baron’s account, show that with respect to global warming, people prefer undoing, are parochial, and show some degree of naturalism.

People may also be prone to wishful thinking when estimating the future effects of climate change.8 Max Bazerman has a different list of cognitive heuristics leading to predictable error with respect to climate change: a predilection for “single cause” frames (we look to identify and treat a single root of what are likely multi-causal problems), our “positive illusions” (which lead us to underestimate problem severity), egocentrism (meaning we interpret events in a self-serving manner), a status quo bias, and our tendency to respond only to problems that are experienced directly or through vivid data.9 Furthermore, we operate with excessively high discount rates for future harms, many politicians have incentives to ignore climate change, and those with entrenched fossil-fuel interests have incentives to obfuscate the issues.10

Accounts of other defective heuristics abound in the literature. For example, fatalism (what will be, will be); blame shifting and scapegoating; “pattern matching” (this lies behind the finding that people often believe, falsely, that stabilizing emissions will quickly stabilize the climate); initial evidence effects (initial evidence gathered either for or against something then biases the reception of evidence pointing in the other direction); and “single-action bias” (people are excessively reassured by taking a single action against a problem, however complex that problem may be).11 Furthermore people do not update their beliefs in the light of new information (violating Bayesian rationality), and they hold strong beliefs on the basis of scant evidence.12

The overall message of this research is clear. Our thinking about climate change is distorted, in that we do not make a rational and analytic assessment of its likely costs (especially to distant others), its long-term effects, the most cost-effective way of counteracting it, the extent to which we should effectively “insure” ourselves against non-linear environmental change, or the relevant balance between mitigation and adaptation. It seems, then, that we are doomed to respond irrationally to climate change, which entails that the position of the median voter on this issue is likely to be far from the optimal point. The systematic nature of the biases we show is important here. If the biases were random, we might expect the median voter to be close to the well-informed position. This is the “wisdom of crowds”:13 a large group of people asked to guess the weight of an object will tend to be close to accurate at the median, with a normal distribution around that point. We do not tend systematically to under- or over-estimate such quantities, unless some kind of optical illusion is involved.
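The statistical point can be illustrated with a toy simulation (my own sketch; the numbers are illustrative, not drawn from the text): when errors are random and zero-mean, the crowd’s median sits near the truth, but a shared systematic bias drags the median with it.

```python
import random

random.seed(42)
TRUE_WEIGHT = 100.0  # the quantity the crowd is asked to estimate
N = 10_001           # odd crowd size, so the median is a single guess

def median(xs):
    return sorted(xs)[len(xs) // 2]

# Random, unbiased errors: each guess is the truth plus zero-mean noise.
random_guesses = [TRUE_WEIGHT + random.gauss(0, 15) for _ in range(N)]

# Systematic bias: the same noise, but every guess is dragged downward,
# as if an optical illusion (or a shared heuristic) skewed the whole crowd.
biased_guesses = [TRUE_WEIGHT - 20 + random.gauss(0, 15) for _ in range(N)]

print(f"median with random errors:   {median(random_guesses):.1f}")  # near 100
print(f"median with systematic bias: {median(biased_guesses):.1f}")  # near 80
```

Random error washes out at the median; systematic bias does not, which is why the median voter cannot be assumed to sit near the well-informed position.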

Rational Irrationality and Climate Change

Economists, in particular, do not like to attribute social outcomes to irrationality, and similarly dislike the idea that biases are systematic. This is especially true of the “rational expectations” school of economic thought. One problem is that the assumption of irrationality appears as a kind of deus ex machina of social explanation: potentially it could be used to “explain” almost any suboptimal social outcome, with deviations from the optimum simply explained by deviations from rationality. Isn’t it better if you can explain outcomes without this assumption, given that we know that most people are rational (i.e., hold beliefs for adequate reasons) most of the time? Every time someone drives his car he puts his life in the hands of the rationality of many other people; we do not expect others to hold the wrong belief about which side of the road to drive upon. As Caplan points out, rational expectations, as another kind of heuristic device, “makes a lot of sense”: “Who has not said something like ‘As price goes up, sellers increase their production?’ Yet this elementary claim assumes that objective facts and subjective beliefs about price move in the same direction.”14 A claim of irrationality suggests, alternatively, that objective facts and subjective beliefs come apart, such that sellers would interpret rising prices as falling prices. Perhaps one seller might do that, but would many? If so, it would pay you to enter this market.

We should not, however, be too quick to assume that all information mirrors that available to sellers in markets; that information is (on certain assumptions) both immediate and directly relevant to the interests of the seller. Information about climatic change is not directly relevant to the interests of the citizen in the same way. This is true even if, ultimately, climate change proved to have utterly disastrous consequences for the citizen: that would only show that the effects of climate change were directly relevant, not information about it. The problem here is that even in the light of accurate information no individual citizen can make any difference to the outcome, as no single citizen can alter the path of climate change through his vote or by changing his pattern of consumption. The collective action problem applies here, in that the effects of n people and of n−1 people on the progress of climate change are indistinguishable.

This lack of individual efficacy relates to a phenomenon slightly more familiar than rational irrationality, that of rational ignorance. Perhaps the foremost exponents of the rational ignorance hypothesis were Joseph Schumpeter and Anthony Downs.15 A succinct account is offered in Capitalism, Socialism, and Democracy in relation to politics: “…the typical citizen drops down to a lower level of performance as soon as he enters the political field. He argues and analyzes in a way which he would readily recognize as infantile within the sphere of his real interests. He becomes a primitive again. His thinking becomes associative and affective.”16

Schumpeter goes on to assert that this low level of performance is all the more shocking when observed amongst the well-educated, professional class, and that it persists even when good information is plentifully available. We should not be shocked when even these people are ignorant of events which do not directly affect them (his favored example is foreign policy), as the incentives for them to gain good information are simply lacking.17 Rational ignorance is a marginal phenomenon: for any additional unit of time, there is (almost) always a better use for it than gaining information that will be of no consequence to one’s direct interests. Climate change fits very well as the kind of problem about which we should expect rational ignorance. Although it may have profound consequences, it is relatively long-term, with widely dispersed and myriad points of emission, often deeply embedded in the socio-economic structure of societies (e.g., power generation), and with global and unevenly distributed consequences. No individual can expect to affect the outcome by changing his behavior, and so all are embedded in a “tragedy of the commons” type of collective action problem.

Rational ignorance cannot, however, on its own do the work required to explain the “paradox” of attitudes to climate change. We might predict from rational ignorance that people would be poorly informed about climate change, and this expectation seems to be confirmed by a number of surveys. To take one example, many (though not all) surveys show that significant numbers of people believe that a major driver of climate change is the hole in the ozone layer.18 The rational ignorance hypothesis would not, however, lead you to expect the additional finding that people express a strong view that we should “do something” about climate change in the abstract, but are then unwilling to back it up when offered specific, and to varying degrees costly, policy proposals. To put this another way, we should not expect systematic bias about climate change as a result of rational ignorance, but rather some random distribution of views, with some holding it to be a serious problem and some not.19

Caplan notes that if ignorance were the only problem, then “sufficiently large doses of information would be a cognitive panacea.”20 This has to be finessed somewhat; even freely available information may not be absorbed by the rationally ignorant, as assimilating it would require mental energy that might be better spent elsewhere. That quibble aside, rational ignorance in and of itself gives us no reason to anticipate any particular shape to people’s biases or views on a topic. People may hold no definite view, or have strong views one way or another, on any given topic. What we can expect is that whatever views people do have are likely to be ill-informed, but we can deduce no more than that.

For Caplan, in order to understand whether a systematic bias, such as we have seen with respect to beliefs about climate change, is rationally irrational, we have to consider the costs of being mistaken. Individuals are taken to have a standard demand curve for irrationality: when it is cheap they will demand more of it, and when it is expensive they will demand less. How expensive irrationality will be is a function of the nature of the decision being taken and the expected impact of the outcome on the agent. Rational irrationality and rational ignorance map onto each other here, as it is generally over a subset of the problems for which we can expect rational ignorance that we can also expect rational irrationality, although they remain distinct phenomena. We can, recall, expect rational ignorance where the pay-off for gaining good information (or carefully processing information that is freely available) is very small. We should not, however, necessarily expect this to translate into not having a view, possibly a very strong and passionately held belief, on the topic at hand. There is no necessary correlation between information and opinion, and Caplan’s insight is that when the costs of being wrong about something are zero or close to it, there is also little to prevent the articulation of strong but ill-informed views. This is all “irrationality” means in this context: the holding of definite beliefs with inadequate justification. If one believes that one’s life-threatening illness is best cured by sitting under a crystal and chanting, one can expect to pay a high cost for that belief. If, however, one is convinced that climate change is caused by the thinning of stratospheric ozone, or indeed that the science of climate change is part of some elaborate anti-American hoax, such a belief is unlikely to impact negatively upon day-to-day life, at least for most people.
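Caplan’s demand-curve idea can be put in schematic form. The following sketch is my own toy formalization (the function and numbers are assumptions for illustration, not Caplan’s model): an agent indulges a comforting but ill-founded belief whenever its psychic pay-off exceeds the expected material cost of being wrong.

```python
def holds_irrational_belief(psychic_benefit: float,
                            prob_belief_governs_outcome: float,
                            cost_if_wrong: float) -> bool:
    """Toy model: keep the irrational belief only while its psychic pay-off
    exceeds the expected material cost of acting on it."""
    expected_cost = prob_belief_governs_outcome * cost_if_wrong
    return psychic_benefit > expected_cost

# A belief about climate policy: one citizen's view is (almost) never
# decisive, so even a huge cost-if-wrong has a tiny expected cost.
print(holds_irrational_belief(1.0, 1e-8, 10_000.0))  # True: irrationality is cheap

# A belief about treating one's own life-threatening illness: the belief
# is certain to govern the outcome, so the expected cost is the full cost.
print(holds_irrational_belief(1.0, 1.0, 10_000.0))   # False: irrationality is dear
```

The same small psychic pay-off buys irrationality in the first case but not the second, which is the asymmetry between political beliefs and beliefs about one’s own medical treatment described above.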

Even if, however, irrationality is cheap, why believe something rather than nothing? Why say anything at all, or hold beliefs and preferences, about an issue that has no direct impact on one’s life?21 This is a reasonable question, to which there are two possible responses, one of which concedes more than the other to the implied objection. On the concessionary side, much of the “evidence” with regard to public attitudes to climate change is gathered in response to direct questions via surveys. It may be, as Sunstein and Jamieson both suggest,22 that many people actually have no stable or well-thought-through preferences in these areas in the absence of being asked such a direct question. When faced with a question from the lips of a surveyor, they might feel that some sort of response is appropriate, and so articulate a view that is more or less made up on the spot. Empirically, information about such unformed preferences is difficult to access, although questions relating to problem salience may gain some traction on whether people actually think about an issue prior to being asked about it.

The non-concessionary answer is that people have preferences over the content and intensity of their own beliefs.23 We want certain statements about the world to be true, and when error costs are low irrationality is cheap; it would take only a small pay-off to encourage citizens to “consume” some. If there is any further reward in terms of social interaction in having views on political matters, or particular kudos in having “strong” views, then this may be sufficient to give irrationally held beliefs a warrant of rationality.24 It is intuitively plausible to think that this is often the case. Big political questions are often the subject of heated discussion, and in the field of social interaction, having a view on climate change, immigration, or the wars in Iraq and Afghanistan will, presumably, often lead to exchanges that participants find enjoyable. The level of enjoyment, it is worth remembering, would not have to be far above zero to pay off. This social interaction may even act as a constraint of sorts on the degree of irrationality that can be expressed without loss. A view that climate change is caused by aliens may get short shrift in most settings, and so irrationality may be bounded by some notion of plausibility if it is to be rewarded.

When it comes to the paradox of climate change, the notion of rational irrationality provides a potential explanation. When people are asked in the abstract whether climate change is important and whether “something should be done” about it, the question operates in the costless sphere in which it is rational (or at least not irrational) to be irrational. Many people will therefore feel comfortable answering in strong terms, which may involve demanding costly action, as the majority seem to,25 or taking the extreme opposing view and denying the existence of climate change in the face of the scientific evidence. The systematic nature of the biases expressed comes from the kind of heuristic short-cuts discussed in the cognitive literature on attitudes to climate change. When, however, respondents are presented with specific and costed policy options, the question ceases to operate in the realm of costless abstraction and comes down to the concrete field where costs matter and where the price of irrationality increases significantly. Will you give up your car? Will you cut down on the number of flights you take? Would you pay more for your petrol? Would you install a solar panel and wind generator? These specific questions have direct consequences, and although it would still be costless to answer “yes, I would give my car up” and simply be insincere, people appear actually to take the costs on board and offer different answers to this kind of relatively concrete question.26 At this point the rationality of irrationality fades away, and the costs and benefits of any action become much more salient. What is not clear, at least from the evidence I have seen, is what specific reasoning might underpin the refusal to accept costs. It may be that people appreciate the scale of the coordination problem here, and resist becoming “suckers” in the language of game theory. Why give up my car, or take fewer flights myself, if this will make no difference to the overall pattern of climate change?
Even die-hard campaigners such as George Monbiot recognize this logic: “What is the point of cycling into town when the rest of the world is thundering past in monster trucks? By refusing to own a car, I have simply given up my road space to someone who drives a hungrier model than I would have bought. Why pay for double-glazing when the supermarkets are heating the pavement with the hot air blowers above their doors?”27

Alternatively, it may be that people are following what Ingolfur Blühdorn calls “ecological correctness”28 in suggesting that climate change is serious and worthy of immediate costly action. This could be a form of words they feel duty-bound to espouse, albeit with little enthusiasm or commitment, which is easy to do while the articulation of the view remains costless. Again, however, when costs become clearer and the pay-off matrix changes, we might expect people to express a different view. An appropriately designed survey may give us some indication as to why those who respond positively to taking action against climate change are reluctant to accept specific costs, but that is beyond the scope of this essay. What I am trying to establish is a possible explanation for this phenomenon that turns on the “price” of irrationality.

The Implications of Rational Irrationality With Respect to Climate Change

It is important to note that nothing in this analysis impinges on the question of whether governments should take preventative action with respect to climate change. That is a question about collective rationality at the global level, about how best to balance competing demands on scarce resources and in particular the extent to which we should effectively “insure” ourselves and future generations against non-linear catastrophic environmental change.29 The question under discussion here is one of individual rationality, and one of the particular problems for individuals with respect to climate change is that they are trapped in a nested tragedy of the commons (it operates at the level of the individual, the institution, and the nation-state, and all can have strong incentives to defect from cooperative solutions). Thus the claim about irrationality is not a claim that it would be irrational for society as a whole to invest large amounts in climate change abatement; for the purposes of this paper I remain agnostic on that question. What would be irrational, on a standard game-theoretic account of rationality, is for any individual to accept a cost when that cost would have no impact on the process of global warming because other people cannot reasonably be expected to play their part as well. There can be exceptions to this, if one can reasonably expect “exemplar” effects, in which others eventually follow the lead of a moral exemplar. Accepting costs might then be rational, or necessary in order to avoid accusations of hypocrisy on the part of those trying to get others to change their ways. As Monbiot’s plea above implies, however, moral leadership effects do not seem to be strong with respect to climate change.
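The individual-level logic can be shown with a minimal commons payoff (my own illustrative arithmetic, not the paper’s model): each abater’s benefit is diluted across everyone, while the cost stays private, so abatement can be collectively worthwhile yet individually irrational.

```python
def marginal_payoff_of_abating(benefit_per_abater: float,
                               n_agents: int,
                               private_cost: float) -> float:
    """Toy commons model: my abatement creates benefit_per_abater units of
    good shared evenly by all n_agents, while its cost falls on me alone."""
    return benefit_per_abater / n_agents - private_cost

# Each act of abatement yields 50 units of benefit for a private cost of 10,
# so it is socially worthwhile. In a group of two, abating still pays me...
print(marginal_payoff_of_abating(50.0, 2, 10.0))          # 15.0

# ...but with the benefit diluted across a million agents, defection dominates.
print(marginal_payoff_of_abating(50.0, 1_000_000, 10.0))  # about -10.0
```

This is why, on the standard game-theoretic account, no individual should accept the cost unless others can be expected to play their part too.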

This distinction between social and individual levels may also help to explain the paradox in some cases. As Sagoff noted some time ago,30 people may switch between citizen-thinking and consumer-thinking in response to different problems. Perhaps when asked about climate change in the abstract, people frame their responses in a collective manner. It would make sense for us to pay a high price to mitigate climate change. Asked, however, if one would give up one’s car, the response switches to the individual level. Why make my life harder? What benefits would that bring?

This putative solution, however, could only work for some types of questions. In others, people are clearly asked about the impact of taxes designed to deliver the collective-level good. If they understand the question correctly, they should see it in terms of their contribution to a collective outcome that they have already strongly endorsed; the respondent does not now have to fear the free-riding of others. In these cases the paradox appears to remain, and here rational irrationality provides a possible explanation. When the cost of the initial response becomes apparent, the price of irrationality goes up, and people want less of it.

Again, it is important not to over-simplify. We have hypothesised a degree of rational ignorance with respect to climate change,31 and this entails that people are unlikely to be well-informed about its consequences. A number of the features we have noted come together here: in particular, the apparently distant nature of the effects of climate change, the complex causal processes intervening between CO2 emissions, climate, and weather events, disagreements amongst “experts,”32 and the incentives of vested interests to send out confusing signals.33 This entails that, even if systematic biases are leading to skewed demands for climate change abatement, inserting costs into the equation may change the outcome without necessarily improving its rationality, as a known cost is now being set against unknown benefits. People may balk at an additional $1,000 a year in taxes to abate climate change, having previously endorsed strong action, but this may well be just a function of the change in the question frame, rather than any rational cost-benefit analysis. Here a definite $1,000 cost is being set against something likely to be completely unquantified: the benefits to that individual of climate change mitigation. The insertion of concrete costs does add to the amount of information available to the respondent, but in an asymmetrical fashion. It does not show that the changed response is in itself more appropriate to the problem, as the respondent is very unlikely to know what benefits that $1,000 climate change tax would confer.

The notion of rational irrationality may then offer an explanation for the “simulative politics” of climate change,34 in that it leads us to expect people to hold strong and systematically biased views on the basis of inadequate justification when the cost of being wrong is low, and to expect a more measured view when the cost of being irrational is higher. Thus people demand “strong” action against climate change in the abstract, but shun concrete policy proposals that have costs attached. What we cannot deduce, however, is that the answer that is no longer infected by rational irrationality is itself more rational. Rational irrationality operates under the shadow of rational ignorance, and so the extent to which the changed response could be said to be more rational is dependent upon the total change in the stock of information drawn on by the respondent. Merely adding a concrete cost to mitigation, without exploring the costs of climate change itself, just adds a different form of asymmetry to the calculation.

Furthermore, whether the initial response is skewed in favor of demanding “too much” climate change mitigation will depend upon the effects of the systematic biases listed above. Many of these (such as status quo bias, over-optimism, parochialism, pattern-matching, and single-action bias) suggest that too little will be demanded, and this may also help to explain the low costs that people are actually willing to incur. On the other hand, if Caplan is right and people are systematically over-pessimistic (one assumes the claims of systematic optimism and systematic pessimism cannot both be true simultaneously), then we might well demand too much.

Rational irrationality may then offer a partial explanation of the paradox of climate change, in that it offers an account of why responses change when up-front costs are included in the information given to respondents, without assuming that people are just irrational per se. People may have good reasons for bad reasoning. What it does not show is that the initial responses with regard to climate change abatement are themselves wrong (I may hold a true belief for bad reasons) or that the “with costs” responses are more rational. This, however, offers an interesting avenue for further research.


Mathew Humphrey is Reader in Political Philosophy at the University of Nottingham. His publications include Preservation versus the People? Nature, Humanity, and Political Philosophy (Oxford: Oxford University Press, 2002) and Ecological Politics and Democratic Theory (London: Routledge, 2007).