On the credibility of extreme event attribution

An interesting paper – Processes and principles for producing credible climate change attribution messages: lessons from Australia and New Zealand – from the antipodean masters of the art looking at some of the issues with turning Extreme Event Attribution (EEA) into an operational service. It’s beautifully written, clear, concise, well illustrated, and provides lots of food for thought.

The paper considers the whole process from when an Event hits a trigger through to communication. Importantly, they consider not just the weather, but also the impacts. There’s a nice1 flow chart that sets out a standardised procedure, which they argue is necessary for some kind of objective assessment. In particular, the triggers should not be chosen in such a way that they favour a particular outcome2.

They also helpfully adumbrate the purposes of EEA messages:

  1. to raise awareness to a general audience about the effects climate change is already having
  2. to inform analysis of economic risk and the social cost of carbon
  3. to inform decisions about building climate resilience; and
  4. to address questions of climate change liability, Loss and Damage (L&D) and climate justice

Impact attribution has the further uses:

  1. Infrastructure and planning – including implications for building codes, building design
  2. Human health – adaptation measures and health care to manage impacts of heat and other extremes, e.g., ambulance providers can use an estimate of how many additional ambulances were required during a heatwave or fire event for planning future operations to meet their legislated standard of service.
  3. Emergency response – scope and scale required to address emergencies, e.g., firefighting including national and international resourcing.
  4. Biodiversity, socioeconomic and flow-on or secondary impacts (e.g., vector-borne diseases)

I’m always a little bit sceptical of such lists. It often feels like number 1 – awareness – is the one that matters in practice, while the rest, arguably far more important, can all exist without anyone doing an EEA study. Some of the machinery is the same, but the timeliness requirements are generally much looser, and most of these proposed applications would benefit from a more rounded, holistic view than is allowed by the laser focus on a single happening. Also, I’d rather bad things didn’t have to happen before we think about how to deal with them.

They also suggest the regular use of causal networks to help map out the relationships between climate change, variability, preconditions, proximate causes, meteorological circumstances and the final event. I always find these diagrams helpful for seeing the assumptions3 laid out clearly. In particular, it’s good to see which connections are more or less certain4. This is important for getting the communication right. All EEA statements are conditioned upon something, but some are more contingent than others. In the “storyline” approach, for example, one calculates things conditioned upon the specifics of the synoptic situation – the “weather” in a sense. This has the advantage of fixing the bit that is most uncertain – the specific dynamics of the events – but that’s also its disadvantage. Whether such conditions are made more or less common by climate change is also important. A lot of thought goes into this aspect, with pros and cons of using different approaches, but the key thing to remember is that it all depends on what question you are asking, and the method you choose limits what questions you can ask. It’s a bit circular.
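Such a network amounts to a labelled directed graph. A minimal sketch in Python – the node names, edges and confidence labels here are my invention for illustration, not taken from the paper:

```python
# A toy EEA causal network as a labelled directed graph.
# Edges map (cause, effect) pairs to a confidence in that link;
# all names and labels are illustrative only.
causal_network = {
    ("external_forcing", "climate_change"): "high",
    ("climate_variability", "synoptic_situation"): "high",
    ("climate_change", "synoptic_situation"): "low",
    ("climate_change", "preconditions"): "medium",
    ("preconditions", "event"): "medium",
    ("synoptic_situation", "event"): "high",
    ("event", "impacts"): "medium",
}

def upstream(node, edges):
    """Return the direct causal parents of a node."""
    return {src for (src, dst) in edges if dst == node}

print(sorted(upstream("event", causal_network)))
# → ['preconditions', 'synoptic_situation']
```

Conditioning on the synoptic situation, as the storyline approach does, amounts to fixing that node and asking only about the links that remain upstream and downstream of it.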

It would have been helpful to see a plausible causal network for a really complex situation like a poorly forecast high-impact compound extreme event which included other human drivers of the impacts e.g. flood defences (or lack thereof) and issues of preparedness, early warning failure, and so on.

The other thing that potentially limits the question you can ask is the counterfactuals considered in the process. Traditionally, these include running things with and without human influence. The world without human influences is the key counterfactual, but even the simulated worlds with human influences are counterfactual. They help to fill out the probabilities of different things occurring or not. There’s an asymmetry here, which is that we only observe the real world and, more importantly, triggers only happen in the real world. I’ve talked about this before.

We only study events that have happened, but not events that almost happened, or didn’t happen. To understand the full range of impacts and their relationship to human activities (and liability, effectiveness of action, and all those other things) we need a wider range of studies, including studies triggered by events that happened in counterfactual worlds. I know that might sound a little weird, but if a particular set of weather conditions would have led to a severe cold spell in a world without greenhouse gases, in a world with them there might not have been a cold spell of sufficient severity to trigger a study. If we want to understand the effects of climate change it is useful to know about these missing events.
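The asymmetry can be made concrete with a toy simulation (all numbers invented): shift a distribution of cold-spell severities by a fixed warming and count how many study-triggering events occur in each world.

```python
import random

random.seed(1)

# Toy model: winter cold-spell severity in two worlds (arbitrary units).
# The factual world is the counterfactual one shifted warmer by +2.
N = 10_000
counterfactual = [random.gauss(0.0, 1.0) for _ in range(N)]
factual = [x + 2.0 for x in counterfactual]

TRIGGER = -2.0  # a cold spell severe enough to trigger an EEA study

cf_events = sum(1 for x in counterfactual if x < TRIGGER)
f_events = sum(1 for x in factual if x < TRIGGER)

# Cold spells that would have triggered a study in the counterfactual
# world but never occur (and so are never studied) in the real one.
missing = cf_events - f_events
print(cf_events, f_events, missing)
```

In a world warmed this much, almost none of the counterfactual cold spells survive to trigger a study, so the attribution literature never sees them.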

Similarly, it would be useful to have another counterfactual, which is the world without climate action to date. As emissions reduce, it will be helpful to see what we have avoided. The counterfactual world without climate influence is an unattainable one. We left it long ago and will likely never return. Quite what we learn by comparing the world as it is now to the humanless (or at least human-influence-less) planet is an interesting question. An informal quiz of people suggests that there is an expectation that climate action will somehow return us to this state. It won’t.

Whether such considerations fit neatly within the proposed schema, I don’t know. Little is said, as I mentioned before, about the data and model runs that feed this kind of process.

Another important aspect is the gap between what can be said and what people are expecting to hear. While the paper talks at length about getting the communication and caveats right, a lot of reporting and discussion of EEA boils down to “was it us?” The first things to go are always the caveats. There is a question about whether results are “good” enough to be communicated and whether communicating them will do more harm than good (in some sense). I feel like a final step is needed, which is a review of the whole process after each use, lessons learned, and so on.

When it comes to the communication, the authors take guidance from epidemiology, where the FAR (Fraction of Attributable Risk) approach to EEA originated. They consider it to be a useful analogy still, but it has its limitations. For one, the example that smoking causes cancer (which is a binary outcome) is rather different from whether rainfall at a particular station exceeded a particular value (which is only binary by virtue of an arbitrarily chosen marker on a continuum of rainfall totals).
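The FAR itself is a one-liner: if p0 is the probability of exceeding the chosen threshold in the counterfactual world and p1 the probability in the factual one, then FAR = 1 - p0/p1. A minimal sketch, with the 1-in-100 and 1-in-25 probabilities invented for illustration:

```python
def far(p0: float, p1: float) -> float:
    """Fraction of Attributable Risk: FAR = 1 - p0/p1, where p0 is the
    counterfactual exceedance probability and p1 the factual one."""
    return 1.0 - p0 / p1

def risk_ratio(p0: float, p1: float) -> float:
    """Probability (risk) ratio p1/p0, the other common EEA summary."""
    return p1 / p0

# Invented numbers: a rainfall threshold exceeded with probability
# 1-in-100 per year without human influence, 1-in-25 with it.
p0, p1 = 0.01, 0.04
print(far(p0, p1))         # → 0.75: three quarters of the risk attributable
print(risk_ratio(p0, p1))  # → 4.0: the event is four times as likely
```

Both summaries inherit the arbitrariness of the threshold: move the marker on the rainfall continuum and p0, p1, and hence the FAR, all change.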

Finally, the central assumption on which EEA is based is that climate can be decomposed usefully into discrete Events. I’m not sure that this is the case.

-fin-

  1. There are some confusing features: the process is triggered somehow, but data considerations don’t appear till step 3. I feel like external dependencies are an important part of this kind of process, so it would have been nice to see these things expressed explicitly. ↩︎
  2. The issue is whether later considerations undo all the careful work at this stage. Sure, one can have triggers for all kinds of events, but if later considerations (such as complexity and lack of confidence) rule out communication then one might get the same outcome anyway, particularly if one takes into consideration that only events that happen can be triggers. Subjectivity can hide behind objectivity, popping out at the last moment. ↩︎
  3. One assumption, for example, that leaps out of the diagrams in the paper is that climate change and variability are starting points (I don’t know the correct graph theory term) neatly separated from each other. Someone else might be tempted to start with external forcings (or even prior to that, why not – the ultimate cause of rising CO2 concentrations is that political action hasn’t yet managed to stop them) and connect those to both climate change and climate variability. Of course, one has to simplify at some point to get an answer and this is a fairly common simplification. ↩︎
  4. One might quibble that confidence is shown in two different ways in the graphs, which are handled quite differently, reflecting different depths of uncertainty. ↩︎
