Blanusa, M.L., López-Zurita, C.J. & Rasp, S. Internal variability plays a dominant role in global climate projections of temperature and precipitation extremes. Clim Dyn (2023). https://doi.org/10.1007/s00382-023-06664-3

It’s a paper on the relative uncertainties in predictions (not projections, as the title says, since scenarios are treated as one of the uncertainties) of extreme events arising from three sources: internal variability, model uncertainty and scenario uncertainty. The extreme events are once-in-a-decade daily precipitation or Tmax values, and the question is the predictability of the exact number of such events over a particular time window. Unsurprisingly, internal variability is a large component of this mix in the near future. For precipitation it remains a large component in many regions out to the end of the century. The intro says “If internal variability makes up a large fraction of the total variability, even a significant model improvement would only lead to a minor reduction in total uncertainty”. Since internal variability does constitute a large fraction of the variability, they conclude that there is less to be gained from reducing model uncertainty and that ensembles are important. I agree with the latter claim.

I have one major overarching criticism, but up front I’ll say this is a well-written paper, clearly expressed and illustrated, and it looks like a good deal of thought went into assessing sensitivity to the various choices made, at least within the boundaries set by the original question.

However, I’m not sure what to make of it all. Their general findings are obvious in a sense – if you want to have the most noise and the smallest signal, you look at the smallest possible area and a variable with inherently high variability like extreme rainfall. Then you say, no signal here! That doesn’t mean that there isn’t a clear signal when aggregating over a larger area (indeed, they find the relative balance of uncertainties changes this way if they aggregate in time), or asking a different question.

The introduction starts with an illustrative example of a decision maker who wants to know how many extreme events there will be in the coming decade and notes that “even in a stationary climate, the outcome is uncertain”. Well, yes, that is how it is. It then goes on to compare the situation to weather forecasting where natural variability, chaos, or what have you, will get you every time. I’m not sure that’s quite the right way to approach this problem.

This is one of those situations where one person’s noise is another’s signal. Internal variability isn’t necessarily “noise” or “uncertainty”. In this case, internal variability is, in some sense, an indicator of the risk. While you can’t predict exactly how many events will occur or when, you still know something about the distribution of events – its mean and standard deviation, for example – and that allows you to assess the risk of this type of event. Assessing risk doesn’t require predicting every single event, but it does require knowing something about their frequency. These frequencies and hence the risk might be shifted by climate change and so we are likely more interested in uncertainties in changes in the mean and standard deviation of that distribution (rather than the exact number of events). Therefore, model uncertainty and scenario uncertainty remain, but what was previously uncertainty due to internal variability has become one of the parameters of interest. It will still be uncertain, but the balance will be, in many cases, totally different.
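
To make the distinction concrete, here is a minimal sketch of my own (not anything from the paper): treat the number of exceedances of the current once-in-a-decade threshold over the coming decade as binomial. The exact count is irreducibly random, but its distribution, and hence the risk, is known, and what a decision maker mostly needs is how a forced change shifts the exceedance probability. The probabilities and the assumed shift below are purely illustrative.

```python
# A toy illustration of risk versus exact counts (my sketch, not the paper's).
# Each year independently exceeds today's once-in-a-decade threshold with
# probability p. The decadal count is unpredictable, but its distribution
# (and so the risk) is known; the "shifted" p is a hypothetical example of
# a climate-change signal in the event frequency.
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

years = 10
for label, p in [("stationary climate", 0.10), ("hypothetical shifted climate", 0.20)]:
    expected = years * p
    p_two_or_more = 1 - sum(binom_pmf(k, years, p) for k in (0, 1))
    print(f"{label}: expected events/decade = {expected:.1f}, "
          f"P(2 or more events) = {p_two_or_more:.2f}")
```

The point of the toy example is that the interesting uncertainty sits in the shift of the frequency (and in the model and scenario spread around that shift), not in which particular years the events land.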

In addition, if the decision maker is a government and the decision they have to make is how to set targets for greenhouse gas emissions (or what have you) then “scenario uncertainty” ceases to be an uncertainty, as it more properly reflects a choice: what future do we want? The choice of scenario could be informed by the change in risk for each scenario, which leaves only model uncertainty (and any other uncertainties associated with estimating that*).

Depending on the question asked, the relative roles of these different aspects of the problem and their relationship to what might be termed uncertainty can change. It also suggests that one should be wary about discounting the importance of reducing model uncertainty based on one use case as it is more pertinent for some questions than for others.

To be fair to this paper, the above criticism might have been levelled at the original paper on partitioning the uncertainty in this way (Hawkins and Sutton 2009). It’s perhaps more obvious here because individual extreme events are unpredictable on a time scale of days to weeks, let alone years to decades. While one might have a chance of predicting annual global mean temperature, there’s no hope of predicting individual local extremes, so why bother framing the question this way?
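
For readers who don’t know that framework: Hawkins and Sutton split the total variance of the projected quantity at each lead time into three additive terms and plot each as a fraction of the total. Schematically (my shorthand, not their exact notation):

```latex
% Schematic form of the Hawkins & Sutton (2009) partition (a paraphrase):
% internal variability I, model uncertainty M(t), scenario uncertainty S(t),
% with the "fraction of total variance" given by each term over T(t).
T(t) = I + M(t) + S(t), \qquad
F_I(t) = \frac{I}{T(t)}, \quad
F_M(t) = \frac{M(t)}{T(t)}, \quad
F_S(t) = \frac{S(t)}{T(t)}
```

It is the relative size of these fractions that the criticism above is about: which term counts as signal and which as noise depends on the question being asked.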

The Hawkins and Sutton paper did look at another important factor, spatial aggregation, and the scale at which you do this matters. While the mayor of a city might be interested in the risk for that particular city, there are decision makers who will be focusing on larger areas – water catchments, states, districts, countries, etc. – or on some kind of distributed problem – power generation, distribution, railways, infrastructure, etc. – in sectors which can be affected by extremes but where we can’t ignore the spatial aspect of the problem. I presume that there is some spatial correlation in these kinds of events, which means that risks are correlated. To what extent that correlation affects parameters relevant to the kind of decisions that need to be made is also important. Take a large enough region and some of that noise might even get averaged out: a one-in-ten-years event at a point might be a once-a-year event somewhere in a whole country. If, on the other hand, there’s strong spatial correlation, even a once-in-a-hundred-years frequency might be too frequent for planning purposes, because it could mean that every single location gets hit by a one-in-a-hundred-year event at the same time. Again, it comes down to what the precise question is.
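
A minimal sketch of that aggregation point, under assumptions I am making up for illustration (100 sites, a 0.1 per-year exceedance probability at each, and either fully independent or fully shared exceedances):

```python
# Toy illustration (mine, not the paper's analysis) of how spatial correlation
# changes aggregate risk. Each of 100 sites sees a one-in-ten-year event with
# probability 0.1 per year. In one case exceedances are independent across
# sites; in the other they are driven entirely by a single shared regional
# driver, so all sites are hit together or not at all.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_years, p = 100, 100_000, 0.1

independent = rng.random((n_years, n_sites)) < p
shared = (rng.random((n_years, 1)) < p) & np.ones((1, n_sites), dtype=bool)

for label, hits in [("independent", independent), ("fully correlated", shared)]:
    counts = hits.sum(axis=1)  # number of sites hit in each year
    print(f"{label}: mean sites hit/yr = {counts.mean():.1f}, "
          f"P(at least 1 site hit) = {(counts >= 1).mean():.2f}, "
          f"P(all 100 sites hit) = {(counts == n_sites).mean():.3f}")
```

Both cases have the same per-site risk, but the aggregate picture is completely different: with independence something is hit somewhere almost every year and the regional total is fairly predictable, while with strong correlation events are rarer but hit everywhere at once, which is exactly the situation a planner cannot average away.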

By asking a relatively narrow question, the authors got a result that doesn’t generalise usefully. Qualitatively, this is a criticism that could be levelled at most studies, but the question posed here is rather a specific one and perhaps of limited interest.

-fin-

* the original partition doesn’t quite cover everything
