Unveiling the unveiled

A paper by Storto and Yang claims that “Acceleration of the ocean warming from 1961 to 2022 unveiled by large-ensemble reanalyses”, which is a bit odd given that acceleration of ocean warming is a fairly robust result. Still, an interesting paper. I still think that ocean warming is accelerating, but now I feel less certain about how that happened. Also, I have questions1.

The paper presents the results of a large-ensemble ocean reanalysis. It’s not what I’d call large (32 members) except that these things can be large (lots of data, huge resource cost) and small (statistically speaking, or counting on the fingers of four octopuses) at the same time by different measures. The ensemble is designed to explore some key uncertainties in the generation process with the aim of reassessing “the [Ocean Heat Content] trends and their uncertainty as estimated by reanalyses, compare them with objective analyses, and draft a hierarchy of sources of uncertainty”. The name of this ensemble is CIGAR – a magnificently contrived acronym from “Cnr Ismar Global historicAl Reanalysis”.

The estimated ocean warming and acceleration come out very similar to the consolidated GCOS analysis described in von Schuckmann et al. 2020. However, the series are visually quite different, so the agreement on warming rates seems to be rather coincidental. They both warm overall, but CIGAR has a period up to 2002/3 with relatively little warming followed by a jump and then rapid warming thereafter. The recent warming in the Argo period is much higher than in the GCOS estimate.

The extent to which the two are exactly comparable isn’t clear. The GCOS estimate is for areas deeper than 300m between 60S and 60N. No such restriction is mentioned in this paper, so there are perhaps colocation issues, particularly given the high rates of warming in the Arctic Ocean north of 60N and the large opposing trends in the Southern Ocean south of 60S. Differences in the global mean larger than the combined uncertainty estimates would suggest a significant and irreconcilable discrepancy but, if they used the 2020 GCOS analysis, the uncertainties are considerably smaller than in more recent updates, e.g. von Schuckmann et al. 2023.

They explore some of the analysis choices in a series of experiments, e.g. using climatology as a background field, using simulated fields as a background, changing the assimilation window, and fiddling with the horizontal correlation scales. Some of these experiments look more like the GCOS analysis than others, but there are definitely quite a lot of operator degrees of freedom. The uncertainty in CIGAR is larger than for the GCOS analysis. It’s never clear how one should interpret such differences given the meta-uncertainties. This is a more “structured” ensemble than the GCOS ensemble of opportunity, but it’s still relatively small.

Comparison is tricky because only the summaries are shown. It would be nice in these situations to see individual ensemble members of both the CIGAR and GCOS ensembles. The mean ± standard deviation approach can cover a multitude of sins, with the ensemble mean averaging out a lot of interesting and/or idiosyncratic behaviour. See e.g. differences between ocean analyses on the Met Office dashboard. Some of these look more like CIGAR, some look much less like CIGAR. The extent to which a common underlying dataset (EN) drives these similarities is an interesting question.
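To make that concrete, here’s a toy illustration (entirely synthetic numbers, nothing to do with CIGAR or GCOS output): two ensembles can have identical mean ± standard deviation envelopes at every timestep while their individual members behave completely differently.

```python
import statistics

N_MEMBERS, N_TIMES = 8, 20

# Ensemble A: coherent members -- a common warming trend plus a fixed
# per-member offset (purely made-up numbers).
offsets = [0.25 * m for m in range(N_MEMBERS)]
ens_a = [[0.05 * t + off for t in range(N_TIMES)] for off in offsets]

# Ensemble B: the same set of values at every timestep, but member
# identity is rotated per timestep, so individual trajectories jump about.
ens_b = [[ens_a[(m + t) % N_MEMBERS][t] for t in range(N_TIMES)]
         for m in range(N_MEMBERS)]

def summary(ens, t):
    """Mean and standard deviation across members at time t."""
    col = [member[t] for member in ens]
    return statistics.mean(col), statistics.stdev(col)

# The mean +/- sd envelopes agree exactly at every timestep...
summaries_match = all(summary(ens_a, t) == summary(ens_b, t)
                      for t in range(N_TIMES))

# ...but the member "trends" (last minus first value) tell very
# different stories: all ~0.95 in A, jumping between ~1.70 and ~-0.30 in B.
trends_a = [m[-1] - m[0] for m in ens_a]
trends_b = [m[-1] - m[0] for m in ens_b]
print(summaries_match, min(trends_b), max(trends_b))
```

Plot the mean ± sd of either ensemble and you get the same picture; only the members reveal that one set of trajectories is coherent and the other isn’t.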

One thing concerned me in their ensemble set-up (the only thing I could credibly be said to have a relevant opinion on). They use two very old SST data sets: HadISST1 and COBE-SST-1. Both of these data sets are known to underestimate long-term warming and both rely on bias adjustments that are one or two generations behind the latest versions. The underestimate of long-term warming is partly due to the old bias adjustments but also (probably) due to analysis choices, with both damping variability back to the prior in data-sparse regions. In the Southern Ocean, one must also bear in mind that HadISST has distinctly unrealistic variability prior to the inclusion of satellite data in the early 1980s. There, the HadISST field is essentially a repeating pattern combined with whatever variability comes from the sea-ice reconstruction which is, itself, based on very little actual information in the pre-satellite era. In this respect, neither choice for the ensemble is a good one. COBE-SST-2 would be considerably better than COBE-SST-1, but still not cutting edge. HadISST2 (if you can get hold of it) would be better than HadISST1. Neither SST dataset corresponds to the one used in their atmospheric forcing dataset, which comes from ERA5 (a combination of HadISST2 and OSTIA, I think). Quite what this does to the reanalysis, I do not know.

In their hierarchy of uncertainty, two things come out and are highlighted in the abstract:

The uncertainty of regional trends is mostly affected by observation calibration (especially at high latitudes), and sea surface temperature data uncertainty (especially at low latitudes)

Observation calibration relates to how the sub-surface observations are bias-adjusted, and a significant spread from this makes sense. Differences here ought to map onto differences between the “objective” analyses that go into the GCOS estimate, another case where a more detailed intercomparison could have been helpful. Sea surface temperature data uncertainty relates to the choice of SST dataset (as mentioned above), so SST choice is clearly important for some regions. How certain this hierarchy is, I don’t have a feel for. With 32 ensemble members, splitting the dataset in two (for example, to compare two SST datasets) leaves 16 members in each group for estimating the variance, which seems a relatively small number.
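For a rough feel of why 16 members is on the small side, here’s a quick Monte Carlo sketch (synthetic Gaussian draws, nothing from the paper) of how much a 16-member spread estimate wobbles around the true value.

```python
import random
import statistics

random.seed(42)

TRUE_SD = 1.0      # assumed "true" ensemble spread (arbitrary units)
N_MEMBERS = 16     # one half of a 32-member ensemble
N_TRIALS = 10_000

# Repeatedly draw a 16-member "half ensemble" and estimate its spread.
estimates = sorted(
    statistics.stdev(random.gauss(0.0, TRUE_SD) for _ in range(N_MEMBERS))
    for _ in range(N_TRIALS)
)
lo = estimates[int(0.025 * N_TRIALS)]
hi = estimates[int(0.975 * N_TRIALS)]
print(f"95% of 16-member spread estimates lie in [{lo:.2f}, {hi:.2f}]")
```

For Gaussian draws the 95% range works out at roughly 0.65 to 1.35 times the true spread, i.e. a 16-member estimate of the variance contribution can easily be a third too big or too small just by luck.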

Interestingly, they perform a data reduction experiment, dropping out progressively more and more of the Argo data – 50%, 75% and even 90% reductions. They say this leads to insignificant differences in ocean warming and acceleration relative to the full analysis. This is interesting and suggests we could do without 90% of the Argo network2.
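A crude sketch of the intuition: when a field is densely sampled, even withholding 90% of the observations leaves enough to pin down the large-scale mean. These are synthetic numbers with no connection to the real Argo array or the assimilation system.

```python
import random
import statistics

random.seed(7)

# Stand-in for an Argo-era network: many "profiles" sampling a field
# with true mean 0.5 and spread 1.0 (entirely made up).
profiles = [random.gauss(0.5, 1.0) for _ in range(10_000)]
full_mean = statistics.mean(profiles)

sub_means = {}
for withheld in (0.5, 0.75, 0.9):
    # Randomly withhold a fraction of profiles, as in their experiment.
    keep = random.sample(profiles, int(len(profiles) * (1 - withheld)))
    sub_means[withheld] = statistics.mean(keep)
    print(f"{withheld:.0%} withheld: mean {sub_means[withheld]:+.3f} "
          f"vs full {full_mean:+.3f}")
```

Of course, this only speaks to the well-mixed global mean; regional detail and variability are another matter, which is presumably why the network is the size it is.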

-fin-

  1. As usual, when I have questions, I rarely know what to do with the answers. So, this is more of an open-ended wonderingment on the general subject – no paper answers all questions, which is fine – and not an “I demand answers now!” type of thing. ↩︎
  2. Joke, please don’t do this. ↩︎

