I don’t travel well. Last week, I travelled badly to Asheville in North Carolina for the fourth JCOMM Workshop on Advances in Marine Climatology – CLIMAR IV. CLIMAR workshops alternate with MARCDAT (Workshop on Advances in the Use of Historical Marine Climate Data) workshops so there’s a marine climatology meeting about once every three years.
This was the fourth of these workshops that I’ve been to and it was hugely enjoyable. There’s something simultaneously relaxing and stimulating about being surrounded by people who understand and care about the minutiae of marine data.
My particular area of interest is sea-surface temperature data sets. Progress in this area has been rapid. Three years ago, the community was just starting to get to grips with the imprint of changing measurement methods (and their associated systematic errors) on the global SST record in the post-World War II period. Now, there are at least four analyses, at varying stages of development, that deal with this. Unsurprisingly, the analyses that have got as far as generating a fully adjusted data set differ in the exact nature of the estimated systematic errors. The differences aren’t large in the grand scheme of things, but they are significant (in the lay sense) to those who care about marine data. More work is needed to disentangle what this means and to gain a clearer understanding of how measurements were made and what effect that had on observed SSTs.
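To make the measurement-method problem concrete, here is a toy sketch of the basic idea behind a bias adjustment. The bias values are invented purely for illustration; real analyses estimate them (with uncertainties) from metadata, physical models and comparisons between measurement methods, and that estimation is exactly where the data sets differ.

```python
# Toy sketch of a method-dependent bias adjustment for SST observations.
# The bias values below are invented for illustration only.
ILLUSTRATIVE_BIAS = {
    "bucket": -0.3,         # uninsulated buckets cool by evaporation
    "engine_intake": +0.2,  # engine-room intakes tend to run warm
    "hull_sensor": 0.0,
}

def adjust_sst(observed_sst, method):
    """Remove the assumed systematic error for a given measurement method."""
    return observed_sst - ILLUSTRATIVE_BIAS[method]

obs = [(18.2, "bucket"), (18.9, "engine_intake"), (18.5, "hull_sensor")]
print([adjust_sst(t, m) for t, m in obs])  # [18.5, 18.7, 18.5]
```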
A lot of work has been done on estimating the uncertainties in the data sets and characterising the kinds of errors one finds in in situ measurements. This kind of work is important for making good decisions about how to build a robust, ‘healthy’ observing network and for understanding past climate change. The metrics currently used to assess the health of the observing network tend to rely on counting the number of drifting buoys deployed, or observations made, but these are, in a sense, proxies for the more fundamental metric of how accurately we can measure various marine parameters. By the current metrics, the observing system has been ‘improving’ over the past decade. However, for many variables, such as marine air temperature or humidity, uncertainty has actually increased over the same period, due largely to the decline of the Voluntary Observing Ship fleet.
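As a rough illustration of the difference between a count-based metric and an uncertainty-based one, here is a minimal sketch (with invented error values) of the standard error of a grid-box average. Independent random errors average down with the number of observations, but a shared residual bias does not, which is one reason a raw observation count can overstate the health of the network.

```python
import math

def gridbox_standard_error(n_obs, sigma_random, sigma_bias=0.0):
    """Standard error of a grid-box mean: independent random measurement
    errors average down as 1/sqrt(n); a bias common to all observations
    in the box does not."""
    return math.sqrt(sigma_random**2 / n_obs + sigma_bias**2)

# Invented numbers: many low-error observations versus a few noisy,
# biased ones. More observations is not automatically lower uncertainty.
print(gridbox_standard_error(n_obs=100, sigma_random=0.2))                # ~0.02
print(gridbox_standard_error(n_obs=5, sigma_random=1.0, sigma_bias=0.3))  # ~0.54
```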
The way that uncertainties are presented has also been developing. A number of groups have explored the use of ensembles of data sets. These take various forms: parameter-perturbation experiments, in which poorly constrained parameters get waggled around (to give a range of bias adjustments, say); or, where statistical reconstruction techniques have been used to fill data gaps, sampling from the posterior probability distribution to give a set of realistic SST fields. There was a really neat example of the latter, in which a seasonal forecast system was initialised using an ensemble of initial SST conditions drawn from the posterior distribution. The hindcasts performed better than an equivalent set initialised using lagged ensembles.
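As a schematic of the posterior-sampling approach, here is a minimal sketch using an invented two-grid-point ‘field’ and covariance. In a real analysis the reconstruction would supply the mean state and error covariance, and each draw would be a complete, physically plausible SST field.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented posterior for a tiny two-grid-point "field": neighbouring
# points have correlated errors, as they would in a real reconstruction.
posterior_mean = np.array([18.5, 19.1])   # degrees Celsius
posterior_cov = np.array([[0.04, 0.03],
                          [0.03, 0.04]])

# Each row is one ensemble member; the spread across members represents
# the analysis uncertainty. An ensemble like this could, for example,
# initialise a set of seasonal hindcasts.
ensemble = rng.multivariate_normal(posterior_mean, posterior_cov, size=10)

print(ensemble.mean(axis=0))  # close to the posterior mean
print(ensemble.std(axis=0))   # close to sqrt(0.04) = 0.2 at each point
```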
The use of ensembles divides opinion. Some (myself included) think it’s a great idea, particularly for dealing with weird error structures. Others are more sceptical of the approach; the most common question is: which member of the ensemble is best? A similar split appears when discussing differences between data sets (between two SST data sets, for example). Some see differences as something to be eliminated. Others see them as symptomatic of ‘structural’ uncertainty. Either way, we currently see interesting differences between SST data sets that we need to understand.