Will we reach 1.5°C in 2023 or 2°C on Tuesday?

The short answer to this question, at least if we’re thinking in terms of the Paris Agreement, is no. The Paris Agreement limits of 1.5°C and 2°C above pre-industrial are typically considered to be defined in terms of long-term averages – say 20 years, as used by the IPCC – so a single year (never mind a single day) can’t exceed the threshold no matter how hard it tries.

A slightly, but decisively, different question is whether the global annual average temperature in 2023 will be 1.5°C or more above the pre-industrial average. To answer this question, we need to rewrite it slightly and replace “pre-industrial” with “1850 to 1900”. We have global temperature datasets going back to 1850, but not to truly pre-industrial times. If you want to know what the difference between the two is, see Hawkins et al. 2017 (tl;dr: not much, and uncertain).

The answer to this question is maybe.

First, let’s start with an even easier problem though.

Global temperatures in 2023 relative to a more sensible baseline

Figure 1 shows monthly global temperatures relative to the 1981-2010 average1. To calculate this, we consider one month at a time and take the average anomaly for that month between 1981-2010 and subtract it from that month in all years i.e. for January we calculate the average anomaly of Januarys between 1981 and 2010 and then subtract that average from all Januarys in the dataset. We do this separately for each month and each data set.
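In code, that rebaselining step might look something like the minimal sketch below. It assumes the monthly global-mean anomalies sit in a pandas DataFrame with a DatetimeIndex and one column per dataset; that layout, and the function name, are mine, not any provider’s format.

```python
import pandas as pd

def rebaseline_monthly(df: pd.DataFrame, start: int = 1981, end: int = 2010) -> pd.DataFrame:
    """Subtract the start-end climatology from each calendar month separately.

    `df` is assumed to hold monthly global-mean anomalies with a DatetimeIndex
    and one column per dataset (a hypothetical layout).
    """
    base = df[(df.index.year >= start) & (df.index.year <= end)]
    # Average anomaly for each calendar month over the baseline period...
    climatology = base.groupby(base.index.month).mean()
    # ...subtracted from every occurrence of that calendar month in the record.
    return df - climatology.reindex(df.index.month).to_numpy()

# e.g. rebaselined = rebaseline_monthly(monthly_anomalies)
```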

Figure 1: (top) Monthly global average temperatures relative to a 1981-2010 baseline from seven listed datasets (January 2014-October 2023). (bottom) Monthly global mean temperature anomalies (relative to 1981-2010); the x-axis runs from January to December. Each year is colour coded according to the period as indicated in the diagram. 2023 is shown in red. There is one line per dataset and seven datasets, as in the top panel. Note, the two plots have very different y-scales.

The datasets are closely clustered showing that we have a pretty good idea of how global temperatures have changed since 1981. There are lots of weather stations, buoys, and ships2 during this period. Still, you can see that different datasets give us slightly different answers (Figure 1). The differences tell us something about the uncertainty. What exactly it tells us is harder to say.

Global temperature in 2023 has increased rapidly at the same time as La Niña has given way to El Niño3. Monthly temperatures started off relatively cool (within the context of the warmest decade on record) but increased rapidly, with every month from June through to October being a clear record i.e. a record in all of the seven datasets I consider here. These seven datasets are shown in the table (note how often HadSST4 appears in the righthand column; it’ll come up later):

Dataset               | Start year | SST dataset
HadCRUT.5.1.0.0       | 1850       | HadSST4
NOAAGlobalTemp v5.1   | 1850       | ERSSTv5
Berkeley Earth        | 1850       | HadSST4
Kadow et al.          | 1850       | HadSST4
GISTEMP               | 1880       | ERSSTv5
ERA5                  | 1940       | HadISST2/OSTIA
JRA-55                | 1958       | COBE-SST

The average of the monthly global means for 2023 has already put 2023 ahead of the annual averages for 2016 and 2020, the joint warmest years on record. 2016 was the previous record holder in every dataset but GISTEMP. The spread between the annual averages (Figure 2) is a bit smaller than that for monthly averages and certainly smaller than the long-term changes in the data. Again, this is telling us something about the uncertainty.

Figure 2: Annual global mean temperature anomaly from 1850 to 2023 relative to the 1981-2010 baseline. Each dataset uses its own 1981-2010 baseline.

There are a few ways we can quantify the spread between the datasets, whose 2023 values range from 0.64 to 0.74°C above the 1981-2010 average (at the time of writing, some data sets run to September, others to October). We could use the range (0.1°C), or something based on the standard deviation (0.03°C). There are fancier methods to combine information from the datasets, but it’s not clear that they give a better answer4 so I eschew them here.
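As a concrete (if trivial) illustration of those two spread measures, here is a short snippet with made-up 2023 values spread evenly across the quoted 0.64-0.74°C interval; the numbers are illustrative only, not the actual dataset values.

```python
import numpy as np

# Illustrative 2023 anomalies vs 1981-2010 for seven datasets (made-up values
# spread evenly across the quoted 0.64-0.74 °C interval).
anom_2023 = np.linspace(0.64, 0.74, 7)

spread_range = anom_2023.max() - anom_2023.min()  # 0.10 °C
spread_sd = anom_2023.std(ddof=1)                 # a few hundredths of a degree

print(f"range: {spread_range:.2f} °C, standard deviation: {spread_sd:.2f} °C")
```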

We have a range of estimates, so for any year, there will be a spread of estimates of what the global mean temperature is. In a sense, this is our best case. Moving to a 1991-2020 baseline makes little difference: the range and standard deviation are both the same to two decimal places.

Moving to an 1850-1900 baseline

The next step is to shift the baseline backwards to 1850-1900. There are a number of ways we could approach this.

Method 1 – individual baselines

We could calculate a separate offset for each data set, derived from that data set itself. The problem is that this excludes three of the data sets we were using because they don’t go back to 1850, but that’s fine. If we do that, we get Figure 3.

Figure 3: Global annual mean temperature anomalies relative to 1850-1900. Each dataset has been rebaselined using its own 1850-1900 average.
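In code, Method 1 might look something like the sketch below, again assuming annual means indexed by year with one column per dataset; the layout is mine, and datasets without full 1850-1900 coverage are simply dropped rather than given a partial baseline.

```python
import pandas as pd

def rebaseline_1850_1900(annual: pd.DataFrame) -> pd.DataFrame:
    """Method 1: subtract each dataset's own 1850-1900 mean.

    `annual` is assumed to hold annual-mean anomalies indexed by year with one
    column per dataset (hypothetical layout). Datasets that don't cover
    1850-1900 (GISTEMP, ERA5, JRA-55) are excluded.
    """
    early = annual.loc[1850:1900]
    covered = [col for col in early.columns if early[col].notna().all()]
    return annual[covered] - early[covered].mean()
```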

Some things to note:

  1. The agreement between datasets looks surprisingly good over 1850-1900. This is because the datasets are all fixed to have the same mean (zero) over the period 1850-1900.
  2. The spread between datasets is much wider for 2023.
  3. Much of that spread is systematic. In the 2000s, it looks like an offset between datasets. This arises because the datasets have different amounts of long-term warming. These differences are caused by how the datasets handle things like systematic errors in the measurements, or filling long-term gaps in the data.
  4. There is some “bunching”. The NOAAGlobalTemp dataset seems to be off on its own, whereas the other three datasets form a little cluster. This separation suggests that a key uncertainty is how to handle systematic errors in sea-surface temperature measurements. NOAAGlobalTemp uses one method, and the other three data sets use another (see the Table of datasets).
  5. Some datasets are close to 1.5°C above the 1850-1900 average in 2023.
  6. Some datasets are far away from 1.5°C above the 1850-1900 average in 2023.

Method 2 – pick your favourite offset

Rather than allow each data set to use its own baseline, we could choose to use a single baseline calculated from one of the datasets and then align all the other datasets to it in the modern period. For example, we could choose the Berkeley Earth dataset and calculate the difference between 1850-1900 and 1981-2010 and then apply that offset to anomalies relative to 1981-2010 in all the different datasets.
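A sketch of Method 2, under the same assumed layout as before: take the 1850-1900 to 1981-2010 difference from one chosen dataset and add it to everyone’s 1981-2010 anomalies.

```python
import pandas as pd

def apply_single_offset(annual_8110: pd.DataFrame, reference: str) -> pd.DataFrame:
    """Method 2: one offset, taken from `reference`, applied to all datasets.

    `annual_8110` is assumed to hold annual anomalies relative to 1981-2010;
    `reference` names the column to compute the offset from, e.g. "Berkeley Earth".
    """
    ref = annual_8110[reference]
    offset = ref.loc[1981:2010].mean() - ref.loc[1850:1900].mean()
    # Anomalies relative to 1981-2010 plus the chosen offset give anomalies
    # relative to 1850-1900, for every dataset.
    return annual_8110 + offset
```

Swapping the reference dataset is all it takes to move between the two panels of Figure 4, which is rather the point.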

An obvious problem with that is that the answer then depends very much on which dataset you choose (Figure 4). If we use NOAAGlobalTemp as the baseline (top) then 2023 is well below 1.5°C, and if we use Berkeley Earth as the baseline (bottom) the range for 2023 overlaps with 1.5°C.

Figure 4: Comparison of global mean temperatures using an 1850-1900 to 1981-2010 offset calculated from NOAA Interim (top) and from Berkeley Earth (bottom). The y-axes are the same in both plots for ease of comparison.

Method 3 – use the IPCC estimate

The IPCC have helpfully computed offsets between 1850-1900 and different periods. They computed the best estimate of the difference between 1850-1900 and 1981-2010 to be 0.69°C. We could just add this on to all the anomalies relative to 1981-2010 that we have just calculated. If we do that, we get Figure 5. This is identical to Figure 2 except that the y-axis is shifted by 0.69°C. Everything else remains the same. You can see that despite the shift, all the estimates remain below 1.5°C above the 1850-1900 average.

The IPCC also provide an uncertainty estimate to go with the offset, which takes the range of the four data sets from the lower end of their collective uncertainty envelopes to the upper end of the same. This amounts to a range of around 0.24°C although the range is not symmetric around the best estimate. We can then combine this with the more modest uncertainty in changes since 1981-2010. The upper end of this range for 2023 is likely to be above 1.5°C meaning that it’s possible 2023 exceeded 1.5°C.

Figure 5: Spot the difference with Figure 2. Global mean temperature again, but this time the baseline is 1850-1900. Anomalies for each dataset are calculated relative to 1981-2010 and then an offset of 0.69°C is added to get an estimate of changes since 1850-1900.
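Method 3 reduces to adding a constant and carrying a wider uncertainty range. The sketch below uses the 0.69°C best estimate quoted above, but the asymmetric low and high offsets and the modern-period spread are assumed, illustrative numbers, and simply adding the ranges end to end is my simplification, not the IPCC’s method.

```python
# Best estimate of the 1850-1900 to 1981-2010 offset (quoted in the text).
IPCC_OFFSET = 0.69  # °C
# Assumed, asymmetric offset range of roughly 0.24 °C width (illustrative only).
OFFSET_LOW, OFFSET_HIGH = 0.59, 0.83

def to_1850_1900(anom_8110: float, modern_spread: float = 0.05):
    """Shift a 1981-2010 anomaly to the 1850-1900 baseline with a rough range."""
    best = anom_8110 + IPCC_OFFSET
    low = anom_8110 - modern_spread + OFFSET_LOW
    high = anom_8110 + modern_spread + OFFSET_HIGH
    return best, low, high

best, low, high = to_1850_1900(0.69)  # a hypothetical 2023 value vs 1981-2010
print(f"{best:.2f} °C ({low:.2f} to {high:.2f} °C) above 1850-1900")
```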

Some people like this approach because, intuitively, one might expect averaging across the different datasets to reduce the error in the estimate. However, we know that the datasets are not independent which limits the extent to which errors will cancel.

Indeed, HadCRUT5 and Kadow et al. are based on the same base dataset, a non-infilled version of HadCRUT5. Berkeley Earth uses HadSST4 (albeit in modified form) which is the ocean component of HadCRUT5. There’s more independence between this group of three and NOAAGlobalTemp, as reflected by their relative separations in Figure 3, but both NOAAGlobalTemp and HadCRUT use datasets of night marine air temperature to adjust for systematic errors in the sea-surface temperature measurements. In other words, there’s a clear dependence shared by even the two most disparate data sets. The IPCC estimate is therefore skewed 3 to 1 towards the HadCRUT-related datasets, which tend to warm more than the single NOAAGlobalTemp dataset. It’s also skewed 4 to zero towards datasets that are homogenised relative to night marine air temperature, which has its own raft of issues.

Further, possibly small complications

So far, I have been using the global temperature series as calculated by the dataset providers themselves. The IPCC in their calculations recalculated the global series from the original gridded datasets in a consistent way. This makes a relatively small difference for all datasets except Berkeley Earth (Figure 6). As we saw above, Berkeley Earth has the largest long-term increase of the datasets used here. When recalculated the IPCC way (by first calculating hemispheric averages and then taking a simple mean of the two hemispheres) Berkeley Earth warms less long-term and is closer to the other datasets. There are arguments to be made for both approaches, but we don’t have to pick one or the other; we could just call it all structural uncertainty.

Figure 6: Global annual mean temperature anomalies relative to an 1850-1900 baseline for two versions of Berkeley Earth: the version provided by Berkeley Earth (green-blue) and the version calculated by IPCC (red). Each dataset has been plotted on its own 1850-1900 baseline.
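The distinction between the two global-mean recipes is easy to state in code. The sketch below assumes a simple regular latitude-longitude anomaly field with NaN for missing cells; the real calculations handle coverage and weighting more carefully, so treat this as an illustration of the difference rather than a reproduction of either method.

```python
import numpy as np

def area_weighted_mean(field: np.ndarray, lats: np.ndarray) -> float:
    """Cosine-of-latitude weighted mean of a lat-lon anomaly field (NaN = missing).
    Applied to the whole field, this is the 'direct' global mean."""
    weights = np.cos(np.radians(lats))[:, None] * np.ones_like(field)
    weights = np.where(np.isnan(field), 0.0, weights)
    return float(np.nansum(field * weights) / weights.sum())

def global_mean_hemispheric(field: np.ndarray, lats: np.ndarray) -> float:
    """IPCC-style recipe: average each hemisphere separately, then take the
    simple mean of the two, so each hemisphere counts equally however well
    it is observed."""
    nh = area_weighted_mean(field[lats >= 0], lats[lats >= 0])
    sh = area_weighted_mean(field[lats < 0], lats[lats < 0])
    return 0.5 * (nh + sh)
```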

IPCC also introduced another complication. Some of the datasets used in this assessment use sea-surface temperatures over the oceans and air temperatures over land. The reanalysis datasets, however, represent air temperatures over both land and ocean areas. Climate models suggest that the latter combination (air temperature everywhere) should warm slightly faster than the former (air temperature over land plus sea-surface temperature). Direct observations are not consistent with this (see IPCC AR6 WG1 for a lengthy discussion), but there are significant uncertainties and there are few marine air temperature measurements in the tropics and southern hemisphere. IPCC dealt with this by effectively expanding the uncertainty range.

The lack of marine air temperature measurements in the modern era might strike you as strange, but air temperature measurements over the ocean are largely made by ships and fewer and fewer ships are making meteorological measurements, see Berry and Kent (2016). To the best of my knowledge, the situation has not improved in the seven years since publication and has likely gotten worse.

Other methodological stuff

For these methods, I’ve used annual averages (rather than monthly averages) to calculate the baselines relative to 1850-1900. One could instead set the baseline separately for each month. I haven’t done that because it adds noise to the estimates for incomplete years; the noise goes away when the year is complete and the result comes out the same as using the annual average, so it’s less relevant to the main point. I look at this in a bit more detail below, but not much.

Summary and conclusions

The question of whether the annual temperature anomaly for 2023 is likely to reach or exceed 1.5°C above the 1850-1900 average depends strongly on how you calculate changes from the 1850-1900 baseline, which dataset or datasets you choose, and how you combine the information. Depending on how the calculation is performed, one or more datasets might exceed 1.5°C, or they might not.

We’re unlikely therefore to see 2023 as a year that is conclusively above 1.5°C. At most, we might be able to say that it is possible that 2023 was above 1.5°C or that a particular dataset or datasets support that conclusion.

As time goes on, and if the world continues to warm, then we will eventually see years that are decisively above 1.5°C and others that are clearly below, and some that are not definitively one or the other. This happens because we do not know global mean temperature perfectly and the problem is compounded by having to reference changes to a baseline calculated from the earliest, sparsest data. However, even if we were measuring relative to a modern well-measured baseline, there would still be some indeterminacy.

The spread between datasets is over 0.2°C, which is of a similar order of magnitude to year-to-year changes in global temperature and also to decade-to-decade changes (warming has been around 0.2°C/decade since the late 1970s).

Thinking longer-term

The first month and year that reach 1.5°C above the 1850-1900 baseline are often seen as indications that we are approaching the threshold: not important in themselves but signifying something altogether more worrying. However, as we have seen, there is considerable uncertainty.

Because the differences between datasets are systematic, there would be similar differences between multi-year averages, such as the 20-year averages used by IPCC to establish threshold exceedances (Figure 7). As a result, similar difficulties are likely to attend the assessment of whether multi-year averages exceed 1.5°C. For any particular threshold, there will likely be a period of time when it is not clear whether we are above or below it. You can get an approximate idea of that by calculating the time difference between when Berkeley Earth crosses a particular temperature and when NOAAGlobalTemp does. It’s about ten years. Currently we’re in that ten-year-long state of not knowing for sure whether we passed 1.0 °C or not.

Figure 7: 20-year average global mean temperature anomalies relative to the 1850-1900 average. Each data set is plotted relative to its own 1850-1900 baseline.
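You can put a rough number on that “period of not knowing” with something like the following, which assumes annual anomalies relative to 1850-1900 in a DataFrame indexed by year with one column per dataset; the column names and layout are placeholders.

```python
import pandas as pd

def twenty_year_means(annual: pd.DataFrame) -> pd.DataFrame:
    """20-year running means of annual anomalies, dated at the window midpoint
    (so the 2004-2023 average is plotted at 2013.5)."""
    means = annual.rolling(20).mean().dropna(how="all")
    means.index = means.index - 9.5
    return means

def first_midpoint_above(series: pd.Series, threshold: float):
    """Midpoint date of the first 20-year window that exceeds `threshold`."""
    above = series[series > threshold]
    return above.index[0] if len(above) else None

# With annual anomalies vs 1850-1900 in `annual` (placeholder column names):
# means = twenty_year_means(annual)
# gap = first_midpoint_above(means["NOAAGlobalTemp"], 1.0) \
#     - first_midpoint_above(means["Berkeley Earth"], 1.0)   # roughly a decade
```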

It’s also important to note that this is not the only uncertainty regarding the assessment of long-term change. In Figure 7, the final data point corresponds to 2013.5, the midpoint of the most recent twenty-year period, 2004-2023. IPCC assigns the timing of a threshold crossing to the midpoint of the period. We don’t currently have a good way of getting a timelier estimate, such as one that corresponds to the current year. Trewin 2022 suggests that a shorter period, 10 years, would give a reasonable estimate of the final 20-year average, which halves the delay, but that’s still a delay and it might not be accurate at precisely those times when we need it to be, such as when the rate of change is changing.

To get closer to the present requires a crystal ball, or else a forecast of what will happen. One could use decadal forecasts, model projections (suitably forced) or some other method, perhaps one of the regression techniques that people have come up with. Combining these prognostications with the observations could bring the 20-year curve up to the present day, but it won’t diminish the uncertainty associated with the observations and the use of an early baseline.

Tackling the uncertainty

Given the lack of data in the 1850-1900 period and large changes in observing systems over the past 175 years, it is challenging to reduce uncertainty. One clear way to do this is by the rescue and digitisation of records from paper archives.

Currently, billions of meteorological measurements are stored on paper and other perishable, non-digital media and are unavailable for our calculations. Getting sustained funding to discover, catalogue, image and digitise these hardcopy records has proved very difficult. Indeed, the scale is rather overwhelming. Citizen science projects (such as Weather Rescue) have successfully digitised large numbers of observations, but to complete the task in this way could take decades, which is time we don’t have. Aside from the rate of change of the climate, many archives are deteriorating. The ideal solution would be to build software that can automatically read and convert tabulated, often-handwritten information from digital images, but such a capability does not yet exist.

A less direct route to reducing uncertainty (but still a vital one) is to focus research on understanding the systematic errors, particularly those in the marine data, currently one of the largest components of uncertainty and one of the biggest differences between datasets on long time scales5. More research here might actually expand the uncertainty range, swapping unquantifiable epistemic uncertainty (unknown unknowns, to get Rumsfeldian about it) for a more tractable kind of quantifiable uncertainty.

We need to tackle both to reduce uncertainty in long-term changes in a meaningful way.

The other thing to think about here is the actual relationship between temperature and impacts. Impacts are currently difficult to monitor in a systematic and consistent way. To understand the effects of climate change (now and in the future) it is essential to monitor changes in the affected systems (including drivers and the impacts themselves) and update our understanding. Many people have recently suggested that even though global temperature is changing at the “expected” rate, the impacts are more severe, or occurring earlier than expected. It’s not always clear on what basis they are making this assessment, and it’s important to make such claims from a strong evidentiary basis. Our perception of such things can be badly out of kilter, particularly if it’s based on what gets reported in the news. Obviously, it would be difficult to monitor impacts relative to an 1850-1900 baseline, so it seems strange to do so for temperature. Your heretical thought for the day is this: why not just pick a more recent point in time than “pre-industrial6”? I can’t see it happening, but it would reduce uncertainty and forestall many future arguments.

This discussion has now strayed quite a long way from the original question, but the original question only exists because we care about the multi-year changes, and we only care about them because of their anticipated and actual impacts. The 1.5°C and 2°C limits in the Paris Agreement are not the points at which everything bad happens. The IPCC report notes that the risks are higher at 1.5°C than now and higher at 2°C than at 1.5°C. That is, the risk increases with increasing temperature. We might not know exactly where we are, but we know that temperatures, and hence the risks, are increasing; there is very little uncertainty about that.

A sadly necessary appendix on daily temperatures

A lot of interest has been paid this year to days that nominally exceeded first 1.5°C and then 2°C above the 1850-1900 average. We’ve seen already that monthly temperature anomalies are more uncertain than multidecadal and annual anomalies, so it might be reasonable to assume that we’re not sure whether a month has an anomaly in excess of 1.5°C. And if months are more uncertain than years, then days are probably even more uncertain again. This is probably the case. It’s hard to say because we lack multiple reliable estimates of daily global temperature anomalies. We have a few reanalyses, but only two that we really trust for long-term stability: JRA-55 and ERA5. Only one of those is in the habit of putting out daily global temperatures (ERA5) and neither goes back to 1850. With only two datasets, we can’t get a really good handle on what daily uncertainties in global mean temperature look like and no one had really given that a lot of thought until this year. Even then, most people gave it no thought at all. The website most people are getting their daily global temperatures from has a big warning sign which, paraphrased, says “don’t do that” when it comes to long-term changes.

The upper panel of Figure 8 shows the daily differences between ERA5 and CFSR, the dataset used by the Climate Reanalyzer. I rebaselined both datasets rather crudely, by subtracting the 1981-2010 average for each day separately. You can see that there are various obvious discontinuities, some of which can be associated with changing processing in the CFSR reanalysis. In addition, there is a clear annual cycle in the differences (which peaks at different times of year) as well as day-to-day noise. It illustrates, quite neatly, the range of different uncertainty behaviours we must be aware of.

The lower panel of Figure 8 shows differences between ERA5 and JRA-55. Note that there is variability in the differences at a range of time scales as there was for CFSR, but they are overall smaller and there are fewer large, abrupt changes. In the past 40 years, the mean difference between ERA5 and JRA55 has shifted by around 0.1°C, which is pretty good relative stability considering, but would be in addition to any uncertainty that came from joining a reanalysis to a pre-industrial baseline.

Figure 8: Daily differences between daily global mean temperatures from (top) ERA5 and CFSR and (bottom) ERA5 and JRA55. Each dataset was rebaselined to the period 1981-2010 before differences were taken.

Unlike for monthly and longer-period temperatures, there are no traditional datasets to compare to. There are daily land temperature datasets built in the traditional manner which could be welded to a daily satellite sea-surface temperature dataset, but they still wouldn’t go back to the 1850s. Going back to the 1850s is important because, for daily temperatures, we really want to know the shape of the annual cycle (monthly too, for that matter). Differential rates of warming in summer and winter could flex the whole shape of the annual cycle, so adding a simple offset to the daily temperatures isn’t possible. The known biases in the earliest data also have distinctive annual cycles, so the systematic uncertainty would also be a systematic uncertainty in the shape of the annual cycle.

Copernicus have a method which uses estimated monthly offsets from three datasets; the “adjustment is chosen empirically to have a smooth daily variation” which more or less matches the monthly values. I don’t know what this means specifically, but Figure 9 shows the offsets between 1850-1900 and 1991-2020 (the baseline currently used by Copernicus for their monitoring) including their daily version. There’s already an annual cycle, some noise, and offsets between datasets in the monthly averages, and fitting a smoothly-varying daily model to that is unlikely to reduce uncertainty.

Figure 9: Monthly offsets in global mean temperature between 1850-1900 and 1991-2020. Solid lines are my estimates of the changes, dashed lines are from Copernicus. The dotted red line is the daily offset used by Copernicus. Note there is no Copernicus equivalent to Kadow et al.

On the other hand, daily global temperatures are more variable, so the size of the signal is perhaps larger. A daily temperature anomaly in excess of 2°C above 1850-1900 was declared on 17 November (actually, 2.06°C) and long periods of days exceeding 1.5°C have been announced throughout the year. If the uncertainties are comparable to those for monthly temperatures, then the 2°C figure may be in doubt (a margin of 0.06°C is much smaller than the uncertainty from the baseline) but it would still likely have exceeded 1.5°C. However, we don’t have a good handle on how accurate the daily global means are even relative to a modern baseline, so (as above) we need to be cautious about uncertainty and what the available information is telling us (or not).

Assuming a uniform uncertainty of 0.12°C (Figure 10), and that this is wholly systematic, the number of days exceeding 1.5°C in ERA5 is anywhere between 141 and 789, and the number of days exceeding 2.0°C anywhere between zero and seven at the time of writing (the upper limit including days in 2016 and earlier in 2023). The true uncertainty is likely more complex than that, but it gives a good idea of the uncertainty in these counts.

Figure 10: daily global temperature anomalies from ERA5 relative to an 1850-1900 baseline plotted as a function of day in year, which runs from 0 (Jan 1st) to 364 (Dec 31st). The black line is 2023. The red line is 2016 and the blue line is 2020. 2016 and 2020 were the two warmest years on record. The grey lines show every year between 1940 and 2022. The shaded areas show a range ±0.12°C either side of 1.5°C and 2°C.
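The day-count range is just the same series shifted bodily down and up by the assumed systematic uncertainty and re-counted. A sketch, assuming the ERA5 daily anomalies relative to 1850-1900 are available as a plain array and taking ±0.12°C as the illustrative figure used above:

```python
import numpy as np

def exceedance_count_range(daily_anoms: np.ndarray, threshold: float,
                           systematic: float = 0.12) -> tuple[int, int]:
    """Count days above `threshold` when the whole series is shifted down or up
    by a purely systematic offset (no independent day-to-day errors assumed)."""
    low = int(np.sum(daily_anoms - systematic > threshold))
    high = int(np.sum(daily_anoms + systematic > threshold))
    return low, high

# With `daily_anoms` holding ERA5 daily anomalies relative to 1850-1900:
# n15_low, n15_high = exceedance_count_range(daily_anoms, 1.5)
# n20_low, n20_high = exceedance_count_range(daily_anoms, 2.0)
```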

The last question to ask (and maybe it should have been the first) is, what does it mean? We know that temperatures are increasing in the long term and this long-term change in temperature has been linked to a range of increasing risks, but what does it mean if the global-average temperature exceeded a particular value for a couple of days? I would venture to suggest that it does not mean much at all, at least in a physical sense. Variations in daily global temperature anomalies from one day to the next are driven in large part by weather. It is well known that monthly, annual, and even longer periods of time are not reliably indicative of long-term changes. Daily changes in temperature are even less so. About two weeks and half a degree of global mean temperature anomaly separate the two maps in Figure 11. Their actual global temperatures happen to be the same to one decimal place.

Figure 11: daily temperature anomalies for (top) 4 November 2023 and (bottom) 17 November 2023, relative to the 1979-2000 baseline from the Climate Reanalyzer showing data from the CFSR reanalysis.

Many people have suggested that the significance is symbolic, which I don’t doubt, or that such things serve as a warning, or a milestone7, or something. But by freighting these uncertain, daily numbers with so much meaning, I’m concerned that there is a risk of setting false expectations of precision for arguably more important things. By suggesting that we can know that daily global temperature exceeded a particular threshold by a few hundredths of a degree for a few days, we set the expectation that we can know annual or decadal or multi-decadal averages to this precision. Currently, that is just not the case, and on those numbers, very much depends.

-fin-

PS (added 23 November 2023): Of course, questions around 1.5°C are on everyone’s mind at the moment and there’s loads of interesting stuff flying around. I can’t really capture even a small fraction of it, but some things did catch my eye.

First was a paper called “Getting it right matters”, a sentiment with which I heartily agree. It lists all the ways that the Paris Agreement limits have been interpreted and digs into the political-legal underpinnings. The key sentence that I jotted down is “it is clear that the temperature references in the Paris Agreement can only be understood as changes in climatological averages attributed to human activity excluding natural variability” (my emphasis gradient). This highlights an aspect which I didn’t even touch on: attribution to human activity. In this reading, uncertainties from the attribution to human activity would also be important. As of IPCC AR6, human attributable warming and observed warming are pretty much the same so it’s currently a distinction without a difference, though that won’t always necessarily be the case.

Second, Glen Peters had a nice post on his substack about understanding the Earth’s Energy Imbalance and what that means for future warming and keeping temperatures below 1.5°C. I mention it in part because he was kind enough to link to my article, but also because he referred to it as “playing games” with the definition. What I’ve presented here could be used that way for sure – climate science has a long history of uncertainty abuse – and I think therefore it’s important to raise such things before they become an issue, manage expectations, and so on (I’ve commented on similar topics before); playing games is what I want to avoid. While talking about uncertainty, it’s also worth highlighting that when we talk about scenarios for keeping below 1.5°C, they typically come with a probability and rarely a high one. I’m not sure that’s widely understood and I’m not sure what it all means…

PPS (added 26 November 2023): This issue gets raised periodically. I found a post from the Climate Lab Book looking at how best to visualise the uncertainty from late 2017. There are some interesting ideas there too. I think an element that’s not much appreciated is that we have a very small number of global temperature datasets (and an even smaller number of independent datasets), so the addition or removal of a dataset, particularly one like HadCRUT which others depend on more or less directly, has the capacity to shift the baseline significantly. This tendency has been interpreted as shifting the goalposts or changing the definition rather than what it is: simple updating of our scientific understanding. Working out the best way forwards will require a much wider discussion, but I think a useful thing we can do as scientists is explore the practical consequences of different possible choices.

-finfin-

  1. We could use 1991-2020 instead, but 1981-2010 is the period I’ve used for continuity with analyses in previous years. It doesn’t make a lot of difference. ↩︎
  2. Though the number of ships making measurements has been declining. ↩︎
  3. Note the sneaky wording. The two things happened at the same time, but there’s no consensus on whether the switch from La Niña to El Niño is enough to explain the surge in temperatures. Possibly not. One usually expects a few months lag between SST changes in the tropical Pacific and the global temperature response which didn’t happen this year. Other things probably had an effect possibly including and possibly not limited to: aerosol changes from shipping fuel, Saharan dust, changes in solar forcing, the rapid drop in Antarctic sea ice, weather… ↩︎
  4. The fancy answer gives a different best estimate which, by design, falls between the existing estimates and the uncertainty is probably an underestimate. ↩︎
  5. Not even that long. The first consequential uncertainty in marine data going back in time occurs in the early 1990s. ↩︎
  6. Not an especially original heretical thought, I should add. See e.g. Hawkins et al. already referenced. ↩︎
  7. In uncharted territory, probably ↩︎

