A slight temperature

A tweet of mine went mildly viral0. There was a paper in PNAS about nasty surprises the climate might have in store for us1. It was reported in several places with the dread phrase: some scientists think.

This phrase irks me, falling somewhere below “some scientists believe”, which suggests at least a bedrock of conviction on the scientist’s part, and “some scientists say”, which implies that said scientist had reached the stage of articulation that involves another human being – someone who might laugh and tell other people behind their back – and which usually brings a mercifully early end to the worst excesses of what “some scientists think”.

“I’m a scientist”, I thought. “I think. But this thinking has not always been followed by acts or words of great wisdom2. Often, the opposite happens and this might be a good way to illustrate the emptiness of the phrase”. Then, I thought, “What’s the stupidest thing I’ve ever done?”

I sat there for a long time and came up with a very long list4. It started to feel like my life had been a series of astonishingly foolish acts5, interspersed at distant intervals by the kind of moments from which ordinary people – good people, worthwhile people – somehow construct the everyday fabric of their lives. However, no single act stood out as clearly more stupid than all the others.

So, almost at random, I chose the orange story. An orange is relatable – I wouldn’t have to explain the temperature-dependent friction of the special rubber you get on climbing boots, the relative flash points of solvents, or how I came into the presence of so many high-tension power supplies – and the narrative was, given the situation and that I survived, necessarily compact. It also proceeded in clear stages, which made it twitter friendly and meant I could write it in bits as I waited for my computer to finish chunks of processing.

It turned out to be far more twitter friendly than I would ever have thought.

The majority of people responded with laughter, which was at least my intention, although it’s clear not everyone was laughing at the same time or at the same thing6. Some folks said it had cheered them up in a way that was needful or rare, which caught me by surprise and made me extremely happy. Others expressed concern at my almost-fate, which was touching, though it was often followed by a guilty admission that they had found it very very funny. I forgive those people and absolve them of that guilt to the extent that I am able.

I caused some people anxiety, for which I am very sorry.

I’m not sure what to say to those who read the thread as some kind of prompt or instruction manual. Just no: don’t do that. A lot of people shared their own stories. I couldn’t reply to even a tiny fraction of those, but I read lots of them and… oh my. How homo sapiens survived this long I will never know.

It also sparked a divertingly matter-of-fact side conversation amongst the kind of people who do that sort of thing recreationally. The tone was surprisingly similar to that of the medical professionals who responded.

Some people said I was stupid7. I have no sound defence on this point other than to say, well yes, that’s the point of the story. I’m not that stupid all the time. Others took it to mean all scientists (even all experts) are stupid all the time, which indicates a complete failure of communication on my part as well as a very basic logical error on theirs.

My target was the journalistic shorthand “some scientists think” which can be a way of sensationalising the contents of a scientific report without providing the necessary background, context and caveats or the grounding voices of other scientists who think differently (and perhaps more sensibly). At best, it’s one of those empty stock phrases from which 90% of content is cobbled together, usually in a tearing hurry. I appreciate that journalists are frequently on impossibly tight deadlines, not always experts in the field, and that, even in the best of situations, they are at the mercy of the clickbait goblins8 who write headlines. It’s a hard job – writing articles not the goblin thing – but once you’ve seen it done well, it makes you sad to see it done badly.

A couple of journalists did respond and some passed it on.

Anyway, the viral stage seems to have passed, though my notifications are still broken. Three concrete things I learned. One, always proofread tweets with the same care you’d give any publication. You never know when a quarter of a million people will read them. Two, some folks like to read their tweet threads unrolled in a more standard blog format. WordPress will do this semi-automatically if you copy and paste the first tweet in the series and it will spare hundreds of people the need to ask the thread reader app to do it. Three, mention that something is a thread in the first tweet, otherwise people won’t realise it’s a thread.

Fin.

PS: I almost9 forgot those who offered me advice for what to do10. While I believe they meant well and I don’t wish to appear wholly ungrateful, there are a few considerations. First, I can only imagine that in the situation, someone approaching me purposefully with a steak knife and a calculating eye (let alone the sudden heavy hand on my shoulder) would have done little to calm me and what unfolded in an essentially sedentary mode would have occurred at a flat out sprint. Furthermore, (and here the fault is mine, I suppose, for skimping on details) this all happened (second) on a Scottish mountainside (and third) over twenty years ago. It is not an ongoing situation for which I am canvassing twitter for advice and I am not planning on doing it again.

Fin Fin

0. At least by my usual standards. Ordinarily, a tweet of mine will get a handful of likes. I am happy with this state of affairs.

1. “Climate endgame” was the phrase. In chess, the endgame is the tedious bit when most of the pieces are off the board. Often, one or another player who believes they have a material advantage (or else are short of time, or lost among the not-quite-infinitely-branching thicket of possibilities) will hasten the onset of the endgame. It can go on and on and on and on can the endgame. Many endgame situations have been “solved”, which is to say that for a range of situations the outcome is – bar mistakes – a foregone conclusion. I’m not sure all of these meanings and connotations were intended when the phrase climate endgame was deployed.

2. My decision to tweet the outcome of this thought process being, perhaps, a case in point3.

3. I don’t think with footnotes, but they do lend themselves to the act of thinking about thinking, as well as endless procrastination.

4. There were categories (electricity). The categories had sub-categories (high and low voltage electricity). I was even starting to consider some kind of cross-referencing scheme (electricity and heights, electricity and things I have put in my mouth).

5. For years, my hobby – the thing I did to relax and unwind – was climbing without ropes.

6. There are as many interpretations as people. More, probably.

7. One person said it several times. I liked all of their tweets, which sent them into a lather.

8. I believe clickbait goblin is the correct term. Journalists probably call them something else in public – subeditor or somesuch.

9. i.e. did.

10. The most practical being, don’t put a whole unpeeled orange in your mouth.

Some scientists think

When an article says "some scientists think" then remember this: I, a scientist, once thought I could fit a whole orange in my mouth. I could, it turns out, get it in there, but I hadn't given sufficient thought to the reverse operation.

I should also, on reflection, have practiced in private. I had an audience, which grew as my initial satisfaction at an hypothesis well proven slipped rapidly through stages of qualm, disquiet, then alarm (mild through severe) and ended in full-blown panic.

When one panics, one's muscles tense, which is, of course, the opposite of what I needed here. I had been quite relaxed at the start, but now I couldn't get a finger between the orange and the very taut edges of my mouth.

Above and below, the orange, which was now under some pressure, deformed to make a nearly perfect seal against my teeth. I hadn't previously been aware of how much oxygen one needs to consume an orange, but I was made aware of it now by its sudden and ongoing lack.

I forgot for a moment that I had nostrils and tried to breathe in hard through my mouth. I have big lungs. When the doctor tested my lung capacity, I blew the end clean off the cardboard tube.

I've always been vaguely proud of that; mostly for want of more tangible achievements and because I am, when all is said and done, the kind of person otherwise predisposed to shove a whole orange in his mouth without cause.

Those enormous lungs – my pride and joy – expanding in this moment of crisis to their fullest extent, had created a hard vacuum behind the orange, which, at that point, imploded.

From now on, things, which had been unfolding at an almost leisurely pace, started to happen rather fast. So, I will take this opportunity to say that no one had actually tried to help me up till now. This was not for lack of opportunity.

Later, someone mentioned the kind of details – veins like worms scribbling incomprehensible messages across my forehead, eyes popping out as if on stalks, laced with tiny red veins – which one can only truly apprehend at a distance that wouldn't have made help impossible.

But back to the imploding orange. Although it didn't diminish appreciably in volume upon implosion, the released juice vaporised, turning into a burning acidic cloud that instantly flooded my lungs.

My lungs very sensibly responded by collapsing rapidly, aided by an involuntary and powerful spasm from my diaphragm.

The vapour and oily zest from the orange's skin mixed with mucus scoured from my lungs (that spread flat, we must remember, would cover a tennis court) as well as the last of my residual oxygen, exited now through my rediscovered nostrils as a magnificently abundant yellow foam.

And, having a volume in excess of what could easily egress at speed via those narrow tubes, it also squirted out through nearby exits, including around my eyes.

Even that wasn't enough and the build up of pressure finally proved too much for the orange, which left my mouth like grapeshot from a cannon, like the superluminal jets generated by matter falling towards a black hole at relativistic speed.

Temporarily blind and gasping in my own private world of consequences, I was unaware of the cone of devastation that I had unleashed upon the unluckier segment of my audience, occupying roughly one steradian of solid angle to my front.

When I finally recovered my senses and the cycle of whooping inhalation and coughing fits had exhausted itself, I was greeted not by the concern that I felt such a brush with death merited, but with a disgust that later reflection suggests may not have been wholly unwarranted.

So, anyway, whenever you read "some scientists think", think about me and recalibrate the lower end of your expectations accordingly.

Originally tweeted by John Kennedy (@micefearboggis) on August 3, 2022.

And isn’t it pythonic?

I’ve spent the better part of way too long learning to python. I quite enjoy programming in python (as long as I can rein in my urge to make new classes) and the ready availability of advice§ is a bonus, but it’s the stuff that goes with it that irks me.

The irking starts when you try and install python. There are – I counted – twenty-seven million ways to do it. Every time you google “how to install python”, you’ll get another article or youtube video with a completely new method. At first, it doesn’t matter*: you download python – the latest version, because why wouldn’t you? – and everything works nicely.

Then you want to actually do something, which usually means installing a package. So you go back to google, back to the twenty million ways to do anything. Some of them even work. You end up installing various bits of software named after animals**, then something requires you to install C++, and suddenly you are trying to compile things in programming languages you were hoping to avoid when you opted for python. A bit more google fu and you realise that there is a Better Way†.

The Better Way means you have to uninstall python, or install a different version, because the package you want only works with a version of Python that, at this point, is a dried-out snake skin covered in dust. Then you realise that one of the animal-themed programmes you foolishly chose has hijacked the computer’s idea of what python is.

So you uninstall everything††. And start again. Older. Not wiser.

After a few weeks of this, you find your own way of doing things that sort of works and you have to resist the urge to write a very definitive blog post about how this is definitely the way you should install python. You know that you’d only be adding to the utter chaos, and yet, the temptation is strong. Even then there’s a nagging feeling that you still have no idea how all of this works.

You tentatively try installing a few packages and it seems to be working OK, but writing code in Notepad isn’t working for you. In some of the videos, you remember fancy text editors that made all the different bits of code exciting different colours with a Matrixy black background. There are a bunch of different options, so you download a promising one and, realising it’s already installed, you fire it up. However, this one won’t talk to the version of python you have installed. It’s pining for an installation of python you removed*** a hundred years ago.

You try a few more, then you realise you have to uninstall everything and start again.

Now you have your IDE and python. You can install packages and miraculously you haven’t yet expended all your energy so you want to do something useful with it all. Being a conscientious programmer you want to use version control and you want to test your code.

Version control is a no-brainer: it’s git or nothing. You start reading about git and your spidey sense starts tingling when everyone wants to explain how git works. 90% of the time they want to offer you the “simple” explanation, or explain it in a window of time that even the busiest individual could eke out of their day. This might seem rather innocent – helpful, even – but remember: no one ever wants to tell you how windows or taps (say) work, they just show you how to do what you want to do. The spidey sense starts throbbing like a three-day migraine when they say “but you don’t need to understand graph theory” to use it. Eventually, you realise that git’s just one of those things you have to use because everyone else uses it****.

The beauty of git is that you can claim to use version control while largely carrying on as you did before version control. You make a big change to your code and everything goes wrong, so you want to go back a step. In days before git, you’d just look in the directory called “archive_7_new_TODAY” and copy the files you wrecked. With git all you have to do is copy the current contents of the directory to “archive_7_new_TODAY” and utter the git command you wrote on a post-it note that copies everything from the repository, overwriting your mistakes. Congratulations, you are using version control.

It should be that easy, but it’s not. First, you forget (or suppress the memory) that you downloaded git before and that you set up your credentials for a github repository that you have long since forgotten. Every time you try to get some code from github, no matter what it is and no matter what it claims to be doing, it downloads the exact same ancient repository. This time, uninstalling and reinstalling git don’t work, so you have to search for the pesky file somewhere on the file system. Deleting this fixes things, or, at least, allows you to get deeper into trouble…

You want to write tests. Once again, there are options – pytest and unittest chiefly. Your notes from the python course you attended recommend unittest so you go with that, but you soon find out that, in the aeon of time that has since elapsed, everyone started using pytest because it is the Better Way. Everyone that is, except every single question and answer on stack overflow. Nevertheless, you can muddle through. That is, until you get to mocking, at which point half the answers to questions just recommend rewriting your code completely anyway, the other half don’t work and the remaining half trick you into using pytest-mock, which adds a layer of complexity to everything†††.
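For anyone who would rather see it than read about it, here is a minimal sketch of the pytest-plus-pytest-mock road, not a recommendation. The weather-station function and the numbers are invented for illustration; the mocker fixture is the one pytest-mock provides, and the point is simply that the slow, flaky bit gets swapped for a canned value while the test runs.

```python
# test_weather.py -- a minimal sketch, not the One True Way.
# Everything here is made up for illustration; it needs pytest and
# pytest-mock installed. Normally the code under test would live in
# its own module rather than in the test file.
import pytest


def get_station_temperature(station):
    """Pretend this goes off over the network and takes ages."""
    raise RuntimeError("no network in this example")


def report(station):
    """Convert the (hypothetical) station reading from kelvin to celsius."""
    return get_station_temperature(station) - 273.15


def test_report_converts_to_celsius(mocker):
    # pytest-mock's `mocker` fixture patches the slow call with a canned
    # value for the duration of this one test.
    mocker.patch(f"{__name__}.get_station_temperature", return_value=283.15)
    assert report("Lerwick") == pytest.approx(10.0)


def test_report_propagates_missing_station(mocker):
    mocker.patch(f"{__name__}.get_station_temperature", side_effect=KeyError)
    with pytest.raises(KeyError):
        report("Atlantis")
```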

By this point, you may even have some code that you want other people to use, although it’s possible that in the intervening time several civilisations have risen and fallen. Whenever in time you find yourself, though, responsible sharing means documentation. As with everything else, advice – particularly that given to beginners – is all over the place. There are numerous different ways of setting documentation up (in a number of different ad hoc markup languages) and no one ever says why you might want to do it that way, or whether you are compelled to. Even then, it’s not clear how documentation works with everything else. If you use sphinx*****, it spams a whole bunch of output into your repository, which seems like it needs to be tidied up. Git starts highlighting all the automatically generated files in suspicious I-don’t-know-what-this-is red. You have an explore, and there are plenty of other suspect files. The IDE has generated a bunch, python has cached stuff everywhere, pytest has been busy. There are builds and eggs and a whole bunch of other stuff, which is when you find out about .gitignore.
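(A brief aside before the .gitignore revelation: the sphinx end of things is at least steered from a single Python file, docs/conf.py. Below is a minimal sketch of one, with the project name and paths invented and the autodoc extension assumed; sphinx-quickstart will generate something longer.)

```python
# docs/conf.py -- a minimal sphinx configuration sketch. The project name
# and paths are placeholders, not a recommendation; sphinx-quickstart will
# happily produce a much longer version of this file for you.
import os
import sys

# Make the package importable so autodoc can find the docstrings.
sys.path.insert(0, os.path.abspath(".."))

project = "orangepkg"       # hypothetical project name
author = "A. Scientist"

extensions = [
    "sphinx.ext.autodoc",   # pull documentation out of docstrings
    "sphinx.ext.napoleon",  # understand Google/NumPy-style docstrings
]

html_theme = "alabaster"    # the default theme
```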

.gitignore tells git to ignore things. I found some example gitignore files for python. They are hilariously long.

And finally, there’s the packaging – tying the whole project up with a little bow and a thoughtful handwritten note. As with every other aspect of this process, there’s a long queue of people who will solemnly tell you *this* is the way to do it and you will, regretfully, have to reorganise all your code once again. You also have to add a bunch of stuff. What you have to add can be pieced together only slowly in response to baffling absences. Why is none of my code accessible? Where did the data files go? No, really, where the hell did they go? What do you mean dependencies? The answer to some of these questions is to put an empty file in each directory. This feels like a rather handy stand in for the whole process. Unlike git, which needs to be told to ignore almost everything, packages ignore everything as a default and can be grudgingly persuaded to acknowledge the existence of things like the code, and the data.
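To make the grudging persuasion concrete, here is a minimal sketch of the setup.py flavour of packaging (one of several competing flavours; pyproject.toml is now the more fashionable route to the same place), with names invented throughout. The empty files mentioned above are the __init__.py files that find_packages goes looking for.

```python
# setup.py -- a minimal sketch of one (of several) ways to persuade the
# packaging machinery to notice your code and data. All names here are
# placeholders for illustration.
from setuptools import find_packages, setup

setup(
    name="orangepkg",                      # hypothetical package name
    version="0.1.0",
    # find_packages only finds directories containing the magic empty
    # (or not-so-empty) __init__.py files.
    packages=find_packages(),
    # Data files are ignored unless you grudgingly point them out.
    package_data={"orangepkg": ["data/*.csv"]},
    install_requires=["numpy"],            # the dreaded dependencies
    python_requires=">=3.8",
)
```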

The final product might not even have ten lines of actual functional python code, but it took you a month to piece together. Despite this, you still have the sneaking suspicion you didn’t quite do it right. You know that some things could be more neatly stowed and for some reason autodoc has gone completely berserk and seems to have opened an annex of L-space, but frankly you are exhausted and full now of regret. Was it all worth it, you ask yourself? Consolation comes from the thought that next time, it will be so much easier.

Haahahhahahah h hahaaahh hhahaha hahahaa.

Oh, you poor chump.

§ Advice that often turns out to be pitched at someone who knows slightly more than you do, or slightly less, leaving the crucial piece of information unstated because it is too trivial to mention, or too complex for you right now.

* I’d just say, avoid the ones where the person says “ignore all the other ways to do it, this is the proper way”, because you will shortly find yourself in a situation metaphorically, if not literally, like those movie scenes in which someone has to choose between cutting the yellow and blue wires before the dramatically large and visible counter reaches zero, or they’ll nonchalantly say something about hand editing the registry. After all that, it still won’t work, and nothing else will.

** pandas I can live with, but spider? Nopenopenope. And the only thing I know about anacondas is they’re thick as tree trunks and like to hug… you to death at the bottom of a muddy river before swallowing you whole. The name may not be wholly inappropriate.

† There are many Better Ways. This was just the first. What you don’t realise at this point is that there have been so many Better Ways, but sadly, when an Even Better Way is found, no one tidies up the internet.

†† You even consider buying a new computer, rather than try and work out where the programs you installed squirrelled away their configurations.

*** and killed with fire.

**** Everyone uses it because programmers use it. Why programmers use it is something I’d best not explain because I might need their help in the future.

††† The documentation refers to it as “lightweight wrapper”, which sounds great until you realise that’s like wrapping decorative tissue paper round a cannonball. The cannonball is still a cannonball and the tissue paper, while making everything look pretty, just makes the cannonball slightly harder to handle.

***** the animals are mythic at this point, and I note here that the sphinx was a tricky sod.

It’s just not Normal

Normals, normals, normals. A climate normal is an average of a particular quantity – temperature, rainfall, humidity, whatever – over an extended period of time, usually 30 years. If you’ve ever been on holiday to a foreign or unfamiliar place and wondered what to wear, you’ve probably made use of a climate normal for the city or region you were visiting – how warm is it in May? how much rain can I expect? do I need to pack an umbrella or oilskins? – without realising it. They have all manner of other uses too and are widely used throughout climate science/services/communications/etc/etc.
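If it helps to see the arithmetic, here is a minimal sketch of a normal and an anomaly in code. The file, column names and station are invented, and real normals calculations come with rules about missing data and station changes that this cheerfully ignores.

```python
# A minimal sketch of what a climate normal is, arithmetically speaking.
# The file and column names below are invented for illustration.
import pandas as pd

# Monthly mean temperatures: one row per month, columns "date" and "tas".
obs = pd.read_csv("station_monthly.csv", parse_dates=["date"])

# The 1991-2020 normal for each calendar month is just the 30-year average.
base = obs[(obs["date"].dt.year >= 1991) & (obs["date"].dt.year <= 2020)]
normals = base.groupby(base["date"].dt.month)["tas"].mean()

# What to pack for May: the May normal.
print(f"May normal: {normals.loc[5]:.1f} C")

# An anomaly is then just the departure from the relevant monthly normal.
obs["anomaly"] = obs["tas"] - obs["date"].dt.month.map(normals)
```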

WMO Guidelines about the use of “normals” stipulate a double approach: 1991-2020 (or whatever the latest thirty-year period ending in a ‘0’ year is) for almost everything and 1961-1990 for long-term climate assessment. I’ve always found this a slightly confusing combination as it means 1991-2020 is to be used for climate monitoring and 1961-1990 is to be used for long-term climate monitoring. 1961-1990 also excludes a wide range of data sets from use for “long-term climate assessment” – reanalyses and satellite products don’t all extend back to the early 1960s but are vital sources of information. Actually… the use of 1961-1990 doesn’t exclude them because in practice people – being people – use whatever’s practical.

Preserving 1961-1990 has something going for it nonetheless. It’s helpful to have a fixed point, specified by convention and general agreement, against which change can be measured in perpetuity (or until, as the guidelines say, there is a compelling scientific reason to shift it). A fixed point might be desirable because a shifting baseline, particularly one that is as up to date as possible, can appear to erase global warming. When there are suddenly lots of blues in the maps where once there was red, the impression of warming is lessened (a point to which I shall return). Its value is in consistency over time. Much effort has been invested across National Meteorological and Hydrological Services (NMHSs) to gather and process normals for the period 1961-1990, as well as in producing products (maps, datasets, reports, etc) that use this baseline. Updating these to a new baseline leads to extra work for NMHSs and other organisations.

Against this, there are of course other arguments for gradually deprecating 1961-1990 in addition to the aforementioned exclusion of useful data sets. I list a few below:

1. long-term climate assessment isn’t something that stands separately from all other uses of climate data. Using a separate baseline for this one particular use, and a different baseline for all other uses can and does lead to confusion and inconsistency. This defeats the purpose of standardisation. I’ve worked on things where we had to provide information on three different baselines (1850-1900, 1981-2010 and 1961-1990) just to manage all the different anticipated use cases and satisfy all interested parties (potential or actual). Which brings up another difficulty: it rapidly becomes unmanageably confusing, particularly when dealing with multiple data sets of varying length, which may or may not overlap the chosen baseline periods and one is then compelled to provide all other related information on the same range of baselines, which…. ugh.

2. For assessing global temperature change, the current favoured baseline is 1850-1900, which aligns (nearly) with the Paris Agreement’s use of change since “pre-industrial” conditions. One might question whether this is strictly a baseline, or whether it is merely a number that can be calculated from global temperature data sets. When I say that, I can hear the high-pitched sound of hairs splitting, but it is a period used in a very specific way that is rather different from other uses of normals. The use of 1850-1900 as a “baseline” has led to many people asking what the local temperature changes are relative to this period, often in the form of “has country X exceeded 1.5C yet?”. For some parts of the world we just don’t know because there are so few data. But it’s also not a meaningful question in relation to the Paris Agreement, which refers specifically to the global mean and also to long-term climate change. All of these words – global, long-term, change – are important and doing a lot of work behind the scenes that is often glossed over and which is also hard to apply to local temperature change. Even for global temperature, only two regularly updated data sets extend back to 1850. For other variables, 1850-1900 is a complete non-starter.

3. Earlier baselines are more uncertain due to sparser coverage, less advanced instrumentation, accumulation of homogenisation uncertainty, and so on. Expressing something as anomalies relative to an early and very uncertain baseline can therefore give the impression that recent anomalies and hence recent change are more uncertain or variable than they are. This is obviously true for a very early baseline like 1850-1900, but it is also true (at least for global temperature) for 1961-1990, a period that saw large and less-well understood changes in the way that ocean temperatures were measured.

4. Many of the potential problems of retiring the 1961-1990 baseline are problems of communication and/or related to the way that climate is talked about and presented. The first thing to note is that there is now a solid base of expertise that can be drawn on to help understand the difficulties and advantages of making such transitions. A number of institutions and NMHSs have made the shift between baselines.

In other cases, there are simple changes that can be made. One example is the choice of charts used to present large-scale averages of quantities. Take this plot from Copernicus…

Here, the baseline (1991-2020) is strongly emphasised. There is a horizontal line at zero and the colours and bars emphasise this at every single point. Changing the baseline on this graph gives a very different visual impression – more blues and fewer reds, or vice versa – which is entirely due to the form of the graph. You can see this by flicking between baselines on one of the Copernicus Climate bulletins. In shifting the baseline, nothing important has changed, so a graph format that makes it appear otherwise might be considered a poor choice. In contrast, another plot from Copernicus largely avoids this problem by showing temperatures as a line graph instead:

In this case, the left and right-hand y-axes show anomalies relative to two different periods (1991-2020 on the left and 1850-1900 on the right), emphasising the fact that the choice of baseline is somewhat immaterial for the important content of the graph, which shows clearly the long-term change. (That’s not to say that the choice of baseline is completely immaterial in this case).

5. Which leads on to a related point, which is that baselines are, in a sense, arbitrary. There is nothing special about 1981-2010 (for example) other than that it ends in a ‘0’ year and was at one time designated as a standard by the WMO. That’s not to say that such a designation is not useful or without any advantage (see above), simply that there was nothing special about the climate of 1981-2010 (or 1991-2020) that makes it better suited for this purpose than any other period. If communications around climate information are not provided with sufficient context, it can be very hard to extract any meaning from them. If I tell you that the average temperature anomaly in a country for 2021 was 1.3C above normal, it’s probably hard for you to make sense of that. What is normal, you might ask? How does 2021 compare to other years? How much do temperatures vary? Is there a long-term change in temperature? What is it? How does 2021 relate to that? Context is key. If the baseline – the arbitrary baseline – is doing a lot of work in the comms then it might be time to reconsider what information is being shared and how. Note here that 1850-1900 comes with its own context, which is provided indirectly by the Paris Agreement. Even there, we should be careful about quoting individual years relative to 1850-1900 as the Agreement refers to long-term change and there are concerns that this can mislead.

6. As we shift from a situation where the focus was on convincing people that climate change exists, to one where the emphasis is more on adaptation and mitigation, the role of normals can and will change. The use of a single period will help with integration between products – monitoring reports, seasonal forecasts, decadal forecasts, etc. A more up-to-date normal period will be more relevant as things like the recent rate of change, the rapidity of changes and interactions with natural variability will become the focus. A more modern baseline is also generally thought to be more relatable. That is, people naturally compare current events to their memory of an extended, but recent, past and not to some now-remote earlier period.

7. Last, and certainly not least, as this is central to the practicality and fairness of the guidance: 1961-1990 excludes some countries that don’t have good long term digital records. When we surveyed countries during the development of guidance on calculating National Climate Monitoring Products (NCMPs), a larger number of countries (particularly those who didn’t already generate NCMPs) said they could make use of a more modern baseline than they could 1961-1990. The insistence on using 1961-1990 for assessing long-term change is liable to leave grey areas of “missing data” on the map where there needn’t be. The data are not necessarily missing, they may simply have been excluded by the choice of baseline. As has been noted in the context of attribution studies, it is often the areas with the least data that are most vulnerable to climate change.

In summary….

So ends my lengthy ramble about normals. It’s a topic that creates a lot of discussion (to which, of course, I have just added) and little in the way of solid conclusions. I’ve been following the process of creating practical guidance for over ten years and in that time I have spent many, many hours – days even – discussing, pondering, reading and writing about normals. To me, it seems that the chief value in specifying a standard is to maximise consistency between different parts of a larger system – in this case a sprawling ecosystem of climate services and products – so that they all work together neatly without extensive interactions. To that end, preserving a special use case for 1961-1990 injects an inconsistency into proceedings.

The current guidelines require a “scientific” reason for effecting a change. The above points may or may not constitute “scientific” reasons for simplifying the dual-normal system to a single-normal system. As the original reasons for preserving 1961-1990 were not especially scientific, it seems an unfair hurdle to have to clear.

The practicalities, however, cannot be wholly ignored. Guidance is only useful if it can be followed. I do think that some of the practical comms-related reasons for maintaining 1961-1990 can be managed otherwise, by thinking more carefully about what it is one actually wants to show. What is harder, perhaps, to countenance, in so far as guidelines are binding, is the extra work then required to update existing products.

To this end – the difficulty associated with updating products – some thought must be given. What changes need to be made to make this a simpler task? How can the generation of products against different baselines be automated or otherwise eased? In the software that was produced to calculate NCMPs, the baseline was user specified (though it defaulted at the time to 1981-2010). Nonetheless, many people faced with the possibility of running the software expressed something like dread at the thought of updating to a modern baseline. So, another question, why the dread?
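For a simple series, at least, the arithmetic of changing baseline is tiny. The sketch below (file and column names invented, and glossing over the missing-data and coverage wrinkles that make real gridded products harder work) moves a set of annual anomalies from a 1961-1990 baseline to 1991-2020 by subtracting a single constant.

```python
# A minimal sketch of re-expressing anomalies on a new baseline. The file
# and column names are invented; real products have more wrinkles than this.
import pandas as pd

# Annual anomalies relative to 1961-1990, indexed by year.
anoms_6190 = pd.read_csv("gmst_annual.csv", index_col="year")["anomaly"]

# The whole job amounts to one number: the mean of the old-baseline
# anomalies over the new baseline period...
offset = anoms_6190.loc[1991:2020].mean()

# ...which, subtracted from every year, re-expresses the series
# relative to 1991-2020.
anoms_9120 = anoms_6190 - offset

print(f"Baseline shift 1961-1990 to 1991-2020: {offset:.2f} C")
```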

Some other thoughts: what are the “scientific” reasons envisioned by the drafters of the guidance? What software is out there to make the re-calculation of baselines a simpler task? What guidance can be given to help with communication aspects? Are there libraries for making better plots?

More questions than answers, as ever.

The 7590 most puissant researchers of the year

Our biannual list recognises the most influential researchers, the true pioneers of science, the giants on whose shoulders the rest of us stand like so many flakes of dandruff, the ones who made the most impact, the ones that were on the telly all the time, who had a scienceornature paper (or two or three) and got interviewed about that thing with the hyperventilating press release, who got the most citations, or caused a stir, the movers and shakers who moved things and er… shook, because we felt like they didn’t have enough attention already.

Continue reading

The library where I grew up

The library where I grew up was like a building from the future, with exciting organic shapes cast in concrete and a brightly coloured spiral staircase. Every wall was covered in shelves so sunlight crept in at ceiling level through a narrow strip of window. On summer afternoons, the sun came through at an angle like laser beams that made the dust motes boil in the still air.

Continue reading

The alleged global warming hiatus

Theodore G. Shepherd has a new paper, “Bringing physical reasoning into statistical practice in climate-change science”, which looks at the disconnect between physical reasoning and statistical practice, and at bringing the former into the latter. It rails against Null Hypothesis Significance Testing (NHST). There’s a grand tradition of papers doing this and they are invariably interesting to read even though the message is usually a variation on a well-known theme. This paper talks more generally about statistical rituals, of which NHST is but one, and Shepherd notes that he’s occasionally to be seen performing a ritual or two himself (who isn’t?):

I suspect I am not alone in admitting that most of the statistical tests in my own papers are performed in order to satisfy these rituals, rather than as part of the scientific discovery process itself.

It outlines instances where rituals can get you into trouble (multiple testing, base rate neglect, the prosecutor’s fallacy, etc.) and this is all fair enough and all familiar ground. The interesting variations involve delving into the assumptions of NHST. Science can be complex, particularly climate science, so care should be taken not to oversimplify. At the same time, complexity isn’t necessarily to be favoured.
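To make the multiple-testing trap concrete (my illustration, not the paper’s): test twenty independent null effects at the 5% level and a “significant” result somewhere is more likely than not.

```python
# Back-of-envelope illustration of the multiple testing problem: the chance
# of at least one false positive when testing pure noise many times.
n_tests, alpha = 20, 0.05

p_at_least_one = 1 - (1 - alpha) ** n_tests
print(f"P(at least one 'significant' result from noise) = {p_at_least_one:.2f}")  # ~0.64

# The crude fix, for comparison: a Bonferroni-adjusted threshold.
print(f"Bonferroni-adjusted threshold: {alpha / n_tests:.4f}")  # 0.0025
```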

It’s all very complex.

Or simple.

Continue reading

Was 2020 the warmest year on record?

In which I continue an occasional series (2014, 2015) on whether particular years are the warmest on record.

A question I have been asked many times since about March 2020 was whether 2020 was going to be the warmest year on record and then, once the new year was rung in, whether 2020 was the warmest year on record. In a sense, the answer – the correct answer, I would aver – hasn’t changed since March: it was maybe then, and it is maybe now. Some years are too close to call and 2020 was one such.
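To see why, here is a sketch with made-up numbers (they are not the actual estimates for 2020, 2016 or any other year): when the gap between two years’ central estimates is small compared with the measurement uncertainty, the odds that the nominally warmer year really was warmer hover close to 50:50.

```python
# Illustration of "too close to call", using invented numbers rather than
# any real data set's estimates.
from math import erf, sqrt

gap = 0.01    # hypothetical difference between two years' anomalies (degC)
sigma = 0.05  # hypothetical 1-sigma uncertainty on that difference (degC)

# Probability that the nominally warmer year really was warmer, treating
# the error on the difference as roughly Gaussian.
p_warmer = 0.5 * (1 + erf(gap / (sigma * sqrt(2))))
print(f"P(nominally warmer year really was warmer) = {p_warmer:.2f}")  # ~0.58
```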

Continue reading