Open Mind

GHCN: preliminary results

February 23, 2010 · 27 Comments

I’ve already mentioned how I’m processing the GHCN raw data. So far I’ve only computed results for grids in the northern hemisphere, one third of the way around the globe, heading east from the prime meridian. It’s less than half the globe, but I thought I’d advise readers of the results so far.


I combined the individual grids into an area-weighted average. All the grids are 10 degrees (of latitude) tall except the northernmost, which are 20 degrees tall. But rather than count those grids as 20 degrees tall when computing area, I only counted them 10 degrees tall. After all, there are few stations north of latitude 80N and they generally don’t extend far above 80N, so this will slightly underestimate the areas of the northernmost grids rather than overestimate them.
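A rough sketch of this area weighting (my illustration, not Tamino's actual code): the area of a latitude band on a sphere is proportional to the difference of the sines of its bounding latitudes, and the 70N–90N grids are weighted as if they stopped at 80N.

```python
import numpy as np

def band_weight(lat_south, lat_north):
    # Area of a latitude band on a sphere is proportional to
    # sin(northern edge) - sin(southern edge).
    rad = np.pi / 180.0
    return np.sin(lat_north * rad) - np.sin(lat_south * rad)

# 10-degree bands from the equator to 70N, plus the northernmost
# grids (70N-90N) counted as if they covered only 70N-80N.
edges = [(s, s + 10) for s in range(0, 70, 10)] + [(70, 80)]
weights = np.array([band_weight(s, n) for s, n in edges])
weights /= weights.sum()  # normalize so the weights sum to 1
```

Note that counting the polar grids as 70N–80N only shrinks their weight slightly, since the sine changes little near the pole anyway.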

These averages cannot be considered as high quality as the data produced by GISS or HadCRU, for many reasons. For one thing, my grids are large so the gridding is much more coarse. For another thing, I’ve just taken the GHCN raw data as is, I haven’t checked for discontinuities or outliers, and I haven’t applied any adjustments. Some adjustments accentuate a warming trend (like time-of-observation bias) while others reduce it (like UHI adjustments), but denialists generally criticize all adjustments roundly, and one of my purposes is to see what you get without any.

It should also be remembered that the averages I get are for land only (no sea surface temperatures) and for the northern hemisphere only (so far!). Land warms faster than sea, and the northern hemisphere faster than the southern, so we should expect faster warming from what I’ve got so far than for the globe as a whole with proper adjustments and a finer grid (like GISS or HadCRU).

I also want to address the issue of station dropout. It has been suggested that the large reduction in reporting stations, which culminated in 1992, preferentially retained urban stations; that is simply false. It’s also been suggested that it preferentially retained warm stations and that this has introduced an artificial warming trend, which is nonsense. But let’s see how the stations which were retained compare to those which were not. Hence I’ve also computed, for each grid, separate time series using only those stations which dropped out by 1992 (when the station dropout was pretty much complete), and using only those which continued reporting after that. In addition to separate averages for each grid, I’ll also be computing separate “pre-cutoff” and “post-cutoff” area-weighted averages of the grids. The result is monthly averages for the entire region processed so far, but I’ll plot annual averages to make the graphs clearer. So, here are the results using just the grids I’ve processed so far.
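A minimal sketch of that pre/post-cutoff split (the data layout here is hypothetical; GHCN records would need to be parsed into it first):

```python
def split_by_cutoff(stations, cutoff_year=1992):
    """Partition stations into those that dropped out by the cutoff
    and those that kept reporting afterward.
    stations: dict mapping station id -> {(year, month): temperature}."""
    pre, post = {}, {}
    for sid, records in stations.items():
        last_year = max(year for (year, month) in records)
        target = post if last_year > cutoff_year else pre
        target[sid] = records
    return pre, post
```

Each half is then gridded and averaged exactly as before, giving two parallel temperature histories to compare.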

For the grand (area-weighted) average here’s the time series of annual averages:

We see a pattern which is similar to that for the globe as a whole. Since 1975, this limited result shows warming at 0.0365 deg.C/yr, fully twice the rate for the global land+ocean average.
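The rate quoted is an ordinary least-squares slope of the annual averages since 1975; a minimal sketch (assuming plain OLS, which is how such trend rates are usually computed):

```python
import numpy as np

def annual_trend(years, anomalies):
    """Least-squares linear trend of annual anomalies, in deg C per year."""
    slope, intercept = np.polyfit(years, anomalies, 1)
    return slope
```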

Here’s the comparison of the temperature history using just pre-cutoff data (in blue) to that using just post-cutoff data (in red):

The two sets give very similar results. We can also compute the difference between the temperature histories using pre- and post-cutoff data during their time of overlap. Here’s the post-cutoff data series minus the pre-cutoff data series:
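Computing that difference series is just a subtraction over the years both sets cover; a sketch (assuming annual anomalies keyed by year):

```python
def difference_series(post_cutoff, pre_cutoff):
    """Post-cutoff minus pre-cutoff anomaly for each year of overlap.
    Both arguments map year -> annual anomaly (deg C)."""
    overlap = sorted(set(post_cutoff) & set(pre_cutoff))
    return {year: post_cutoff[year] - pre_cutoff[year] for year in overlap}
```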

There are some differences, notably from about 1910 to 1930, but there’s no significant difference in the overall trend between the two sets, or between the modern trends (up to 1992, when the pre-cutoff data end).

Of course, I’ll continue processing grids until I cover the entire GHCN, and report the results for the entire globe. But so far, the results flatly contradict the claim that the station dropout has introduced an artificial trend into the global temperature time series. We are not surprised.

Categories: Global Warming

27 responses so far

  • J // February 23, 2010 at 2:41 am

    Nice work! I get the feeling that you’re having fun with this. I hope you’ll keep it up. The comparison of dropout data is fantastic.

  • carrot eater // February 23, 2010 at 3:29 am

    Can you clarify which method for combining stations you finally settled on? Your own optimal method, or one of the other ones?

    And at the risk of being pedantic, the time of observation adjustment can go either direction.

    The worst outliers are already removed from v2.mean anyway.

    [Response: I used the optimal combination method. Yes TOB adjustments can go either way, but my understanding is that in practice they predominantly increase the warming trend.]
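Tamino describes his “optimal” combination method in an earlier post; as a generic illustration of the offset-alignment idea behind such methods (my sketch, not necessarily his exact algorithm), one can iteratively estimate a constant offset per station and average the aligned series:

```python
import numpy as np

def combine_stations(data, iters=50):
    """Combine overlapping station series by estimating a constant
    offset per station, then averaging the aligned series.
    data: array (n_stations, n_months), NaN marks missing values."""
    offsets = np.zeros(data.shape[0])
    for _ in range(iters):
        # Merged series given current offsets, then re-estimate offsets.
        merged = np.nanmean(data - offsets[:, None], axis=0)
        offsets = np.nanmean(data - merged[None, :], axis=1)
        offsets -= offsets.mean()  # pin down the free overall constant
    return np.nanmean(data - offsets[:, None], axis=0)
```

The absolute level of the result is arbitrary (only one constant is pinned down), which is why such series are reported as anomalies.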

  • Joseph // February 23, 2010 at 3:52 am

    Yep, that looks pretty fun.

    But rather than count those grids as 20 degrees tall when computing area, I only counted them 10 degrees tall.

    Is there a good reason for that? The tallness of the grid row probably doesn’t matter too much except in terms of grid resolution. And if there aren’t many stations there, I can see why you want to do that. But then, grid cells have different surface areas obviously, so I imagine temperature means are weighted for surface area. The warming in the northern-most and southern-most latitudes won’t get counted to the extent it should.

    Oh, I think I see why you do that. Since there aren’t many stations at those locations, those huge grid rows might introduce too much noise.

    That almost argues for splitting the grid in such a way that each grid cell has approximately the same number of stations in it.

  • mazibuko // February 23, 2010 at 2:30 pm

    Interesting stuff. I wonder if you saw this yet?

    http://wattsupwiththat.com/2010/02/20/spencer-developing-a-new-satellite-based-surface-temperature-set/

Roy Spencer was surprised that station dropout didn’t increase the warming trend.

    Be interested to hear your thoughts on this…

    Thanks.

    • Gavin's Pussycat // February 23, 2010 at 5:35 pm

      I’m surprised that Spencer is surprised… he should know better. Moving in the circles he moves in is clouding his judgement, it seems.

      And taking d’Aleo-Watts seriously is a travesty. Come on Roy!

      I’m a bit divided about this. We need Spencer and Christy to remain nominally credible (or should I say, visibly sane) as witnesses that the UAH record should be taken seriously as one independent instrumental time series. The way this is going, we may be losing them :-(

      • Ray Ladbury // February 23, 2010 at 8:24 pm

        Just curious if anyone has commented in detail on Spencer’s AGU presentation? He seems to want to change the definition of sensitivity.

    • Marco // February 23, 2010 at 5:42 pm

      The hilarious part is that Watts, who hosts this piece by Spencer, claimed, not even a month ago, that there was a deliberate removal of stations to obtain a higher warming trend.

  • Andy // February 23, 2010 at 4:53 pm

    You may or may not have time to publish some of your novel research. So, how will future journal publications cite blog posts? Have you or others given any thought to this?

  • Geoff // February 23, 2010 at 7:59 pm

    How are you doing the area weighting Tamino? Does each grid box get weighted by its land surface area? By its total surface area? Or by something else?

    Your comment about the northernmost grid boxes seems to mean that everything north of 70 N effectively gets half the proper weighting – is that right?

    [Response: Grids are weighted by their total surface area. Grids 70N-90N are only weighted according to the area from 70N-80N. So few stations are north of 80N, and even those are not far north of 80N, so I only counted them as covering 70N-80N.]

  • AndyB // February 23, 2010 at 8:02 pm

    The curve is fascinating. I’d love to see it snipped out according to the early 20th-century warming, the post-war period, and the modern warming period a la your post from a year or so ago on the trends in GISS, CRU, and NCDC. I’m particularly interested in the lack of the 1945 decline in temps a la Thompson et al’s Nature paper from 2008. I know the post-1945 drop was an SST-driven blip in the global record but the lack of it shows up very nicely in your NH land graph. Keep it up!

  • Marcus // February 24, 2010 at 7:10 pm

    So, do you include any of the following adjustments from the following figure? http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif

    If not (and I think you don’t), and if one assumes that the adjustments are legitimate, then would the implication be that you’re underestimating warming by about 0.25 degrees C?

    [Response: I'm using the GHCN raw data. No adjustments.]

    • carrot eater // February 24, 2010 at 11:31 pm

      That plot is limited to the US, anyway.

      If globally, errors are random, then you don’t actually need to make any adjustments; they’ll all cancel out. As of now, that’s roughly what happens in GHCN.

      Errors can be non-random if they correlate: if a bunch of stations all move from city to rural at the same time (cooling), if a bunch of stations all have cities grow up around them (warming), a concerted shift in observation time in a country (in the US, cooling), a widespread change in shelter or instrument type, etc.

      In the US, these add up to a cooling bias, but don’t assume that’s true globally.

      • Marcus // February 25, 2010 at 5:26 am

        Ah, thank you for that clarification. I don’t suppose that there exists a plot for GHCN the way there does for USHCN?

        One of these days I should go look at the raw data and especially the metadata, so I can understand how the various TOBS and such adjustments can be made…

        -Marcus

    • carrot eater // February 25, 2010 at 12:19 pm

      Well, first you’ll be surprised to know that your plot for the US is outdated anyway. The TOB step remains, as does (I think) FILNET, but SHAP and MMTS are gone. The latter two are replaced by a method described by Menne and Williams (2009), and the code is available online.

      The USHCN is helpful in that it provides data for each station raw, after TOB, and final, so you can specifically see what TOB does.

      Now, for GHCN, they don’t do anything that requires metadata. So no TOB step, nor anything like the erstwhile MMTS step.

      The closest graphic easily available for GHCN is under Question 4, here.
      http://www.ncdc.noaa.gov/cmb-faq/temperature-monitoring.html

      But they’re in the process of applying Menne/Williams to the GHCN as well, so you can expect some things to change a bit as they do that. Since they’re changing it anyway, I personally wouldn’t invest a ton of time into learning about the current GHCN adjustments, unless you’re really curious.

      • Marcus // February 25, 2010 at 3:31 pm

        Ah. Thanks again. Unfortunately, I may need, for work reasons, to delve deeper into these GHCN questions… though it’s possible that the responsibility will fall to other people in my group (or that my group will be able to outsource this directly to NOAA and/or NASA).

        -Marcus

  • dko // February 25, 2010 at 1:45 am

    This may be a bit off-topic — but why not use standard GIS software to display, interpolate, and analyze data? The one I use has a wide variety of interpolation techniques, projections, and statistical functions built in. You can clip to any boundary you like, including country boundaries or lat/lon grids. I use a desktop GIS but there are Web server applications available.

    Surely this would greatly simplify your project. The complex coding has been done for you and they are designed to handle huge databases.

  • Gavin's Pussycat // February 25, 2010 at 1:16 pm

    dko: but this project isn’t about getting the results, but about demonstrating how it really works and how straightforward this is. Like a clockwork where you can see all the camwheels etc.

    Do you seriously expect denialists to believe anything coming out of something as complicated as GIS software?

  • Halldór Björnsson // February 26, 2010 at 8:55 am

    dko: Even if you wanted to interpolate this using your favorite GIS routine, you still would have to deal with missing stations, gaps etc. The standard way to do temperature interpolations is to interpolate the residuals from a model (typically a linear model with altitude, latitude, etc). In building the model you really have to get to grips with all sorts of station peculiarities (wrong altitude is a favorite).

    But you are correct that once you have the interpolated map calculating the average is really easy. I don’t use GIS, but I am sure there is a routine for areal averaging.

  • drj11 // February 26, 2010 at 4:05 pm

    Great!

    I recently completed a parallel analysis for the entire globe using Clear Climate Code’s ccc-gistemp. Enjoy.

    • carrot eater // February 26, 2010 at 4:13 pm

      Awesome. For those sticklers who think that somehow, using the actual GISS algorithm would make some big difference, there they go.
