The Granularity of Climate Models

As an engineer schooled in the slide rule era, I have been trying to educate myself about the nature of climate models. The details of how the models are actually constructed have been hard to find. I am old enough to have lived through the evolution of computers and calculators, from those earliest HP scientific models that cost $400 in 1972 dollars. Computers are not electronic brains -- they are just very fast electronic slide rules and adding machines. So I want to figure out the thinking process behind the models.

As a summer intern in the thermodynamics department at Grumman Aerospace, I did computer modeling of the air conditioning system of the A-6E aircraft. Getting that program to run took a lot of debugging, but when it finally started to go, it used two minutes of computation time on Grumman's biggest computer before the time allotted to a summer intern expired. They needed that computer for other things, such as landing men on the moon! The summer intern's project would have to wait. But I do have an informed curiosity about the creation of these climate models. It's not a pretty picture.

Now that I have found some of the details, I think I can provide a layman's guide to the developing issues with the models. Much of the current debate centers on the validity of the surface temperature record.

Historic surface temperature data is maintained by a few institutions, one of which is the Climatic Research Unit (CRU) at the University of East Anglia, the home of the now-infamous Dr. Phil Jones. The CRU produces its global surface temperature record jointly with the Hadley Centre of the British Meteorological Office (the Met Office). The unit was formed because weather is not climate, and the climatologists felt the need to separate the two. The layman should note that climate models are very different from the weather models that you've seen on TV. Given that the Met Office is the CRU's partner in that record, its veiled rebuke of the climatologists should be taken very seriously. Its very first recommendation is jaw-dropping:

The proposed activity would provide:

1. Verifiable datasets starting from a common databank of unrestricted data at both monthly and finer temporal resolutions (daily and perhaps even sub-daily) ...

Let us take up the Met Office's challenge and see if we can figure out how to create that "common databank of unrestricted data" as of Day One, Hour One. Here is a description of the Hadley HadCM3 coupled atmosphere-ocean general circulation model.
Step 1: The Atmosphere Model

Atmosphere model (HadAM3)

HadAM3 is a grid point model and has a horizontal resolution of 3.75×2.5 degrees in longitude × latitude. This gives 96×73 grid points on the scalar (pressure, temperature and moisture) grid; the vector (wind velocity) grid is offset by 1/2 a grid box. This gives a resolution of approximately 300 km, roughly equal to T42 in a spectral model. There are 19 levels in the vertical.

Okay, we are going to start by building a three-dimensional matrix (longitude x latitude x altitude) of three variables (pressure, temperature, and moisture) that is 96 x 73 x 19. The results are typically displayed for the surface level only, a 96 x 73 array, which means that we need 7,008 surface temperatures. That would be fine, except that the maximum number of surface temperature stations in the databases was about 6,000, and that has been culled to only about 1,200. You can read about it in this paper by Joe D'Aleo and Anthony Watts. Now, it turns out that the Hadley database is for land-based surface temperatures only, so we might reduce our demands from 7,008 surface temperatures to about 2,100, because the land surface of the earth is about 30% of the total. The existing 1,200 surface stations are only a bit more than half of the land total required.
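For readers who want to check the arithmetic, here is a short Python sketch. The grid dimensions come from the HadAM3 description above; the 30% land fraction and the station counts are the rough figures used in this article.

    # Back-of-the-envelope arithmetic for the HadAM3 grid described above.
    LON_POINTS, LAT_POINTS, LEVELS = 96, 73, 19

    surface_boxes = LON_POINTS * LAT_POINTS     # 7,008 surface grid points
    total_cells = surface_boxes * LEVELS        # 133,152 cells in the 3-D grid

    LAND_FRACTION = 0.30                        # rough share of Earth that is land
    land_boxes = surface_boxes * LAND_FRACTION  # ~2,102, i.e., "about 2,100"

    STATIONS = 1200                             # stations remaining after the culling
    coverage = STATIONS / land_boxes            # ~0.57, "a bit more than half"

    print(f"surface grid points: {surface_boxes}")
    print(f"3-D grid cells:      {total_cells}")
    print(f"land boxes needed:   {land_boxes:.0f}")
    print(f"station coverage:    {coverage:.0%}")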

Eminent climatologist Dr. Roy Spencer has prepared a map of the distribution of reporting stations, showing visually the paucity of reporting stations in the Southern Hemisphere. Note that for practical purposes, the continents of South America, Africa, Australia, and Antarctica are missing. And we haven't even touched on the 70% of the earth's surface covered by oceans.


So right off the bat, we find an abject failure in the adequacy of data collection. If the 1,200 stations were evenly redistributed, the area of the average box would have to nearly double. The total number of grid boxes is an indication of the grid's granularity: the fewer the boxes, the more granular the model becomes.

To match the number of grid boxes to the number of stations, we would need to shrink the grid to an 80-by-50 matrix. This would form 4,000 boxes -- 1,200 on land and 2,800 on the sea. The grid box size would increase from 3.75 degrees of longitude by 2.5 degrees of latitude to 4.5 degrees of longitude by 3.6 degrees of latitude.
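The rescaling is simple arithmetic. Here is a sketch; the 80-by-50 factoring is my own choice to make the box count come out to exactly 4,000, not anything Hadley actually uses.

    # Shrink the grid until the land boxes match the ~1,200 stations.
    STATIONS = 1200
    LAND_FRACTION = 0.30

    total_boxes = STATIONS / LAND_FRACTION      # 4,000 boxes in all
    land_boxes = total_boxes * LAND_FRACTION    # 1,200 on land
    sea_boxes = total_boxes - land_boxes        # 2,800 on the sea

    # One convenient factoring of 4,000 boxes is an 80-by-50 grid.
    lon_points, lat_points = 80, 50
    lon_step = 360 / lon_points                 # 4.5 degrees of longitude per box
    lat_step = 180 / lat_points                 # 3.6 degrees of latitude per box

    print(f"{total_boxes:.0f} boxes -> {lon_points} x {lat_points} grid")
    print(f"box size: {lon_step} deg lon x {lat_step} deg lat")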

To give the reader an idea of how big the box has become, consider that Robert T. Merrill puts the median radius of an Atlantic hurricane at 2.4 degrees of latitude. The new grid box is now so big that it could nearly swallow the median-sized Atlantic hurricane whole, all the way out to the radius of the outermost closed isobar. This would reduce an Atlantic hurricane to just one average temperature, pressure, and humidity value in the model. That would be absurd on its face.
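To put that in distance terms, here is a rough conversion, assuming about 111 km per degree of latitude and evaluating the box near Miami's latitude (my choice, purely for scale):

    import math

    KM_PER_DEG_LAT = 111.0      # roughly 111 km per degree of latitude

    # Merrill's median Atlantic hurricane: 2.4-degree radius to the
    # outermost closed isobar.
    hurricane_diameter_km = 2 * 2.4 * KM_PER_DEG_LAT    # ~533 km

    # The coarsened 4.5 x 3.6 degree box, evaluated near Miami's latitude
    # (longitude degrees shrink with the cosine of latitude).
    lat = 26.25
    box_width_km = 4.5 * KM_PER_DEG_LAT * math.cos(math.radians(lat))  # ~448 km
    box_height_km = 3.6 * KM_PER_DEG_LAT                               # ~400 km

    print(f"median hurricane diameter: {hurricane_diameter_km:.0f} km")
    print(f"coarse grid box: {box_width_km:.0f} km x {box_height_km:.0f} km")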

An existing grid box in the Hadley climate model is 3.75 degrees in longitude by 2.5 degrees in latitude. Let's forget the quibbling argument over whether the Urban Heat Island Effect is biasing the results and look at the global question of whether any one set of single values of temperature, pressure and moisture adequately describes a grid box.

I've selected the grid box that includes Miami, Florida. Here are the reference longitudes and latitudes for its corners and for its geographic center:

NW Corner: Long -83.75, Lat 27.50
SW Corner: Long -83.75, Lat 25.00
NE Corner: Long -80.00, Lat 27.50
SE Corner: Long -80.00, Lat 25.00
Center: Long -81.875, Lat 26.25
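Those corners follow mechanically from the grid spacing. As a sketch, here is a small Python function that generates the corners and center of any such box from its southwest corner; the function name and indexing convention are mine for illustration, not Hadley's:

    # Corners and center of a 3.75 x 2.5 degree HadAM3-style grid box,
    # given its southwest corner.
    LON_STEP, LAT_STEP = 3.75, 2.5

    def box_bounds(west_lon, south_lat):
        east_lon = west_lon + LON_STEP
        north_lat = south_lat + LAT_STEP
        return {
            "NW Corner": (west_lon, north_lat),
            "SW Corner": (west_lon, south_lat),
            "NE Corner": (east_lon, north_lat),
            "SE Corner": (east_lon, south_lat),
            "Center": (west_lon + LON_STEP / 2, south_lat + LAT_STEP / 2),
        }

    # The box containing Miami, reproducing the table above:
    for name, (lon, lat) in box_bounds(-83.75, 25.00).items():
        print(f"{name:>9}: Long {lon:9.3f}  Lat {lat:5.2f}")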

This website allows us to find data within a given radius of the geographic center. Using Lat 26.25N and Long 81.875W and a distance of 100 NM gives us a list of stations safely within this one climate grid box. That is a circle 200 NM (230 statute miles) in diameter.

This website gives us a satellite view of the area, sized to approximate the margins of the grid box. The site lets the user move the cursor and read off the lat/long, so one can outline the grid box for oneself.

1) The first challenge is to figure out whether this grid box belongs in the land-based dataset or in the ocean dataset. Strictly on an area basis, it seems that there is a bit more ocean than there is land. Following that logic, the residents of Miami are the 21st century's residents of Atlantis.

2) The water surface topography includes, but is not limited to, the Gulf Stream, the Everglades and the Gulf of Mexico. Each brings different characteristics to the Incoming Solar Radiation/Outgoing Thermal Radiation balance. Sunlight can penetrate to much greater depths in the relatively still, clear, and deep Gulf of Mexico. Its penetration in the Everglades is limited by the opacity of the water and its shallow depth. And the Gulf Stream is a conveyor of stored heat collected elsewhere in the tropics and released in Florida. The water surface temperature will be different for all three, especially over a twenty-four-hour span where the heat capacity of the water creates the land breeze/sea breeze cycle.

Trying to distill all that texture into one grain of data in the global climate model is a fool's errand.

It is because of all those gaps in the existing data collection that climatologists chose to start doing data "homogenization." They needed numbers to put in the empty grid boxes, so they averaged values from surrounding grid boxes (which could be hundreds of miles away) and filled the empty boxes with those averages, as sketched below.
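Here is a deliberately naive Python sketch of that kind of infilling. Real homogenization procedures are more elaborate than this neighbor-averaging toy, but the spirit -- empty boxes inheriting averages from whatever boxes around them have data -- is the same.

    import numpy as np

    def fill_empty_boxes(grid):
        """Fill empty (NaN) grid boxes with the mean of their neighbors.

        A deliberately naive stand-in for infilling: each empty box
        simply inherits the average of the surrounding boxes that
        actually have data.
        """
        filled = grid.copy()
        rows, cols = grid.shape
        for i in range(rows):
            for j in range(cols):
                if np.isnan(grid[i, j]):
                    block = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                    if not np.all(np.isnan(block)):
                        filled[i, j] = np.nanmean(block)
        return filled

    # A tiny example grid with two empty boxes (NaN):
    g = np.array([[20.0, 21.0, np.nan],
                  [19.0, np.nan, 22.0],
                  [18.0, 20.0, 21.0]])
    print(fill_empty_boxes(g))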

But the climatologists have attempted to make up for the coarseness of their spatial grid with vigorous temporal calculation. The description of the model continues: "The timestep is 30 minutes (with three sub-timesteps per timestep in the dynamics)."

In plain English, that means that when they run the program, the dynamics are recomputed at a ten-minute interval. That's 144 dynamic calculations per day (6 per hour x 24 hours per day), based on a dataset that the Met Office someday hopes to have available at a monthly (or maybe even daily) temporal resolution.
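The arithmetic behind that count is trivial:

    TIMESTEP_MIN = 30       # model timestep, in minutes
    SUB_STEPS = 3           # dynamics sub-timesteps per timestep

    sub_step_min = TIMESTEP_MIN / SUB_STEPS     # 10-minute dynamics interval
    per_day = 24 * 60 / sub_step_min            # 144 dynamics calculations per day
    print(f"{sub_step_min:.0f}-minute sub-steps, {per_day:.0f} per day")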

It is as though they believe the accuracy of the model increases with the number of calculations performed. They didn't want to increase the number of grid boxes, so they recalculated the same data more frequently. If an accurate database and model were created, they could be validated once a day: put in the data for Day One, run the model through its 144 daily calculations, and compare the result to the data collected on Day Two. Currently, that accurate dataset does not exist.
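In outline, that once-a-day validation would look something like the following sketch. Everything in it is a toy stand-in of my own: the "model" just warms every box slightly each sub-step, and the "observations" are flat fields, since the verified dataset that would actually drive such a test does not exist.

    import numpy as np

    def validate_one_day(step_model, day_one_obs, day_two_obs, sub_steps=144):
        """Run a model forward through one day of ten-minute sub-steps,
        then report the RMS error against the next day's observations."""
        state = day_one_obs.copy()
        for _ in range(sub_steps):
            state = step_model(state)
        return np.sqrt(np.mean((state - day_two_obs) ** 2))

    def toy_model(state):
        return state + 0.001    # a fake "model" that warms 0.001 deg per sub-step

    # Flat 15 C and 15.1 C fields on the 73 x 96 HadAM3 surface grid:
    day_one = np.full((73, 96), 15.0)
    day_two = np.full((73, 96), 15.1)
    print(f"RMS forecast error: {validate_one_day(toy_model, day_one, day_two):.3f}")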

It is easy to see why climatologists trained on electronic calculators put so much faith in their computer models: the models do millions of calculations, far beyond what one person could do with a slide rule or calculator. But garbage in still yields garbage out. This fascination with an astronomical number of calculations reminds me of George Santayana's definition of a fanatic:

Fanaticism consists in redoubling your effort when you have forgotten your aim.

 - George Santayana, The Life of Reason (1905), vol. 1, Introduction

I'll leave it to the readers to decide if they believe AGW alarmists to be fanatics.