

Verification of the Climate Model

By which I mean checking that the basics work OK.

There is some new stuff on the 64-bit opteron cluster.

Otherwise, all the stuff below is what we've had running for a while: hadXm3 on the minifer athlon cluster.

Coupled model

Bit comparison... yeah yeah, it does, at least within MPP. Ask if you're interested, I might add it.

See verify for confirmation that the coupled model is OK (at 64 bit).

Old verification: look at yabeo if you really must but note problem with increasing GHGs in control.

Atmos-only

Bit comparison

Bit comparison is a very basic test of the model computation. It checks that the results are *exactly* the same, down to the last bit, whether the model is run with different numbers of processors or with non-MPP code. Of course, 64-bit and 32-bit results don't intercompare. Note that some scientific sections offer a choice of "fast but non-comparable" or "comparable" code: these are scientifically equivalent. Sometimes the ones marked "fast but non-comparable" do actually compare anyway. The model bit-compares completely at 64-bit under Fujitsu. It compares slightly less completely at 32-bit or under Portland.

In more detail:

On one processor, the runs yaaug (MPP, 1 processor) and yaauf (non-MPP) bit-compare. With an extra mod (qtpos1a.upd) the model bit-compares on up to 4x4 processors. A slightly more modern job, yabba, also compares.

Note that all the prognostic fields compare (as they must). Two diagnostics don't compare perfectly: "Field 176 : Stash Code 4204 : LARGE SCALE SNOWFALL RATE KG/M2/S" and another similar one, whose name I forget. The problem occurs at the South Pole.

Runs also compare across CRUNs.
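A check like this is easy to automate: compare the output dumps from two runs byte-for-byte. A minimal sketch (the dump file names here are made up, not the real UM output names):

```python
# Sketch of an automated bit-comparison of two model dumps.
# The file names (yaaug.dump, yaauf.dump) are hypothetical.
import filecmp
import os
import tempfile

def bit_compare(dump_a: str, dump_b: str) -> bool:
    """True iff the two files are byte-for-byte identical."""
    return filecmp.cmp(dump_a, dump_b, shallow=False)

# Throwaway demo: two identical fake dumps should compare.
with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "yaaug.dump")
    b = os.path.join(d, "yaauf.dump")
    for p in (a, b):
        with open(p, "wb") as f:
            f.write(b"\x00" * 1024)
    identical = bit_compare(a, b)

print(identical)  # True
```

The `shallow=False` matters: the default would trust matching size/timestamp metadata rather than comparing contents.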

Is the control climate equal at 32/64 bit and across compilers?

This is a bit more interesting than bit comparison. To check it, the model needs to be run for long enough to tell whether the two climates are statistically indistinguishable. Something like 30+ years is needed to do this well.
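The "statistically equal" test amounts to a two-sample test on (say) 30 annual means from each run. A pure-stdlib sketch of Welch's t statistic, using the normal critical value 1.96 as a slight approximation to the t distribution at this sample size (the series here are made up for illustration):

```python
# Sketch: are two ~30-year annual-mean series statistically indistinguishable?
# Welch's t statistic; 1.96 is the two-sided 5% normal critical value,
# a slight approximation to the t distribution at ~30 samples.
from statistics import mean, variance

def welch_t(x, y):
    nx, ny = len(x), len(y)
    return (mean(x) - mean(y)) / (variance(x) / nx + variance(y) / ny) ** 0.5

def indistinguishable_at_5pc(x, y):
    return abs(welch_t(x, y)) < 1.96

# Fake demo data: a control series, and one shifted by an obvious offset.
ctl = [float(i % 5) for i in range(30)]
pert = [v + 10.0 for v in ctl]
same = indistinguishable_at_5pc(ctl, list(ctl))   # identical climates
diff = indistinguishable_at_5pc(ctl, pert)        # clearly different
```

In practice this test is applied gridpoint-by-gridpoint, which is what produces the masked significance maps below.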

Checking 32 vs 64 bit (within Fujitsu)

This seems to be OK. See pictures below for 2 basic variables, MSLP (the most basic) and temperature at 300 hPa (chosen to be influenced by radiation).

See yabfe for more information on the exact run used.

In the pictures below, areas shaded-over are considered not-statistically-significantly-different.

[Figures: MSLP difference maps (JJA, DJF); 300 hPa temperature difference maps (JJA, DJF)]

Checking Fujitsu vs Portland

It would be nice to think that different compilers gave the same results. This appears to be close to, but not quite, true.

Here I'll compare the Portland 32-bit model to the Fujitsu 32-bit one. I haven't run Portland at 64-bit.

See yabcf for the Portland 32-bit run.

[Figures: MSLP difference maps (JJA, DJF); 300 hPa temperature difference maps (JJA, DJF)]
For MSLP, things appear to be OK.

For T300, the *differences* between yabcf and fe are quite small, about 1 °C at most. This is certainly smaller than, for example, the model's error compared to, say, ERA.

However, the unmasked (i.e. significant) fractions of the area are rather large:

     Sig level	DJF       	JJA
     ---------	----------	----------
     0.01     	0.307938  	0.261289
     0.05     	0.478800  	0.376754
     0.10     	0.579491  	0.458256
     0.50     	0.853889  	0.773654
and are clearly greater than chance. So it looks like Portland and Fujitsu *don't* fully compare.

However, if we look at errors against reanalyses, we see that the Fujitsu-Portland differences are about 1/5 of the model-observation differences.
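The fractions in the table are area fractions, which on a lat-lon grid means cos(latitude) weighting rather than a raw gridpoint count. A sketch of that calculation (the tiny grid and p-values here are invented for illustration):

```python
# Sketch: area-weighted fraction of grid points flagged significant,
# cos(latitude)-weighted as is usual on a lat-lon grid.
# The grid and p-values below are made up for illustration.
import math

def significant_fraction(pvals, lats_deg, alpha=0.05):
    """pvals[i][j]: p-value at latitude lats_deg[i], longitude index j."""
    num = den = 0.0
    for row, lat in zip(pvals, lats_deg):
        w = math.cos(math.radians(lat))
        for p in row:
            den += w
            if p < alpha:
                num += w
    return num / den

# 2 latitudes x 2 longitudes; one significant point per latitude row.
frac = significant_fraction([[0.01, 0.50], [0.01, 0.50]], [0.0, 60.0])
print(frac)  # 0.5: half the area is significant at the 5% level
```

Without the weighting, the polar rows (where the snowfall-rate comparison problem above lives) would be heavily over-counted.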

[Figures: model minus reanalysis differences (JJA, DJF)]

Checking against the Hadley Centre

Here I have no hadAM3 repeating-SST control, so I've used aawei, which is an AMIP-II run, and to compare I have yabha and yabhb, two ensemble members of my AMIP-II runs. The errors are now larger, because the run period is much shorter (only 16 years), but the end result *appears* to be OK, at least as far as MSLP is concerned.

Note, in particular, the large +ve diff in DJF between yabha/b over the North Pacific: if this were in the differences against the Hadley version it would look suspiciously large, but because it's in a-b we see that it is *not* significant. And indeed that's what the stats say.

Note that yabce (which these are children of) has a change to the Fourier filtering that might have caused some effects; it doesn't show up here, though.

[Figures (JJA and DJF each): MSLP, yabha against AAWEI; MSLP, yabhb against AAWEI; MSLP, yabha against yabhb]


Page last modified: 6/12/2004   /   wmc@bas.ac.uk

© Copyright Natural Environment Research Council - British Antarctic Survey 2002