How bad are these global forecast models?
When the same model code with the same data is run in a different computing environment (hardware, operating system, compiler, libraries, optimizer), the results can differ significantly. So even if reviewers or critics obtained a climate model, they could not replicate the results without knowing exactly what computing environment the model was originally run in.
This raises a telling question: what kind of planet do we live on? Do we have an Intel Earth or an IBM one? It matters. They get different weather, apparently.
There is a chaotic element (or two) involved, and the famous random butterfly effect on the planet’s surface is also mirrored in the way the code is handled. There is a binary butterfly effect. But don’t for a moment think that this “mirroring” is useful: these are different butterflies, and two random events don’t produce order, they produce chaos squared.
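To see the binary butterfly in miniature, here's a toy Python sketch (mine, not code from the study): the logistic map is a textbook chaotic system, and nudging its starting value by about the size of one double-precision rounding error soon produces a trajectory that bears no resemblance to the original.

```python
# Two runs of the logistic map x -> r*x*(1-x), a standard chaotic toy
# system, differing only by a 1e-15 nudge in the starting value --
# roughly the size of a double-precision rounding error.
x, y = 0.4, 0.4 + 1e-15
max_gap = 0.0
for _ in range(100):
    x = 3.9 * x * (1.0 - x)
    y = 3.9 * y * (1.0 - y)
    max_gap = max(max_gap, abs(x - y))

# The gap between the two trajectories grows from 1e-15 to order one.
print(max_gap)
```

That is the whole point about rounding differences between compilers and libraries: in a chaotic system there is no such thing as a difference too small to matter.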
How important are these numerical discrepancies? Obviously, they undermine our confidence in climate models even further. We can never be sure how much the rising temperatures in a model's forecasts might change if we moved it to a different computer. (Though, since we already know the models use the wrong assumptions about relative humidity, and are proven wrong, missing hot-spot an' all, this is like adding sour cream to a moldy cake. We were never going to eat it anyway.)
This is what 90% certain looks like.
The cheapest way to lower global climate sensitivity might be to switch operating systems in climate lab computers.
The monster complexity of climate models means they never had a chance of actually solving the climate in the foreseeable future, but that makes them ideal for issuing unverifiable pronouncements from The Mount. At the same time, it cultivates the endless grant-getting cash cow, always in need of bigger computer arrays: more code, more conferences, more computers!
This study presents the dependency of the simulation results from a global atmospheric numerical model on machines with different hardware and software systems. The global model program (GMP) of the Global/Regional Integrated Model system (GRIMs) is tested on 10 different computer systems having different central processing unit (CPU) architectures or compilers. There exist differences in the results for different compilers, parallel libraries, and optimization levels, primarily due to the treatment of rounding errors by the different software systems. The system dependency, which is the standard deviation of the 500-hPa geopotential height averaged over the globe, increases with time. However, its fractional tendency, which is the change of the standard deviation relative to the value itself, remains nearly zero with time. In a seasonal prediction framework, the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.
Computers set up in large parallel clusters can produce different results through many means:
Massively parallel computing cluster computers, which contain many networked processors, are commonly used to achieve superior computational performance for numerical modeling. The message-passing interface (MPI) is the most commonly used package for massively-parallel processing (MPP). The MPI parallel codes of an atmospheric model may not produce the same output if they are ported to a new software system which is defined as the computational platform that includes the parallel communication library, compiler, and its optimization level in this study.
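A tiny Python sketch of the mechanism (my illustration, not the paper's code): floating-point addition is not associative, so a reduction that combines partial sums from different ranks in a different order can legitimately return a different number from identical data.

```python
# Floating-point addition is not associative, so the grouping a
# parallel reduction happens to use changes the answer.
# Toy data: one huge value and two small ones.
data = [1e16, 1.0, 1.0]

# Serial, left-to-right sum: each 1.0 is rounded away against 1e16.
serial = (data[0] + data[1]) + data[2]   # 1e16

# Simulated two-rank reduction: the small terms are summed on their
# own "rank" first, so they survive the rounding.
partials = [data[0], data[1] + data[2]]  # [1e16, 2.0]
reduced = partials[0] + partials[1]      # 1.0000000000000002e16

print(serial == reduced)  # False: same data, different grouping
```

Scale that last-bit disagreement up to millions of sums per timestep feeding a chaotic system, and "the same model on a different cluster" stops meaning "the same answer".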
At this frontier edge of numerical calculations, the finer details of how each system handles things like rounding and Fourier transform shortcuts can affect the end result:
Rounding error is another important consideration in atmospheric modeling that can arise due to various characteristics of the computer software implementing the model, including: 1) the rounding method; 2) the order of arithmetic calculations; 3) intrinsic functions; and 4) mathematical libraries, such as the fast Fourier transform (FFT).
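Point 2 on that list is easy to demonstrate in any language (a toy Python sketch, not the model's code): a plain running sum and a correctly rounded sum of the very same ten numbers already disagree in the last bit.

```python
import math

vals = [0.1] * 10   # 0.1 is not exactly representable in binary

naive = sum(vals)        # plain left-to-right accumulation
exact = math.fsum(vals)  # correctly rounded summation (Shewchuk's algorithm)

print(naive)   # 0.9999999999999999
print(exact)   # 1.0
```

Different math libraries and optimization levels effectively choose different points between "naive" and "exact" for every sum, product, and transcendental function in the model, which is how ten machines running identical code drift apart.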
Hong, S., Koo, M., Jang, J., Kim, J. E., Park, H., Joh, M., Kang, J., and Oh, T. (2013) An Evaluation of the Software System Dependency of a Global Atmospheric Model, Monthly Weather Review; doi: http://dx.doi.org/10.1175/MWR-D-12-00352.1
h/t to The Hockey Schtick