Climate science relies on extremely nonlinear models, and when you rely on extremely nonlinear models, the rules for statistical analysis change completely. In a linear model, you can expect a small change in inputs to produce a small change in outputs. In a nonlinear model, even a minute, undetectable change can produce any of three kinds of response. If the system is overdamped, it will behave much like a linear model and the small change will not greatly affect results. If the system is chaotic, it will oscillate between some number of quasi-stable states (this is possibly what we saw with the last decade's flat-to-cooling temperatures: the system was perturbed into another state). If the system is underdamped, then you're in trouble - the result will be highly unpredictable.
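To make the sensitivity point concrete with a toy (decidedly not a climate model): the logistic map is the standard minimal example of a chaotic system, and a sketch like the following shows a one-part-in-a-billion change in input - far below any realistic measurement precision - producing a macroscopically different output.

```python
def diverge(x0, y0, r=3.9, steps=60):
    """Iterate the logistic map x -> r*x*(1-x) from two starting points
    and return the largest gap their trajectories ever open up."""
    x, y, gap = x0, y0, 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        gap = max(gap, abs(x - y))
    return gap

# Two inputs differing by one part in a billion end up far apart;
# identical inputs, of course, never separate.
print(diverge(0.400000000, 0.400000001))
print(diverge(0.4, 0.4))
```

At r = 3.9 the map is in its chaotic regime, so the initial difference grows roughly exponentially until it saturates at the size of the attractor.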

So the statistical bar for nonlinear systems is much higher than that for linear systems. With linear systems, you need to statistically correlate results to observations, and if they correlate better than the alternative's, then your model is better than the alternative. I am not a climate scientist - in fact I am not a scientist at all, just a lowly engineer. I am not familiar with the climate science literature, so it's possible that everything I am about to say is common practice, although it doesn't show in the CRU code and data. And I am at something of a hobbyist level with this chaos stuff, so all of this might be completely wrong, but my take is as follows:

With nonlinear systems, you need to do more work. You need to be able to statistically isolate each combination of variables and determine whether its effect is overdamped, underdamped, or chaotic. If it is overdamped, you can just look at results. That is: "With the measured CO2 concentration over the past N decades, we can say that the relationship to temperature is a*log_b(CO2), with P% error bounds on a and b of alpha and beta."
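A sketch of what such a statement looks like in practice, on hypothetical synthetic data. Note that a and b in a*log_b(CO2) are not independent - a*log_b(x) = (a/ln b)*ln(x) - so what you can actually bound is one coefficient k = a/ln(b), here with an intercept added for illustration:

```python
import math
import random

# Hypothetical synthetic data standing in for (CO2, T) observations;
# the "true" relationship here is T = 3.0*ln(CO2) - 10 plus noise.
random.seed(0)
co2 = [280.0 + 5.0 * i for i in range(40)]
temp = [3.0 * math.log(c) - 10.0 + random.gauss(0.0, 0.05) for c in co2]

# Ordinary least squares of T against u = ln(CO2): T = k*u + c0.
u = [math.log(c) for c in co2]
n = len(u)
ubar, tbar = sum(u) / n, sum(temp) / n
sxx = sum((ui - ubar) ** 2 for ui in u)
sxy = sum((ui - ubar) * (ti - tbar) for ui, ti in zip(u, temp))
k = sxy / sxx
c0 = tbar - k * ubar

# Residual variance gives a standard error on k - the "error bound"
# you would quote alongside the fitted relationship.
resid = [ti - (k * ui + c0) for ui, ti in zip(u, temp)]
se_k = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
print(round(k, 2), round(se_k, 3))
```

The recovered slope lands close to the "true" 3.0, with a small standard error - exactly the kind of bounded claim the overdamped case permits.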

If the variable provokes a chaotic response, you need to isolate the periodicity of the system under various perturbations. In other words, if you find that the CO2 level in the atmosphere leads to discrete but repeating jumps in global temperature, you can predict the effect a perturbation has within a given range of inputs - but you can never fully predict what will happen when the period doubles or halves, because the system may go nonlinear at any new level of CO2 concentration. If you can find two period doublings in the data, then you might even be able to project forward or backward several period doublings, because the levels of input at each doubling and the discrete levels of output follow a rule of proportionality. But again, this is highly speculative because

*chaotic systems can go nonlinear (underdamped) at any time*. In other words, if your input leads to chaotic outputs, you now have to bound your model to those regions where you can calibrate. At best you can venture a guess a period doubling or two ahead and behind, but again, the whole system could stop doubling in an orderly fashion and just go haywire. Obviously, this is a far greater task than with linear models - you go from correlating one set of outputs to one set of inputs, to correlating every combination of inputs against every output and isolating the discrete contributions of those combinations. The statement you can make here is: "For the last N decades of CO2 data, we can predict that the effect on T will follow one of the following curves [f1a(CO2), f2a(CO2), ..., fna(CO2)] within the range C1 to C2 with Pa% probability, and one of the following curves [f1b(CO2), f2b(CO2), ..., fnb(CO2)] from C2 to C3 with Pb% probability. We can also predict a relationship within the bounds [f1c(CO2), f2c(CO2)] for concentrations from C3 to C4, and [f1d(CO2), f2d(CO2)] for C0 to C1, with a lower degree of probability, and with an inverse probability of the system response devolving into noise."

Finally, if your variable ends up provoking an underdamped response, you're almost, but not quite, SOL. Your output will fluctuate unpredictably - it is indistinguishable from noise. You can do a couple of things with this. You can again bound the input to the range where you can isolate the calibration data, and prove that the nonlinear response is bounded as well - ie, "With the levels of CO2 we have measured over N decades, we can predict with P% confidence that the nonlinearity from these concentrations in the model will be no greater than 0.1°C." The other thing you can do is use various tricks to linearize the model. In that case, you must prove that your linearized response is correlated with actual data BUT -

*you cannot predict what will happen to the portion of the signal from your nonlinear variables outside the envelope of observation*. In other words, forecasting what will happen when CO2 levels reach 4X their current levels - higher than at any point in the past for which you have good data - is not meaningful.

The only way you can get a confidence bound in this situation seems to be by sensitivity analysis of your entire model: run each variable independently, and in every combination, through the full range of error for that variable. You are essentially looking for where the output falls off a cliff - either the noise goes way up or there is a discrete change. Then you look back at your calibration data, find similar trends in the data, and determine the probability that the actual climate conditions when those trends happened were going through the same state change as the input conditions that caused the similar trend in the model. There may be several "similar" trending events in the model and in the calibration data, so at this point you need a lot of both to get a meaningful cross-section with which to check the nonlinear model response.
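The period-doubling bookkeeping described earlier has a concrete toy counterpart: the logistic map exhibits exactly such a cascade, and the "rule of proportionality" between successive doublings is (for this class of map) Feigenbaum's constant, roughly 4.669 - the parameter gaps between doublings shrink by about that factor, which lets you project the next doubling from the last two. A sketch, using textbook values for the doubling points:

```python
def period(r, transient=4000, max_period=64, tol=1e-9):
    """Detect the period of the logistic map's long-run behaviour at
    parameter r; returns 0 if no period <= max_period is found."""
    x = 0.5
    for _ in range(transient):
        x = r * x * (1.0 - x)
    ref = x
    for p in range(1, max_period + 1):
        x = r * x * (1.0 - x)
        if abs(x - ref) < tol:
            return p
    return 0

# Textbook onset parameters for periods 2, 4, 8 in the logistic map.
r2, r4, r8 = 3.0, 3.4495, 3.5441
DELTA = 4.669  # Feigenbaum's constant (approximate)

# Project the next doubling from the spacing of the last two.
r16_projected = r8 + (r8 - r4) / DELTA

print(period(3.2), period(3.5))   # measured periods in two stable windows
print(round(r16_projected, 4))    # close to the known onset near 3.5644
```

Two observed doublings really do pin down the next one here - but, as the post says, only because this toy system happens to keep doubling in an orderly fashion.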
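The "falls off a cliff" sweep just described can be sketched on the same toy logistic map (an assumption standing in for a real model variable): count the distinct long-run states at each parameter value, and watch for the point where the output stops being a handful of discrete levels and devolves into noise.

```python
def distinct_states(r, transient=2000, samples=256, digits=6):
    """Count distinct long-run values of the logistic map at parameter r:
    a few for periodic (quasi-stable) behaviour, hundreds once chaotic."""
    x = 0.5
    for _ in range(transient):
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(samples):
        x = r * x * (1.0 - x)
        seen.add(round(x, digits))
    return len(seen)

# Sweep the parameter through its "range of error" and look for the cliff:
# a few discrete states at 3.2 and 3.5, an explosion of states by 3.9.
for r in (3.2, 3.5, 3.9):
    print(r, distinct_states(r))
```

In a real sensitivity analysis you would sweep every variable, and every combination of variables, and then hunt for the matching state changes in the calibration data - which is exactly why the data requirements balloon.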

In summary, probably the most important point about nonlinear systems is that you need much better data, with much more rigorous analysis, than for linear systems. It seems like some of the techniques used by the CRU researchers - interpolating data between stations, correcting for movement of stations, smoothing averages over time - were fine for linear systems but grossly inadequate for nonlinear systems. Interpolation and correction errors can grow wildly (or in discrete steps) in nonlinear systems, and it is precisely the instantaneous variations in the data that tell you when a system has gone nonlinear. When dealing with highly nonlinear systems, you should be extremely wary of any trend predictions... often the best you can do is "And if we reach this point the outputs go completely haywire and six slightly different models predict six hugely different outcomes, so it's probably a good idea to not get to this point if we can help it."
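A trivial sketch of that last worry, with hypothetical numbers rather than station data: a trailing moving average of the kind used to smooth temperature series spreads a one-step regime change across the whole window - and that instantaneous variation is precisely the signature you would need in order to spot a state change.

```python
def moving_average(xs, w):
    """Trailing moving average with window w (shorter at the start)."""
    out = []
    for i in range(len(xs)):
        lo = max(0, i - w + 1)
        out.append(sum(xs[lo:i + 1]) / (i + 1 - lo))
    return out

# A series with one discrete jump at t = 50 - the kind of state change
# a nonlinear system can produce.
series = [0.0] * 50 + [1.0] * 50
smoothed = moving_average(series, 10)

# Largest sample-to-sample change: the raw series shows the jump plainly;
# the smoothed series dilutes it by the window length.
raw_jump = max(abs(series[i + 1] - series[i]) for i in range(len(series) - 1))
smooth_jump = max(abs(smoothed[i + 1] - smoothed[i]) for i in range(len(smoothed) - 1))
print(raw_jump, smooth_jump)
```

The raw series jumps by 1.0 in a single step; after smoothing, no single step exceeds about a tenth of that, so a threshold tuned to detect the jump would miss it entirely.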

Climate scientists will protest that what I am asking for is impossible, and rightly so - you need data to do any experiment, and saying your data is invalid because we can find some degree of nonlinearity in the system where an arbitrary level of error leads to arbitrarily unpredictable results is the same as saying your experiment is impossible. True enough.

Which brings us back to the first point - climate science getting more money. There will now be a huge bias against using legacy data sets, especially unpublished ones, and so we will likely see a return to the raw data and a more persistent push to make that data less error-prone. The fact that garbage-in is so much more damaging with nonlinear systems means that typical statistical tools used for linear systems are not valid, and that data fidelity is of utmost importance. I hope that this leads to a new push in the climate science world - and the scientific world in general - to completely reevaluate statistical and data-gathering procedures for highly nonlinear systems. I have always thought that climate science should be nonlinear physics that happens to be dealing with a climate dataset. We will deal with more and more nonlinear systems problems as we move on towards a Kardashev Type I (K1) level of civilization, and if the outgrowth of Climategate is a new set of scientific methods to deal with them, then we have all won.
