In light of the recent underwear bomber episode, and the equally ridiculous and useless rules set up by our bureauverlords to protect their jobs (the answer to government failure is... more government!), I thought I would weigh in on how to really keep airplanes safe. Allow guns on board.
I am quite certain that terrorist incidents would virtually disappear if we did this - just let anybody on a plane without screening of any kind. On an average flight, you would be almost guaranteed to have several retired military, law enforcement, and good ol' rednecks strapped and ready to waste a prospective terrorist or hijacker.
The first objection is, as usual, from people who take what the government says without question: "we can't do that! Everyone would be shooting up planes." Right, and if heroin were legal I guess you would go stick a needle in your arm tomorrow. There is a growing body of research suggesting that violent crime rates drop significantly when gun laws are relaxed. It's the age-old truth that criminals with guns can usually avoid cops, and they are not afraid of citizens with knives. Didn't 9/11 show us that our elaborate security theater had done nothing less than make us mortally vulnerable to such treacherous weapons as box cutters? And did it not also show us that, given either a reasonable chance of success or an overwhelming reason to try, ordinary citizens are our best defense against random acts of violence? Yes and yes.
The second objection is more rational: "but don't guns put holes in planes?" That one suggests some caution. After all, if the brave and intelligent Dutch tourist who stopped pantsbomber had been packing a .44, the cure could very well have been much worse than the disease. In that case, how's this for a compromise? Offer a course for military, law enforcement, and maybe ordinary citizens to become licensed "reserve air deputies" (RADs). Once licensed, these deputies would be allowed to discreetly carry weapons on their person onto commercial airplanes. Of course, there would be restrictions on the type of weapon - high-caliber handguns, explosives, rifles, and shotguns would be of little use in a close-quarters airliner fight. However, small-caliber and non-lethal weapons - .22s and similar low-power handguns, tasers, mace, etc. - would be allowed on. The key word is "discreetly": the RADs would go through the same screening process as normal passengers, they would be required not to show their weapon to anyone, and they would be required to keep their weapons on their persons at all times. The point is to make sure a terrorist can't know how many RADs are on a given flight, or where they are sitting.
The course would teach the dos and don'ts of airliner combat. It would cover small-caliber handguns as well as non-lethal weapons like mace and tasers. Trainees would be instructed in an airliner mockup about things like where a small-caliber bullet is least likely to cause explosive decompression, whether bullets will penetrate different types of partitions, what kinds of proximity effects a cloud of mace has, and so on. It might also cover such things as basic hostage situations, simple forms of non-verbal communication, ways to improvise weapons and/or create diversions with things commonly available in an airliner bathroom or galley, ways to contact the ground from the cabin, and steps to take in case of loss of crew. The class need not be especially long or expensive - probably no more than two days, or several two-hour evening sessions, should do - one could commandeer a derelict DC-8 in Mojave and shoot it full of any number of holes.
The worst-case scenario is that several terrorists would take the course and be allowed on a plane with a gun. However, this is still better than the current situation. First of all, there would be the requisite background checks. Anyone who took the course would be automatically red-flagged by the FBI, and any commonalities - several people on the same flight having taken the same course, or been funded by the same organization, or having sketchy backgrounds - would come out. This would have stopped at least the Reids and Abdulmutallabs. Second, the worst that could happen is they would get on with a small gun - there would still be screening for explosives, large-caliber weapons, etc. None of this would necessarily have stopped 9/11. But potential terrorists would have no way of knowing how many other passengers were carrying weapons, or where they were. This is the most important line of defense. Terrorists are willing to lose their own lives; if there is a 3.7% chance they will go to jail because an air marshal happens to be riding, then so be it. But if those odds went to close to 100% - at least one, and likely several, people on board with the weapons and training to make every attempt at martyrdom as pathetic as pantsbomber, regardless of what kind of heat the terrorists were packing - then they would move on rather quickly.
There could, of course, be automated rules in place to put the odds even more in our favor. There would obviously be flags raised when an aberrant number of RADs chose a given flight. RADs could show up to the airport and be either deputized (allowed to keep their weapon on their person) or demoted (with their weapon placed in checked bags or in safe storage at the airport police station) at random for their flight that day, so that no one could ever know whether he would be allowed a weapon on board for a given flight. Likewise, we could limit the total RADs on each flight to no more than 2% of passengers, or no more than 5 total, and so on. We could arrange it so that the RADs on a given flight had a wide variety of birthplaces, hometowns, religions, ethnicities, and educations, to reduce the likelihood of collaborators. In any suspicious case, RADs could be seated together without their knowledge, with air marshals secretly seated around them. But this is all just working the odds. The point is, the program would open the RADs up to so much scrutiny that it would probably be unprofitable for a terrorist organization to attempt infiltration.
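For what it's worth, the "working the odds" claim is easy to sanity-check. Here's a back-of-the-envelope sketch in Python - every number in it (seat count, RAD fraction, deputization rate) is a made-up illustration, not a figure from any real program:

```python
# All numbers here are illustrative assumptions: a 150-seat flight,
# 2% of passengers licensed as RADs, and a 50% chance that each RAD
# is randomly "deputized" (allowed to carry) on a given day.

def p_at_least_one_armed(seats=150, rad_fraction=0.02, deputize_rate=0.5):
    """Chance that at least one passenger is carrying, assuming each
    seat independently holds an armed RAD with the given probability."""
    p_armed = rad_fraction * deputize_rate
    return 1 - (1 - p_armed) ** seats

print(f"P(at least one armed RAD): {p_at_least_one_armed():.0%}")
```

With these assumptions the odds come out to roughly 78% - and that's before any of the seating and screening tricks above are layered on.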
Fact is, the cops are never there when you need them. At any level of spending that won't kill mass air travel, we are not going to cover enough flights with air marshals to make terrorists think twice.
Early signs point to the Obama administration taking the Augustine Commission's advice and turning astronaut launch to LEO over to contract bidders like ULA and SpaceX. I'm sure the sausage-making will be ugly and the result will be less than ideally appetizing, but it is the first big-ticket sign of a slow, necessary change in NASA that has been happening at least since SpaceX was founded in 2002 and SpaceShipOne launched in 2004. Centennial Challenges, the COTS A-C program, and the ISS resupply contracts were earlier steps of increasing size. Humans to LEO would be a lot more long-term and carry a price tag that actually takes a bite out of key congressional districts. Of course, it adds the same amount in a much more efficient way to other congressional districts, but for some reason that is never included in the accounting.
Politically, space is not important. NASA is an $18B annual bargaining chip to buy off various constituencies. That's not surprising; in fact, it is typical. It has led slowly and inexorably to a federal agency much like any other federal agency that is not directly accountable to the voters (i.e., all of them). Absent other influences, this leads to one way of thinking:
1) People who fly are people who die. The general public is neither interested in nor wowed by spaceflight; they couldn't care less. Only bad things can happen for politicians when astronauts fly. To a lesser extent, this is also true of multi-million- or -billion-dollar satellites and probes. The ideal NASA program, from a politician's point of view, is one that spends the maximum amount of money on the minimum number of flights. The result: massive overhead is designed into the programs. Overhead doesn't kill anyone. It doesn't do anything, in fact.
Absent other influences, that is. There are always other influences, but until recently they have been small fry. It took enormous political capital to get Shuttle up, and no one is going to get behind something like that again. So what is currently leading the push toward commercial?
Former NASA administrator Michael Griffin saw it clearly when he railed about "the Gap" after the Shuttle was sunsetted. He reacted the wrong way, as do all politicians, by grasping tighter to the status quo and assuming he could steer funding and/or change the way a massive bureaucracy gets things done. But his assessment of the problem was correct. It is politically embarrassing to buy things from Russians.
Faced with this nationally important strategic fact, and a commercial industry ready and willing to come to the rescue, it seems likely that on the order of $5B will finally be skimmed off the NASA pork cream pie to actually get something done. The NASA boondoggle will still get its $5B a year to work on paper rockets - this time called "heavy lift vehicles" - that are not strategically important to most political careers. They can overrun the cost and schedule on these while never flying anything useful until the next sellable scam comes along. But the agency will finally be doing something useful with a real chunk of its cash, and that's a good thing.
Following up on two reality-TV stars crashing the state dinner a month ago, we now have a story of two tourists who showed up to the WH on the wrong day and were invited to join a Veterans Day breakfast.
I for one don't see that this is an issue. I can't imagine how it could be. Are we really pissed off that ordinary Americans were able to get some candid face time with the President? Have we devolved to royal courts? I disagree with a lot of the things Obama does, but this is nothing but good. I've said for some time that the whole standing-army/Praetorian Guard setup is a sign that a person has way too much power. Secret Services in good times turn into SSs and NKVDs in bad times. Look it up.
Climategate, paradoxically, may lead to a renewed push and more money for climate science, and this is a good thing. Hear me out.
Climate science relies on extremely nonlinear models, and when you rely on extremely nonlinear models, the rules for statistical analysis change completely. In a linear model, you can expect a small change in inputs to result in a small change in output. In a nonlinear model, the response to even a minute, undetectable change depends on which of three regimes the system is in. If the system is overdamped, it will behave similarly to a linear model, and the small change will not affect results greatly. If the system is chaotic, it will oscillate between some number of quasi-stable states (this is possibly what we saw with the last decade's flat-to-cooling temperatures: the system was perturbed into another state). If the system is underdamped, then you're in trouble - the result will be highly unpredictable.
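To make the "minute, undetectable change" point concrete, here's a toy Python example using the logistic map - a standard chaos-theory demo, and emphatically not a climate model:

```python
# The logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0):
# two trajectories that start a mere 1e-10 apart become completely
# decorrelated within a few dozen steps.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)
print(max(abs(x - y) for x, y in zip(a, b)))  # an order-one divergence
```

An overdamped system would shrug that perturbation off; here it grows until the two runs share nothing but their first few steps.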
So the statistical bar for nonlinear systems is much higher than that for linear systems. In linear systems, you need to statistically correlate results to observations, and if they correlate better than the alternative then your model is better than the alternative. I am not a climate scientist, in fact I am not a scientist at all - just a lowly engineer. I am not familiar with the climate science literature, so it's possible that all I am about to say is common practice, although it doesn't show in the CRU code and data. And I am at something of a hobbyist level at this chaos stuff, so all this might be completely wrong, but my take is as follows:
With nonlinear systems, you need to do more work. You need to be able to statistically isolate each combination of variables and determine whether its effect is overdamped, underdamped, or chaotic. If it is overdamped, you can just look at results. I.e., "With the measured CO2 concentration over the past N decades, we can say that the relationship to temperature is a*log_b(CO2), with P% error bounds on a and b of alpha and beta."
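As a concrete (and entirely synthetic) illustration of that overdamped case, here's a quick Python sketch that fits a log relationship of that form and quotes error bounds. Since a*log_b(x) = (a/ln b)*ln(x), the two parameters collapse into a single coefficient k, which is what gets estimated - all the numbers below are invented:

```python
import math, random

random.seed(0)

# Synthetic "calibration data": temperature responds as k*ln(CO2) plus
# measurement noise. Both the true k and the noise level are made up.
true_k = 3.0
co2 = [280 + 5 * i for i in range(40)]   # hypothetical ppm series
temp = [true_k * math.log(c) + random.gauss(0, 0.1) for c in co2]

# Ordinary least squares for temp = k * ln(co2), no intercept
lx = [math.log(c) for c in co2]
k_hat = sum(t * x for t, x in zip(temp, lx)) / sum(x * x for x in lx)

# Standard error of k_hat from the residuals
resid = [t - k_hat * x for t, x in zip(temp, lx)]
s2 = sum(r * r for r in resid) / (len(resid) - 1)
se = math.sqrt(s2 / sum(x * x for x in lx))

print(f"k = {k_hat:.3f} +/- {1.96 * se:.3f} (95% bounds)")
```

That's the easy regime: one well-behaved coefficient, honest error bars, done.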
If the variable provokes a chaotic response, you need to isolate the periodicity of the system under various perturbations. In other words, if you find that the CO2 level in the atmosphere leads to discrete but repeating jumps in global temperature, you can predict the effect a perturbation has within a given range of inputs - but you can never fully predict what will happen when the period doubles or halves, because the system may go underdamped at any new level of CO2 concentration. If you can find two period doublings in the data, then you might even be able to project forward or backward several period doublings, because the level of input at each doubling and the discrete levels of output follow a rule of proportionality (the Feigenbaum scaling). But again, this is highly speculative, because chaotic systems can go underdamped at any time. In other words, if your input leads to chaotic outputs, you have to bound your model to those regions where you can calibrate. At best you can venture a guess a period doubling or two ahead and behind, but the whole system could stop doubling in an orderly fashion and just go haywire.

Obviously, this is a far greater task than with linear models - you go from correlating one set of outputs to one set of inputs, to correlating every combination of inputs against every output and isolating discrete contributions for those combinations. The statement you can make here is: "For the last N decades of CO2 data, we can predict that the effect on T will follow one of the curves [f1a(CO2), f2a(CO2), ..., fna(CO2)] within the range C1 to C2 with Pa% probability, and one of the curves [f1b(CO2), f2b(CO2), ..., fnb(CO2)] from C2 to C3 with Pb% probability. We can also predict a relationship within the bounds [f1c(CO2), f2c(CO2)] for concentrations from C3 to C4, and [f1d(CO2), f2d(CO2)] for C0 to C1, with a lower degree of probability, and with an inverse probability of the system response devolving into noise."
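Here's what "isolating the periodicity" might look like in the simplest possible setting - again the logistic map standing in for a real model, with the forcing parameter r playing the role of the perturbed input:

```python
# For each forcing level r, let the transients die out, then count how
# many distinct states the system cycles through. Watch the period go
# 1 -> 2 -> 4 on the way to chaos (where no short period is found).

def attractor_period(r, x0=0.5, transient=2000, max_period=16, tol=1e-6):
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    ref = x
    for p in range(1, max_period + 1):
        x = r * x * (1 - x)
        if abs(x - ref) < tol:
            return p
    return None  # no short period: chaotic (or long-period) regime

for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r = {r}: period {attractor_period(r)}")
```

A real climate variable would of course be vastly messier than a one-parameter map, but the bookkeeping task - which input levels give which discrete output states, and where the doublings land - is the same in spirit.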
Finally, if your variable ends up provoking an underdamped response, you're almost, but not quite, SOL. Your output will fluctuate unpredictably - it is indistinguishable from noise. You can do a couple of things with this. You can again bound the input to the range where you can isolate the calibration data, and prove that the nonlinear response is bounded as well - i.e., "With the levels of CO2 we have measured over N decades, we can predict with P% confidence that the nonlinearity from these concentrations in the model will be no greater than 0.1 °C." The other thing you can do is use various tricks to linearize the model. In this case, you must prove that your linearized response is correlated with actual data, but you cannot predict what will happen to the portion of the signal from your nonlinear variables outside the envelope of observation. In other words, forecasting what will happen when CO2 levels reach 4x their current levels - higher than at any point in the past where you have good data - is not meaningful.
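The extrapolation warning is easy to demonstrate with made-up numbers: fit a straight line to a mildly nonlinear response inside the observed range, then ask it about 4x that range:

```python
import math

# "Observed" inputs run from 1.0 to 3.0; the true response is mildly
# nonlinear. Both are completely invented for illustration.
xs = [1 + 0.1 * i for i in range(21)]

def truth(x):
    return math.exp(0.5 * x)   # hypothetical nonlinear response

ys = [truth(x) for x in xs]

# Least-squares line y = m*x + c through the observations
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
c = my - m * mx

in_sample_err = max(abs(m * x + c - truth(x)) for x in xs)
extrap_err = abs(m * 12 + c - truth(12))   # 4x the top of the observed range

print(f"worst in-sample error:    {in_sample_err:.2f}")
print(f"error extrapolated to 4x: {extrap_err:.2f}")
```

Inside the envelope the line looks respectable; at 4x the envelope it is off by orders of magnitude. That's the sense in which forecasts outside the calibrated range are not meaningful.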
The only way to get a confidence bound in this situation seems to be a sensitivity analysis of your entire model - run each variable independently, and in every combination, through the full range of error for that variable. You are essentially looking for where the output falls off a cliff - either the noise goes way up or there is a discrete change. Then you look back at your calibration data, find similar trends in the data, and determine the probability that the actual climate conditions when those trends happened were going through the same state change as the input conditions that caused the similar trend in the model. There may be several "similar" trending events in the model and in the calibration data, so at this point you need a lot of both to get a meaningful cross-section with which to check the nonlinear model response.
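A bare-bones version of that sensitivity sweep, with a toy model that has a built-in regime change at x = 1.0 (purely illustrative - any resemblance to a real climate variable is coincidental):

```python
# March one input through its error band and flag where the model
# output jumps - the "falls off a cliff" check described above.

def model(x):
    # hypothetical response: smooth below the threshold, shifted regime above
    return 0.3 * x if x < 1.0 else 0.3 * x + 2.0

def find_cliffs(lo, hi, steps=1000, jump=0.5):
    """Return input values where the output changes by more than `jump`
    between adjacent sample points."""
    h = (hi - lo) / steps
    cliffs = []
    prev = model(lo)
    for i in range(1, steps + 1):
        x = lo + i * h
        cur = model(x)
        if abs(cur - prev) > jump:
            cliffs.append(x)
        prev = cur
    return cliffs

print(find_cliffs(0.5, 1.5))   # flags the transition near x = 1.0
```

The real job is then the hard part the paragraph describes: matching each flagged cliff against the calibration record to decide whether the data ever actually went through that state change.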
In summary, probably the most important point about nonlinear systems is that you need much better data, with much more rigorous analysis, than for linear systems. It seems like some of the techniques used by the CRU researchers - interpolating data between stations, correcting for movement of stations, smoothing averages over time - were fine for linear systems but grossly inadequate for nonlinear ones. Interpolation and correction errors can grow wildly (or in discrete steps) in nonlinear systems, and it is precisely the instantaneous variations in the data that tell you when a system has gone nonlinear. When dealing with highly nonlinear systems, you should be extremely wary of any trend predictions... often the best you can do is "and if we reach this point, the outputs go completely haywire and six slightly different models predict six hugely different outcomes, so it's probably a good idea not to get to this point if we can help it."
Climate scientists will protest that what I am asking is impossible, and rightly so - you need data to do any experiment, and saying their data is invalid because we can find some degree of nonlinearity in the system where an arbitrary level of error leads to arbitrarily unpredictable results is the same as saying the experiment is impossible. True enough.
Which brings us back to the first point: climate science getting more money. There will now be a huge bias against using legacy data sets, especially unpublished ones, and so we will likely see a return to the raw data and a more persistent push to make that data less error-prone. The fact that garbage-in is so much more damaging with nonlinear systems means typical statistical tools used for linear systems are not valid, and data fidelity is of the utmost importance. I hope this leads to a new push in the climate science world - and the scientific world in general - to completely reevaluate statistical and data-gathering procedures for highly nonlinear systems. I have always thought that climate science should be nonlinear physics that happens to be dealing with a climate dataset. We will deal with more and more nonlinear-systems problems as we move toward a Kardashev Type 1 (K1) civilization, and if the outgrowth of Climategate is a new set of scientific methods to deal with them, then we have all won.