
AI landing a Boeing 737
Conventional wisdom holds that it is too early to speculate about why, in the past six months, two Boeing 737 MAX 8 planes have gone down shortly after takeoff; so if all that follows is wrong, you will know it very quickly. Last night I predicted that the first withdrawals of the plane would happen within two days, and this morning China withdrew it. So far, so good. (Indonesia followed a few hours ago.)
Why should I stick my neck out with further predictions? First, because we must speculate the moment something goes wrong. It is natural, right and proper to note errors and try to correct them. (The authorities are always against "wild" speculation, and I would agree with them if they had an a priori definition of wildness.) Second, because putting forward hypotheses may help others test them (if they are not already doing so). Third, because if the hypotheses turn out to be wrong, that will indicate an error in reasoning, and will be an example worth studying in psychology, a field so often dourly drawn to human fallibility. Charmingly, an error in my reasoning might even illuminate an error a pilot might make if poorly trained, sleep-deprived and inattentive.
I think the problem is that the Boeing anti-stall patch, MCAS, is poorly configured for pilot use: it is not intuitive, and its consequences are opaque.
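That opacity can be illustrated with a deliberately simplified sketch. MCAS is known to command nose-down stabilizer trim when a single angle-of-attack sensor reads high, and to do so repeatedly; the thresholds, increments, and function names below are hypothetical, not Boeing's actual values or code:

```python
# Hypothetical, much-simplified sketch of MCAS-style logic (not Boeing's code).
# It shows how a system that trusts one angle-of-attack (AoA) sensor can issue
# repeated nose-down trim commands from a single faulty reading.

AOA_THRESHOLD_DEG = 15.0   # assumed activation threshold (illustrative)
TRIM_INCREMENT_DEG = 2.5   # assumed nose-down trim per activation (illustrative)

def mcas_step(aoa_sensor_deg: float, current_trim_deg: float) -> float:
    """Return the stabilizer trim after one control cycle."""
    if aoa_sensor_deg > AOA_THRESHOLD_DEG:
        # Nose-down trim is commanded whenever the (single) sensor reads
        # high -- even if that reading is wrong.
        return current_trim_deg - TRIM_INCREMENT_DEG
    return current_trim_deg

# A sensor stuck at 20 degrees keeps pushing the nose down, cycle after cycle:
trim = 0.0
for _ in range(4):
    trim = mcas_step(aoa_sensor_deg=20.0, current_trim_deg=trim)
print(trim)  # -10.0: four successive nose-down commands from one bad input
```

The point of the sketch is the feedback loop, not the numbers: nothing in the logic tells the pilot why the nose keeps dropping, which is exactly the kind of opacity at issue.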
By way of full disclosure, I have held this opinion since the first Lion Air crash in October, and ran it past a test pilot who, while not responsible for a single word here, did not argue against it. He suggested that MCAS's characteristics should have been set out in a special directive and drawn to the attention of pilots.