Heuer wrote an interesting book that has a lot to do with writing history. Psychology of Intelligence Analysis was designed to help intel analysts "heal themselves" - to put down bad analysis. Since we have a lot of bad analysis in history, I thought I would share a couple of Heuer's observations with you.
Let me say up front that intelligence analysis is produced on deadline under intense time pressure, while historical analysis is produced, if not at leisure, then under circumstances that demand full justice to the record. The application of intel-style analysis to history is obviously wrong. That is our starting point. Heuer:
A systematic analytical process requires selection among alternative hypotheses, and it is here that analytical practice often diverges significantly from the ideal and from the canons of scientific method. The ideal is to generate a full set of hypotheses, systematically evaluate each, and then identify the hypothesis that provides the best fit to the data. Scientific method, for its part, requires that one seek to disprove hypotheses rather than confirm them. [Emphasis added.]

The garden-variety Civil War historian develops his storyline by employing proving strategies, not disproving strategies, and the result has been 50 years of disaster.
Brooks Simpson was recently kind enough to comment on my Vicksburg series: we are both well aware of the three restoration feelers put out to McClellan in early 1864; Brooks also mentioned Lincoln's deliberations on replacing Grant with Butler. These incidents are disproving elements vis-à-vis narrative strategies such as Grant-the-inevitable and Grant/Lincoln, the partnership-forged-by-war, both staples of the Centennial school of history. Grant's position appears to have been in play in 1864. How do we get onto a false story track? Heuer has two interesting explanations.
First, he notices the tendency among analysts to resort to "satisficing." I once thought I knew the meaning of this term but I knew only the economics definition; in command theory, it means something else entirely. Heuer:
"Satisficing" - selecting the first identified alternative that appears "good enough" rather than examining all alternatives to determine which is "best."

In the next post, I'll give some examples of ACW satisficing that is utterly out of control. But how do the smaller errors, the reasonable failures produced by satisficing, occur?
One way, says Heuer, is by failing to properly weigh data according to its diagnostic value. If the doctor is diagnosing your illness and you show a fever, the fever has low diagnostic value, since it is consistent with so many conditions. At the same time, it is a real datapoint that can support a false diagnosis, say of flu. As your physician, I could select fever and a number of other low-value but real indications and deliver a false diagnosis. Or craft a false storyline to drive my ACW narrative.
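Heuer's point about diagnostic value can be put in numbers. Here is a minimal sketch of my own (not from Heuer's book; the diseases and probabilities are invented for illustration) showing that evidence consistent with nearly every hypothesis barely moves our beliefs, while discriminating evidence moves them decisively:

```python
def update(priors, likelihoods):
    """Bayesian update: posterior is proportional to prior * P(evidence | hypothesis)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Equal priors over three illustrative diagnoses (values are invented).
priors = {"flu": 1 / 3, "covid": 1 / 3, "strep": 1 / 3}

# Fever is consistent with all three conditions -> low diagnostic value.
after_fever = update(priors, {"flu": 0.9, "covid": 0.9, "strep": 0.8})

# A positive throat culture discriminates sharply -> high diagnostic value.
after_culture = update(after_fever, {"flu": 0.01, "covid": 0.01, "strep": 0.95})

print(after_fever)    # posteriors remain close to the 1/3 priors
print(after_culture)  # probability mass shifts decisively to strep
```

The fever datapoint is perfectly real, yet it cannot separate the hypotheses; a storyline built from a stack of such low-value indications can look well-documented while still being false.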
Satisficing has at least three shortcomings, Heuer notes: the researcher focuses on a single hypothesis; he fails to generate (or entertain) competing hypotheses; and he focuses on evidence that confirms rather than disproves his central hypothesis.
In a fascinating little exercise, Heuer shows that most of the data collected in connection with a problem will support most of the hypotheses in play. The differentiator will be disproving data.
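That exercise can be sketched as a toy matrix in the style of Heuer's analysis of competing hypotheses. This is my own construction, not Heuer's actual exercise, and the hypothesis and evidence labels are placeholders:

```python
# "C" = evidence consistent with the hypothesis, "I" = inconsistent.
hypotheses = ["H1", "H2", "H3"]
matrix = {
    # Most of the evidence is consistent with every hypothesis in play...
    "E1": {"H1": "C", "H2": "C", "H3": "C"},
    "E2": {"H1": "C", "H2": "C", "H3": "C"},
    "E3": {"H1": "C", "H2": "C", "H3": "C"},
    # ...and only the single disproving datapoint discriminates.
    "E4": {"H1": "I", "H2": "C", "H3": "I"},
}

# Rank hypotheses by fewest inconsistencies: consistent evidence alone
# cannot separate them, so the disproving row does all the work.
inconsistencies = {
    h: sum(1 for row in matrix.values() if row[h] == "I") for h in hypotheses
}
survivor = min(inconsistencies, key=inconsistencies.get)
print(inconsistencies)  # only E4 distinguishes the candidates
print(survivor)         # the hypothesis with no inconsistent evidence
```

Drop row E4 from the matrix and all three hypotheses tie; a historian who "streamlines" that row away can pick whichever storyline he likes.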
In Civil War history, over the last 50 years, disproving data is what causes us to pause in our reading and say, "How can that be if..." Unfortunately, there is an authorial tendency to "streamline" the narrative, tossing disproving data aside.
And so we get the dominant hypotheses that we as readers deserve.