Steve Kelman reports on innovations at DHS that could help analysts across government.
For the last two years, Tammy Tippie has been the performance improvement officer at the Department of Homeland Security Office of Intelligence and Analysis, coming to DHS from a 10-year career in the Defense Department. My wife met her recently through a friend who used to be a senior government official, and the friend suggested I meet Tammy. Tippie was recently in Boston to visit the local fusion center for "connect the dots" intelligence collaboration, and we had lunch together.
(I had planned to suggest a Thai restaurant across the street from the Kennedy School, but asked instead what kind of food she liked. "How about Thai?" she responded. Clearly, we were off to a good start.)
It got better from there. This turned out to be the most interesting lunch I've had in weeks, because it opened my eyes wide to some fascinating recent developments in tech-aided performance measurement for intelligence analysts, which have been pioneered within the intelligence community by DHS.
As regular readers may be aware, since I have on a number of occasions blogged about this, I teach performance measurement in one of our flagship executive education programs at the Kennedy School, Senior Executive Fellows (for GS-15s and colonels). We always have a number of people from the intelligence community, mostly analysts, in these classes, and I will confess I have always worried that good performance measures are hard to develop for intelligence analysis, that I know little about the topic, and that therefore my classes were not adding value for them.
Thanks to Tammy, I now know better.
The key to the system is natural-language processing software. Every piece of reporting and analysis prepared by intelligence analysts at DHS is archived in a machine-readable database. The natural-language processing software "reads" each document and gives it a score based on how well it answers the questions that type of document is supposed to answer. The software must be populated with templates, developed by humans, of what a good answer looks like. But then the software cranks out a score for each document, based on those templates.
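DHS has not published how its software works, but the basic idea of scoring a document against a human-written template can be sketched with a simple bag-of-words cosine similarity. Everything here is hypothetical: the template text, the function names, and the scoring method are my illustration, not the actual DHS system (which presumably uses far more sophisticated natural-language processing).

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two texts' word-count vectors (0.0 to 1.0)."""
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical human-authored template describing what a good answer
# for this document type should cover.
TEMPLATE = ("The assessment identifies the threat actor, states the "
            "confidence level, cites sources, and notes implications "
            "if the underlying reporting proves wrong.")

def score_document(doc, template=TEMPLATE):
    """Score how closely a document tracks the template (0.0 to 1.0)."""
    return cosine_similarity(doc, template)
```

A document that addresses the template's questions would score closer to 1.0 than an off-topic one, giving managers a rough, automated quality signal per document.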
Other tech-related elements of the performance measurement system include the ability to track how often other analysts click through to a document, how long the average reader spends with it, and how often it is forwarded to other analysts. The organization also fields two low-tech feedback teams: one analyzes each document for how well it comports with "tradecraft" standards for the format and structure of an argument, and another surveys customers on their satisfaction with every product.
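The article doesn't describe how these engagement metrics are implemented, but the bookkeeping behind them is straightforward. A minimal sketch, with class and method names of my own invention, might aggregate per-document events like this:

```python
from dataclasses import dataclass

@dataclass
class DocumentStats:
    """Hypothetical per-document engagement counters."""
    clicks: int = 0           # click-throughs by other analysts
    total_seconds: float = 0.0  # cumulative reading time
    forwards: int = 0         # times forwarded to other analysts

    def record_view(self, seconds: float) -> None:
        """Log one click-through and the time the reader spent."""
        self.clicks += 1
        self.total_seconds += seconds

    def record_forward(self) -> None:
        """Log one forward to another analyst."""
        self.forwards += 1

    @property
    def avg_dwell_seconds(self) -> float:
        """Average time readers spend with the document."""
        return self.total_seconds / self.clicks if self.clicks else 0.0
```

Each metric the article mentions — click-throughs, average dwell time, forwards — falls out of a simple event log like this one.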
Tammy's unit organizes a monthly cycle of four different meetings, one every week, to discuss the data. One discusses Intelligence and Analysis performance metrics, while another focuses on mission-support metrics such as help desk or training. At some point, DHS should be able to look at time trends, to see whether the software is showing improved document quality over time, but the system hasn't been running long enough yet.
Two problems the natural-language analysis frequently flagged in analysts' reports: incomplete assessment of the implications should the data underlying the analysis prove wrong, and failure to indicate the sources for judgments clearly enough for readers to assess their credibility. The office then assigned mentors to the employees who consistently showed these shortcomings, coached them on how to improve, and had the best analysts read the work produced after the mentoring.
What Tammy told me was a revelation, frankly. These techniques could be used, I suspect, by most any organization whose employees prepare analyses and reports. I wonder if any blog readers' agencies, inside or outside the intelligence community, are using such methods as well.