The Lectern: New consensus on how to improve weapons acquisition?
Acquiring major weapons systems is hard. If the issues that lead to cost growth, delays, and performance shortfalls were easy to solve, they would have been solved long ago. To look at any cost growth from initial estimates and attribute it simply to waste (or to fraud or abuse, for that matter) is simplistic to the point of being misleading.
The fundamental dilemma of weapons systems acquisition is that designing major systems typically involves research and development beyond the state of current technology. The uncertainty of development costs drives the government towards cost-based contracts (although recently there has been unfortunate talk about bringing back fixed-price development contracts), but cost-based contracts lack incentives for cost or schedule control, which leads to rising costs and lengthening timelines.
Take that and add on over-optimism in initial cost estimates, the lack of a commercial marketplace (and hence vigorous competition) for military systems, and relations between government and defense contractors that Steve Barr (the former Washington Post reporter) once characterized to me as resembling a dysfunctional marriage -- and you have some deep-seated problems for successful weapons contracting.
During the reinventing government era under Secretary of Defense William Perry, the Defense Department tried to improve weapons contracting by improving an area not subject to this dilemma -- lowering costs and improving schedules through greater use of commercial items in weapons systems. Anecdotally, this seems to have had some impact, although there has been a frustrating lack of research on the overall impact of the commercial item reforms on weapons systems acquisition.
Based on the tenor of discussions at last week's acquisition research conference at the Naval Postgraduate School in Monterey, there seems to be an emerging new consensus for how to improve defense acquisition. The basic idea (which will be familiar to those involved in major IT systems development) is to freeze requirements, to the greatest extent possible, fairly early in development and keep them frozen through the system's first production run (or, in IT terms, Release 1.0), deferring upgrades until after that first release.
That way you save both by not having to redo requirements several times before initial production, as technology continues to evolve, and by not keeping other parts of a project on hold while a change is being worked through elsewhere in the system. This doesn't directly attack the basic dilemma involved in trying to develop bleeding-edge technologies, but it reduces the number of times one needs to do so to develop a new system.
This approach has been advocated by the Government Accountability Office for a while. It now seems to have been accepted both by key voices in Congress and by the Administration. The view at Monterey seemed to be that this makes sense as well.
One interesting thing I learned at the Naval Postgraduate School conference (it surprised me, but that probably just reflects my ignorance) is that many program managers would actually welcome a stricter attitude towards requirements changes. The constant demands to change requirements during development come from users, not program managers. That doesn't mean that slowing requirements creep will be easy -- users have a lot of influence on the acquisition system, and they should. So it will be interesting to see whether the policy consensus is able to become operational reality.
Posted by Steve Kelman on May 21, 2009 at 12:08 PM