
By Steve Kelman


Do we still need systems integrators?


Starting with the post published today, David Eaves and I are initiating an occasional collaboration under the auspices of The Lectern.

David is a new colleague at the Kennedy School who comes to Harvard from earlier work at Code for America, advising several governments on technology, and launching a successful start-up in the municipal government space. At the Kennedy School, he teaches a course in digital government and directs the Digital@HKS program. I have had the pleasure of working with David this semester on “Avoiding Digital Disaster,” a course he leads on Healthcare.gov.

David is smart and he is fantastic. I suspect that once every few weeks we will have a joint blog post, and on occasion (as this time around) a post will appear on The Lectern under his name. His first post, “Do we still need systems integrators?” reflects his intelligent, sometimes-controversial approach.

Blog readers, please welcome my friend David. -- Steve Kelman

* * *

Here’s an overly simplistic but generally good rule of thumb about IT in government: Systems integrators are a symptom of poor systems architecture.

This is to say that today’s systems integrators, the people governments ask to create software that (usually) enables us to extract data from one software system and then restructure it or create processes around it so that it can be shared with another software system, are being asked to do work that shouldn’t exist. They are symptomatic of a failure to realize that any large IT system could and likely will need to be a platform. They also are a window into a massive opportunity to reduce spending on government IT systems.

This wasn’t always the case. There was, for many years (decades!), a need for systems integrators. Building and designing a large IT system (for example, one that tracked travel expenses) was an enormous project, but also one that was generally siloed. Getting the travel expense system to talk to payroll… that was a giant project in and of itself. Getting one large siloed system to talk to another large siloed system was a huge effort.

Indeed, prior to the 1990s it was hard to even imagine persuading these large, complex systems to talk to one another. And then we began to see that getting these large systems to talk to one another might be possible -- and that’s when the systems integrator became necessary and important.

But by the time the 2000s rolled around, we were living in a world of the web, and people could see that standardizing the structure of information meant that one didn’t need to build custom software to connect one system to another. Instead, the right application programming interface (API) would allow any system to grab the information it needed without requiring a systems integrator to build custom-software connections.

If, from the outset of any project, you accept that your system will need to talk to other systems, you will design it so that data and information can be accessed via API. If you believe that it won’t, then you opt to not design the system to share information easily via APIs.
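To make that architectural choice concrete, here is a minimal sketch, written with nothing but Python’s standard library, of what “design the system so data can be accessed via API” can look like. The travel-expense records, the /expenses endpoint and the port are hypothetical illustrations, not any agency’s actual interface.

    # Hypothetical sketch: a small system exposes its data as JSON over HTTP,
    # so other systems can read it without a custom integration layer.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # In a siloed design, only this system could ever read these records.
    EXPENSES = [
        {"employee_id": "E100", "trip": "Denver", "amount_usd": 412.50},
        {"employee_id": "E101", "trip": "Atlanta", "amount_usd": 289.00},
    ]

    class ExpenseAPI(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/expenses":
                body = json.dumps(EXPENSES).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        # Serve the interface on a local port, purely for illustration.
        HTTPServer(("localhost", 8000), ExpenseAPI).serve_forever()

Any other system that speaks HTTP and parses JSON can now use this data directly; the sharing is a property of the architecture, not of a later integration project.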

The latter choice will save you some costs in development and project management in the short term, but will constrain you in the long term. The former will create some minor additional costs initially, but provide you with significant flexibility in the future. But the bigger point is, this is a choice -- an architectural choice about the structure of the IT systems we build. Making the right one requires intentionality and foresight.

It is also one that does not happen easily or automatically. Indeed, many private sector companies are not good at doing this. This is not a case of the public sector simply “not getting it.”

There is a marvelous story about how Jeff Bezos realized this in the early 2000s and forced his entire organization to move from siloed entities that required systems integrators (sometimes the system in question was simply two people emailing data back and forth, sometimes it was a lot more) to one that was extensible and API-powered. Steve Yegge, a former Amazon employee now with Google, accidentally “ranted” about it back in 2011 in a piece that is so good I try to read it every six to 12 months.

The key point of the article is that the ultimate systems architect, Amazon’s CEO, decided that a good architecture was now available because of APIs. As Yegge describes it:

So one day Jeff Bezos issued a mandate. He's doing that all the time, of course, and people scramble like ants being pounded with a rubber mallet whenever it happens. But on one occasion -- back around 2002 I think, plus or minus a year -- he issued a mandate that was so out there, so huge and eye-bulgingly ponderous, that it made all of his other mandates look like unsolicited peer bonuses.

His Big Mandate went something along these lines:

1) All teams will henceforth expose their data and functionality through service interfaces.

2) Teams must communicate with each other through these interfaces.

3) There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team's data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.

4) It doesn't matter what technology they use. HTTP, Corba, Pubsub, custom protocols -- doesn't matter. Bezos doesn't care.

5) All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

6) Anyone who doesn't do this will be fired.

It really is hard to describe how difficult and far-reaching this mandate was, but its impact and vision cannot be overstated. Bezos effectively forced Amazon’s disparate systems both to work together and to serve as platforms for further innovation and integration. It represents one of the most dramatic shifts from a siloed organizational structure to a networked one. It is a key part of what made it possible for Amazon to scale and serve as a global platform for retail.

Was it painful? Was it hard? Absolutely. But it enabled the organization to offer both scale and innovation in a manner few others have been able to replicate -- all because of an architectural choice. If you think that government should be able to operate with the scale and flexibility of Amazon, then Bezos’ mandate and its implications for government are worth studying more closely.

And this brings us back to systems integrators -- because after the mandate, anyone designing a new system at Amazon would know that the system should be designed, from the outset, to talk to other systems. Hiring a systems integrator would be an admission that you had architected your IT systems poorly, and it would be seen as a waste of company time and money.

This is how it should be seen in government. We should be building systems that share data and services by design, not as an afterthought that systems integrators must bolt on later.
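Continuing the hypothetical sketch from earlier, here is roughly what the consuming side looks like once a system shares its data by design: a payroll or reporting system calls the expense system’s interface over the network rather than reading its data store directly. The URL and field names are assumptions carried over from that sketch.

    # Hypothetical consumer: another system uses the /expenses interface over
    # the network instead of reaching into the expense system's database.
    import json
    from urllib.request import urlopen

    def total_reimbursements(api_url: str = "http://localhost:8000/expenses") -> float:
        # The only coupling is the service interface and the shape of its JSON.
        with urlopen(api_url) as response:
            expenses = json.load(response)
        return sum(item["amount_usd"] for item in expenses)

    if __name__ == "__main__":
        print(f"Total to reimburse: ${total_reimbursements():,.2f}")

No integrator sits between the two systems; the interface the first system shipped on day one is the integration.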

Now if you are a systems integrator and you’re reading this, have no fear. There are, happily, thousands (and more likely tens of thousands) of legacy government systems out there that will require your services so they can be integrated with other systems in the coming years and decades. There will still be plenty of money to be made. However, no new system being built by government should require a systems integrator. It should adhere to Bezos’ mandate number 1.

Posted by David Eaves on May 01, 2017 at 1:10 PM



Reader comments

Mon, May 8, 2017 JJ Washington DC

Systems integrators (SIs) provide a lot more value than people realize. You have Govt, then you have product vendors. SIs are product-neutral, honest brokers, and usually bring substantial cost savings due to their resell agreements with many product vendors. Who manages all the billing from 100s of different product vendors? Who drives the innovation (a single vision) in partnership with Govt? Moving forward, as Govt starts to become consumption-based and looks to unload its data centers and hw infrastructure (composed of many different vendors/platforms), who is going to buy that and lease it back on a consumption basis? An SI does that. A large SI will have to take on a lot of risk and hope they make it back on consumption. What about staffing key SMEs for upcoming critical migration projects (and managing the project success)? Direct SMEs from actual vendors (msft, VMware, etc.) are in the mid 300s/hour. SIs can provide the same skill sets for 1/2 that. The list goes on... This is just off the top of my head.

Wed, May 3, 2017 Neil

I think the problem is that any *new* system does not stand on its own. So while your piece of the pie could be developed in the way you argue for, some portion of it, in order to work in the wider ecosystem, will have to handle integration. That (to me) implies someone will need to do that integration, and unfortunately, the government has little capability in that space.

Secondly, designing for integration is well and good, but the *possibility* of integration does not make it easy to do. Someone still needs to figure out what gets integrated, what data formats they actually need, and so on. I suspect the reality of a service approach, even for Amazon, was much messier than the perception.

Finally, the last paragraph buries the lede. In my experience there is very little money available for greenfield development. The lion's share is bolting new things onto the old, or upgrading pieces of the old.

