By Steve Kelman

Ash Carter on tech and the public good

I recently had a chance, at our weekly faculty research lunch, to hear a talk by former Secretary of Defense Ash Carter, who returned to the Kennedy School faculty after his service in DOD. Our offices are near one another, and we often run into each other in the hall -- both Carter and I like to squeeze in quick walks between meetings -- but this was a substantive presentation.

Carter’s topic was not defense or national security policy, or even defense management, but rather a different question that has preoccupied him a lot recently: how we can work as a society to align what the tech world is doing as closely as possible with the public good. I found his remarks very cogent and valuable.

The center of our national discussion on tech has been moving recently away from optimism -- this is a great thing for society, and we should unleash the tech world to do its thing -- to a much more skeptical view with a hefty dose of worry about bad things technology can bring. Carter shares some of the newer skepticism. I will confess that my intuition is to be somewhat less worried about the tech downside than Carter is, but frankly I trust his judgment on this more than my own.

Carter interestingly compared the challenges of change in our age with an earlier transition from an agricultural to an industrial society a hundred-plus years ago. Many countries didn’t make that transition as well as the United States did. As someone with lots of experience in public service, Ash attributed much of our success to government. The legislation and regulation of the progressive era -- ranging from antitrust to worker protection to social security -- took many of the rough edges off that transformation and allowed us to navigate it successfully.

So maybe it shouldn’t be surprising that Carter sees a role for government in dealing with the current tech distemper. As he sees it, what comes out of the tech companies on their own are statements of the type “We’re kinda sorry,” followed by not actually doing very much. He compared the ability of tech firms to quickly locate and take down content that might harm them commercially with their problems doing similar things for online political incitement. He would like to see a reform era in tech analogous to the progressive era in American history that helped us successfully manage the transition from agriculture to industry.

Carter believes that the provision of the Communications Decency Act of 1996 that shields online platforms from liability for the content posted on them -- which was enacted during the height of the “don’t regulate tech” era -- needs to be amended. It eliminated the ability of the courts to encourage industry self-regulation, removing incentives for the industry to self-police. “This provision doesn’t meet a common-sense test,” he said, and suggested that antitrust has a role to play here. Though he didn’t favor breaking up the tech giants, Carter felt there were grounds for antitrust intervention even if one failed to demonstrate that the current situation generates direct economic harm (as opposed, say, to dangers to democracy or privacy), as more light-touch antitrust advocates suggest.

One of the questions from a colleague was whether Carter had given too sanguine a perspective on government regulation. In the experience of the questioner, regulators were often captured by those they were to regulate and produced regulations that created windfalls for regulated companies at the expense of the public good. Carter’s answer was that the question was not whether real-world regulation was perfect, but whether it was better than no regulation.

A good part of Carter’s talk dealt with problems with AI algorithms. If the algorithms steering people toward certain products make mistakes, no one is much the worse off for it. However, when algorithms are used to make life-crucial decisions, about policing or health treatments -- or in targeting the use of weapons in combat -- the costs of mistakes are much higher. 

Here Carter was very cautious about allowing actual decisions to be directed automatically by an algorithm. He noted that while he was secretary, the department issued a policy -- which he said he wrote himself -- that DOD would not field autonomous weapons. Before a weapon may be launched, DOD policy requires human judgment (not merely “a man in the loop”) in making the decision.

As for how and why algorithms might turn out to be flawed in their ability to accurately predict the effects of a certain course of action, Carter said that the way some algorithms work makes it difficult to untangle how they reach their recommendations. This visibility needs to be built in, he said. Interestingly, though, he was more optimistic about the ability to develop algorithms to check other algorithms for use in the real world -- essentially training AI to check itself. “We can use algorithms as a countermeasure to algorithms,” he said. And he felt that the path used in aircraft design -- rigorous testing of designs before they are put into use -- was a good model for AI (despite the recent 737 Max problems).

“We are an inventive people,” Carter stated. If tech companies haven’t developed imaginative enough solutions to tech problems, whether they involve privacy, online hate, or algorithms, that is likely because they haven’t been pressed enough to do so. He wants to be part of the effort to coax this along, so tech can better serve the public good.

Toward the end of his talk Carter talked more generally about decision-making in government, with observations that apply to anyone called on in their job to make a lot of decisions. He noted that as secretary, there were many times when doing nothing was not a viable option, but it was by no means clear which of the many choices on the table was optimal. In such situations, he stated, as with thinking about government regulation, the right standard was not whether the decision was as good as some optimal one, but whether it was better than no decision at all.

This allowed him to do what he had to do -- make decisions -- while making the world better, though not ideal. This is good advice for anyone in a decision-making position.

Posted by Steve Kelman on May 08, 2019 at 6:21 AM
