Census successfully uses handhelds to verify addresses

Agency's earlier difficulties provide lessons for others

The first phase of the 2010 decennial census -- verifying every address in the United States -- was completed ahead of schedule this month thanks to two key factors: Highly qualified, temporary workers were available because of the economic slowdown, and an armada of new handheld computers actually performed well.

How the Census Bureau's handheld strategy evolved

January 2002: The Census Bureau seeks ideas on how to use handheld computers for the 2010 census.

January 2005: The bureau makes plans to acquire mobile computers for 500,000 to 600,000 field workers for the 2010 head count.

March 2006: Harris wins a five-year, $600 million contract from the bureau to build an automated system capable of collecting data in the field, including handheld computers for field workers.

January 2007: Census officials prepare to use handhelds in dress rehearsal.

March 2008: Facing rising costs and confusing requirements, the bureau considers not using handhelds for 2010 census.

April 2008: Census officials announce they will use paper questionnaires, not handheld computers, for follow-up visits because of schedule, performance and cost problems.

April 2008: The Government Accountability Office says the bureau should have defined the requirements for the handhelds earlier.

September 2008: GAO says the handhelds might not be ready to canvass addresses.

May 2009: Census officials say handhelds perform well during address canvassing work.

To be sure, the Census Bureau expected neither development at this time last year. The economic crash of late 2008 surprised everyone from Wall Street to the White House. And until the address-canvassing phase, the handheld devices had been a persistent source of bad news for the bureau.

In 2008, officials had scrapped plans to use the devices for the critical follow-up visits to people who do not return census questionnaires via mail. The devices, provided by Harris, experienced transmission problems and froze during tests, according to a Government Accountability Office report.

Despite that key -- and expensive -- setback, the handheld devices still enabled workers to verify addresses more quickly and accurately, said James Christy, director of the bureau’s Los Angeles regional office. That discovery came too late to change overall plans for 2010, however. Based on the earlier, unsuccessful tests, Census officials had already dropped plans to use the devices in follow-up interviews and instead reverted to old-fashioned paper forms.

Even so, the handheld devices, which are connected to digital cellular networks, offered another benefit to the bureau's 2010 exercise, Christy said. They allowed managers to monitor workers’ productivity in near real time, and that information, in turn, allowed managers to make quicker and more effective decisions on how to deploy workers.

“In this case, with the handhelds, we were able to evaluate that in almost real time,” he said. “We had some areas of the country that didn’t have digital cellular coverage, so we still had to rely on plugging in to a phone line and dialing into our system. But even that was the next morning rather than the two, three or four days like it was before.”

In the past, Census managers had to rely on employees to let them know if work was being completed, Christy said. Until a courier or the U.S. Postal Service delivered the written reports, managers had no way of knowing what had been completed.

Lessons Learned

Other government agencies can learn from the bureau’s missteps, said Bill Damaré, regional vice president of ESI International, which provides project management training to the bureau.

The first mistake was basing the requirements for the new handheld devices on existing technology, Damaré said. Agencies should issue requirements that reflect their needs and let providers figure out how to meet them. Specifying technologies rather than describing desired outcomes can derail a project before it leaves the station, he said.

“In this case, that is exactly what happened to Census,” Damaré said. “They wanted handheld devices with [Global Positioning System] capability that would be simple to use and that allowed each surveyor to run around the neighborhoods and link up with a satellite to post information immediately.”

Faulty requirements resulted in the project’s budget rising from $1.5 billion to about $3 billion, Damaré said. With time running out, Census officials decided to revert to clipboards and pencils for follow-up visits.

“Then Census [officials] just decided to pull the plug because they knew they were never going to get what they needed,” he said.

The major lesson learned from the project is that all stakeholders should be brought in early to develop the requirements for new technology. That group should include the people who will use the technology, agency leaders and potential vendors, Damaré said.

Through that process, an agency might discover that a certain requirement will be difficult or impossible to achieve. It might also find that existing technology can satisfy its requirements. Either way, Damaré said, clear and realistic requirements make technology development projects less risky.

Expensive Experiment

The agency's second mistake was planning to use untried technology on a massive scale.

The decennial census is the largest peacetime mobilization the government performs, said Arnold Jackson, the bureau’s associate director of the decennial census. The bureau hires more than 1 million people to count more than 300 million Americans. In performing their duties, canvassers cover more than 9 million blocks.

The large deployment of temporary workers will last until about August 2010, Jackson said.

The peaks and valleys of work at Census make it difficult to develop technology, he said. Unlike other agencies, the bureau doesn’t have the luxury of developing, testing and deploying new technology over an extended period. And Census uses its technology intensively for about 18 months rather than the seven to 10 years typical at other agencies.

“We have the challenge of having new technology or new procedures that get tested, but never at a scale that approaches the near-chaotic rhythm of an actual decennial,” Jackson said. “So the challenge of doing it once a decade is just one that will always be there.”

About the Author

Doug Beizer is a staff writer for Federal Computer Week.
