Big data, big deficits at USPS
Is the Postal Service's use of big data a praiseworthy innovation, or an expensive indulgence? (Stock image)
Our recent story on the surprising places big data is being used prompted one reader to comment:
"Ummm... I wouldn't hold the USPS up as a paragon of 'success.' However, I think that you might have identified one of the reasons that USPS is failing. Why do they need a network of supercomputers whose capability exceeds that of NOAA's weather forecast centers? Didn't the mail get delivered back when there were no ZIP codes or barcodes? USPS needs to take a step backwards, away from big data and focus on getting 'back to basics.'"
Frank Konkel responds: Admittedly, delivering the mail does not seem as inherently cool as tracking weather events like Hurricane Sandy or using complex, voluminous data sets to make reasonable climate predictions. But as this follow-up story explains, USPS is using big data to reduce overall costs and detect fraud. The technology is complex -- the data from each scanned mail piece is compared against a database of about 400 billion records in real time through an impressive 16-terabyte in-memory computing environment -- but the payoff is huge. It also matters because USPS operational expenses are not funded through tax dollars; without this kind of system in place, revenue lost to fraud could run into the billions.
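The article doesn't describe how the matching works, but the core idea -- checking each scan event against an in-memory store of prior records to catch anomalies such as reused postage -- can be sketched roughly. Everything below (the class name, the duplicate-postage rule, the facility IDs) is an illustrative assumption, not USPS's actual system:

```python
# Hypothetical sketch of real-time fraud screening against an in-memory store.
# A production system would use a multi-terabyte in-memory database; here a
# plain dict keyed by a mail piece's postage (indicia) ID stands in for it.

class FraudScreen:
    def __init__(self):
        # Maps each postage ID to the facility where it was first scanned.
        self.seen = {}

    def check_scan(self, indicia_id, facility):
        """Return True if the scan looks fraudulent (the same postage ID
        appearing at a different facility), else record it and return False."""
        prior = self.seen.get(indicia_id)
        if prior is not None and prior != facility:
            return True  # same postage seen at two facilities: flag it
        self.seen[indicia_id] = facility
        return False

screen = FraudScreen()
print(screen.check_scan("IMB-0001", "Cleveland"))  # False: first sighting
print(screen.check_scan("IMB-0001", "Cleveland"))  # False: rescan, same place
print(screen.check_scan("IMB-0001", "Denver"))     # True: duplicate postage
```

The dictionary lookup is constant-time, which is what makes per-piece checks feasible at the scale of billions of scans -- the same property an in-memory database provides at far larger size.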
In addition, while "snail mail" might seem outdated, USPS sent out 160 billion pieces of mail in 2012, and people still receive their packages and letters within a few days despite paying only 46 cents per item. Were it not for the efficiency gains and improved fraud detection that big data and supercomputing make possible, it's likely USPS couldn't get the mail out as fast as it does, and it's a near certainty that each letter or package from grandma would cost more to send.
Posted by Frank Konkel on Mar 27, 2013 at 12:10 PM