Data repository could cut federal medical costs
Federal, state agencies hope analytics boost health care quality, squash fraud
Health programs for seniors and low-income individuals, already straining federal and state resources, threaten to gobble up an even larger chunk of the economy when waves of boomers retire. The Centers for Medicare and Medicaid Services is banking on improving how it manages claims data as one way to help lessen the impact.
Agency officials will use and reuse claims data to formulate health care policy and to prevent fraud and abuse, said Wallace Fung, deputy director and chief technology officer at CMS in the Health and Human Services Department.
CMS recently began operation of the first version of its Integrated Data Repository, which holds the first 10 months of Medicare claims data from its Part D prescription drug program. By spring, CMS will combine and integrate data from claims for hospital stays and physician visits, referred to as Medicare Part A and Part B, with the drug data, for a single view.
“Literally, it’s a gold mine. It gives us the ability to look from a national perspective across Parts A, B and D for trends,” he said.
CMS will analyze the data, for example, to discern trends in waste and fraud for program integrity purposes and to spot duplicate tests and treatments that drive up costs. The agency also will use the repository to help improve quality of care and to supply patient data for electronic health records, he said.
CMS last year awarded Teradata, a division of NCR Corp. of Dayton, Ohio, a five-year blanket purchasing agreement to implement the repository and provide capacity, data mining and business intelligence.
Ultimately, the data repository will integrate physician, hospital and drug data, and give CMS a complete picture of health care delivery, instead of islands of data, said Stephen Brobst, Teradata chief technology officer.
“Only when you see the data in its totality and detail can you make good decisions about the quality of care and the efficacy of that care delivery,” he said.
The repository receives data from legacy systems that process 40 billion claims annually.
“Data is stored all over the place, and there are a variety of ways to look at it,” Fung said.
Teradata provides a bridge application from Informatica Corp. of Redwood City, Calif., to extract, transform and load data from those older systems into a model for the repository, Brobst said. CMS loads the data into a Teradata data warehouse, a massively parallel processing architecture built from collections of Intel CPUs using standard high-volume server technology.
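The extract-transform-load step Brobst describes can be illustrated with a minimal sketch. The record layouts, field names and transform functions below are hypothetical; the actual Informatica mappings and the repository's data model are not public.

```python
# Minimal ETL sketch: normalize records from two hypothetical legacy
# claim layouts into one unified model. All field names are illustrative.

def transform_part_d(rec):
    """Map a hypothetical Part D (drug) legacy record to the unified model."""
    return {
        "beneficiary_id": rec["bene"],
        "provider_id": rec["pharmacy"],
        "claim_type": "D",
        "amount": float(rec["cost"]),
    }

def transform_part_b(rec):
    """Map a hypothetical Part B (physician) legacy record to the unified model."""
    return {
        "beneficiary_id": rec["patient_no"],
        "provider_id": rec["npi"],
        "claim_type": "B",
        "amount": float(rec["charge"]),
    }

def load(sources):
    """Extract from each legacy source, transform, and collect in one store."""
    warehouse = []
    for records, transform in sources:
        warehouse.extend(transform(r) for r in records)
    return warehouse

# Two legacy feeds with different layouts land in a single schema.
part_d = [{"bene": "B001", "pharmacy": "P9", "cost": "42.50"}]
part_b = [{"patient_no": "B001", "npi": "N7", "charge": "130.00"}]
claims = load([(part_d, transform_part_d), (part_b, transform_part_b)])
```

Once both feeds share one schema, claims for the same beneficiary across Parts A, B and D can be examined side by side, which is the "single view" the repository is meant to deliver.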
“The secret sauce is the BYNET, which lets us put together all Intel servers so they behave like one big database server,” he said.
Teradata software lets those servers communicate efficiently with each other across the BYNET. Teradata also has a relational database management system designed for business intelligence. A tool from Cognos Inc. of Burlington, Mass., ReportNet, gives CMS workers access to the data.
For example, CMS could take drug claims data in the context of other detailed claims submitted by the same beneficiary, provider or pharmacy and analyze it for the probability that a claim is fraudulent. The software automates pattern detection and flags suspicious claims so a CMS staff member can determine whether they merit follow-up.
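A toy version of that kind of automated flagging might score each claim against the same beneficiary's claim history and surface the outliers for human review. The threshold, fields and rule below are illustrative assumptions, not CMS's actual detection logic.

```python
from statistics import mean, pstdev

# Hypothetical sketch: flag drug claims whose amounts sit far outside the
# same beneficiary's own claim history. The z-score threshold is illustrative.

def flag_outliers(claims, z_threshold=2.0):
    """Return claims whose amount exceeds the beneficiary's mean by more
    than z_threshold standard deviations."""
    by_bene = {}
    for c in claims:
        by_bene.setdefault(c["beneficiary_id"], []).append(c)
    flagged = []
    for group in by_bene.values():
        amounts = [c["amount"] for c in group]
        mu, sigma = mean(amounts), pstdev(amounts)
        if sigma == 0:  # too little history to judge (e.g., one claim)
            continue
        for c in group:
            if (c["amount"] - mu) / sigma > z_threshold:
                flagged.append(c)
    return flagged

claims = [
    {"beneficiary_id": "B001", "amount": 20.0},
    {"beneficiary_id": "B001", "amount": 25.0},
    {"beneficiary_id": "B001", "amount": 30.0},
    {"beneficiary_id": "B001", "amount": 22.0},
    {"beneficiary_id": "B001", "amount": 28.0},
    {"beneficiary_id": "B001", "amount": 5000.0},
    {"beneficiary_id": "B002", "amount": 35.0},
]
suspicious = flag_outliers(claims)  # only the $5,000 claim stands out
```

The flagged claims would then go to a staff member for review, matching the article's human-in-the-loop follow-up step.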
“You do the analysis at the aggregate level, but you drill down to do the problem-solving or opportunity identification,” Brobst said.
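The aggregate-then-drill-down workflow Brobst describes can be sketched in a few lines: summarize claims by provider to spot the outlier, then pull the individual claims behind that aggregate. Field names here are hypothetical.

```python
from collections import defaultdict

# Sketch of "analyze at the aggregate level, drill down to problem-solve":
# rank providers by total billed, then list the claims behind the top total.

def totals_by_provider(claims):
    """Aggregate view: total amount billed per provider."""
    totals = defaultdict(float)
    for c in claims:
        totals[c["provider_id"]] += c["amount"]
    return totals

def drill_down(claims, provider_id):
    """Detail view: the individual claims behind one provider's total."""
    return [c for c in claims if c["provider_id"] == provider_id]

claims = [
    {"provider_id": "N7", "amount": 130.0},
    {"provider_id": "N7", "amount": 900.0},
    {"provider_id": "P9", "amount": 42.5},
]
totals = totals_by_provider(claims)
top = max(totals, key=totals.get)   # aggregate view singles out "N7"
detail = drill_down(claims, top)    # drill-down: N7's individual claims
```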