
LOINC Director Advises Restraint in Health Data Standardization

Redundant or unnecessary information is not worth the labor-intensive effort of health data standardization.


By Kate Monica

A recent op-ed by Daniel J. Vreeman, PT, MS, Director of LOINC and Health Data Standards at the Regenstrief Institute, cautioned healthcare organizations against standardizing all types of health data.

While health data standardization enhances interoperability, not every lab test or data element is worth the sometimes labor-intensive, exhausting effort of standardization. 

Following the 21st Century Cures Act, innovators have developed open application programming interfaces (APIs) and other technologies to help make information accessible for exchange without special effort. Though the Cures Act served as a gateway for progress in interoperability, Vreeman suggested some of its provisions may seem overwhelming to healthcare stakeholders.

“The language in the Cures Act is quite broad, and the thought of trying to make all data elements [interoperable] for all purposes is enough to make many want to give up,” wrote Vreeman in the HIMSS op-ed. “Now seems a good time to remind ourselves that when it comes to data standardization, not all juice is worth the squeeze.”

The increasing availability of data through open APIs built on the Fast Healthcare Interoperability Resources (FHIR) standard requires the often arduous work of representing health data elements with common vocabulary standards so that different applications can understand the information.

LOINC in particular has been integral to enabling this level of standardization. LOINC’s universal code system standardizes the identification of medical test results, observations, and a variety of other clinical measures. It is the most widely used code system in the world.
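To give a concrete sense of what that standardization involves, the sketch below maps hypothetical local laboratory test codes to LOINC codes. The local identifiers and the mapping function are invented for illustration; the LOINC codes themselves (2345-7 for serum or plasma glucose and 718-7 for hemoglobin in blood) are real entries in the LOINC database.

```python
# Minimal sketch: mapping hypothetical local lab test codes to LOINC codes.
# The local codes ("GLU-S", "HGB") are invented for this example; the LOINC
# codes are real entries (2345-7 Glucose [Mass/volume] in Serum or Plasma,
# 718-7 Hemoglobin [Mass/volume] in Blood).
LOCAL_TO_LOINC = {
    "GLU-S": {"loinc": "2345-7", "display": "Glucose [Mass/volume] in Serum or Plasma"},
    "HGB":   {"loinc": "718-7",  "display": "Hemoglobin [Mass/volume] in Blood"},
}

def to_standard_code(local_code: str) -> dict:
    """Return the LOINC coding for a local test code, or raise if unmapped."""
    try:
        return LOCAL_TO_LOINC[local_code]
    except KeyError:
        raise ValueError(f"No LOINC mapping for local code {local_code!r}")

print(to_standard_code("GLU-S"))
```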

As the director of LOINC, Vreeman acknowledged that standardization is not always worthwhile.

“Even within a common type of health data, such as laboratory test results, the effort to standardize every last test is extensive,” he wrote. “And there are diminishing returns.”

Specifically, some information may be redundant.

“When we studied standardized test results within the Indiana Network for Patient Care, the largest inter-organizational clinical data repository in the country, there was a Pareto type distribution where a few tests accounted for most of the result volume,” wrote Vreeman. “Indeed, less than 20% of the tests accounted for more than 99% of the volume, and that same set of tests accounted for all of the results for 99% of patients.”
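The analysis Vreeman describes can be approximated on any results repository by ranking tests by result volume and measuring cumulative coverage. The sketch below uses invented counts rather than Indiana Network for Patient Care data, and simply reports how many of the highest-volume tests are needed to cover 99 percent of results.

```python
# Illustration of the Pareto-style analysis Vreeman describes, using invented
# result counts (not INPC data). Tests are ranked by result volume and we
# report how many are needed to cover 99% of all results.
result_counts = {
    "2345-7 Glucose": 950_000,
    "718-7 Hemoglobin": 900_000,
    "2823-3 Potassium": 880_000,
    "rare send-out test A": 40,
    "rare send-out test B": 12,
}

total = sum(result_counts.values())
covered = 0
tests_needed = 0
for name, count in sorted(result_counts.items(), key=lambda kv: kv[1], reverse=True):
    covered += count
    tests_needed += 1
    if covered / total >= 0.99:
        break

print(f"{tests_needed} of {len(result_counts)} tests cover "
      f"{covered / total:.1%} of all results")
```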

Additionally, some data elements are more challenging to standardize than others. The varying effort required to standardize different types of data, along with the potential for redundant information, warrants discretion.

“Both nationally and locally, our standardization efforts benefit from a clear picture of what we [are] trying to accomplish and which data are needed for those purposes,” Vreeman stated. “Some fruit is always lower than others.”

Vreeman stated that, in his experience, the data elements best suited for standardization are those that are already available in a discrete electronic format, have a clear business purpose, and offer value to clinicians.

“We do need to be mindful that different end goals will require different kinds of standardization,” he reasoned.

Vreeman pointed to ONC’s Common Clinical Data Set and the FHIR US Core Implementation Guide as useful resources for learning how to implement standards with an open API.

“The FHIR US Core’s Vital Signs profile provides a good example of picking a high value target and being precise about the standardized structure and codes (semantics) we should use for these data,” he stated.
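For illustration, a heart rate observation shaped along the lines of that profile might look like the sketch below. The LOINC code 8867-4 (Heart rate), the vital-signs category, and the UCUM unit code are standard identifiers; the patient reference, timestamp, and value are invented, and this is not a complete or validated US Core resource.

```python
import json

# Hedged sketch of the kind of structure the US Core Vital Signs profile
# prescribes: a FHIR Observation with a vital-signs category, the standard
# LOINC code 8867-4 (Heart rate), and a UCUM-coded value. The patient
# reference, timestamp, and numeric value are invented for illustration.
heart_rate_observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs",
        }]
    }],
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",
            "display": "Heart rate",
        }]
    },
    "subject": {"reference": "Patient/example"},  # hypothetical patient
    "effectiveDateTime": "2018-01-15T09:30:00Z",
    "valueQuantity": {
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",
    },
}

print(json.dumps(heart_rate_observation, indent=2))
```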

Despite this progress, Vreeman stated that some areas of healthcare, such as radiology procedures, are still in need of improved standardization.

“As we expand the breadth and depth of health data our applications use, especially in emerging areas such as social determinants of health, a key consideration should be understanding which variables are worth the standardization squeeze,” he concluded.

The Regenstrief Institute’s Center for Biomedical Informatics (CBMI) is currently at work on an initiative to advance interoperability and patient safety by developing an automated patient EHR matching solution, supported by a $1.7 million grant.

During the five-year project, researchers will develop and test evidence-based solutions to improve patient matching accuracy and reduce patient harm resulting from misidentification. 
