
Biomedical Odyssey

Life at the Johns Hopkins School of Medicine


From Research to Clinic: Regulatory Frameworks for AI in Medicine


What steps are clinical teams responsible for taking to ensure that artificial intelligence (AI) recommendations are accurate and unbiased before using them in patient care? If a care team member uses AI to automatically draft a note in a patient’s chart, how should this be indicated in the electronic medical record? And once an AI model has been shown to perform accurately overall, must a physician still be able to explain its decision in an individual case?

These questions come up as the capacity for AI to successfully perform medically relevant tasks continues to improve at a remarkable pace. However, despite this scientific progress, the regulatory frameworks needed to govern clinical implementation of AI in safe, ethical and equitable ways are still being constructed.

In practice, the absence of comprehensive regulatory guidelines means that many AI-related technologies — from automated analyses of radiology and pathology reports to chatbots for consultation — are being used for research and performance benchmarking on anonymized, previously collected data, or are being used by individual teams at academic medical centers alongside standard physician interpretation as part of ongoing clinical research. These are valuable endeavors needed to understand the scope, utility, and potential risks and benefits of these technologies. But as AI-based products move toward commercialization and widespread use, the importance of building regulatory frameworks equipped to govern AI use in the clinic cannot be overstated.

The effort to establish initial standards for the use of AI models in electronic medical records is already underway: the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) final rule from the Office of the National Coordinator for Health Information Technology (ONC) took effect in February 2024. HTI-1 lays out a framework for the safe use of protected health data.

A key provision of HTI-1 concerns algorithmic transparency — in other words, ensuring that machine learning models are not complete “black boxes.” HTI-1 requires that information be made available about how clinically applied algorithms are used in decision-making, and about the data and processes used to reach those decisions. This includes defining the scope within which an algorithm is designed to be used, explaining the procedures used to train and validate it, and outlining processes for measuring its performance and updating the technology. These measures are important safeguards that help clinicians evaluate the correctness of these algorithms and ensure that using AI does not inadvertently propagate biases or false information. Beyond this, HTI-1 helps to establish interoperability standards that promote information and data sharing among approved systems while ensuring that security is maintained.
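For a concrete sense of what this kind of documentation might look like in practice, the minimal Python sketch below models a transparency record for a hypothetical imaging-triage algorithm. The field names, the example model and all values are illustrative assumptions for this post, not the rule’s official list of source attributes.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AlgorithmTransparencyRecord:
    """Hypothetical sketch of the kind of documentation HTI-1-style
    transparency provisions call for; field names are illustrative,
    not the rule's official source-attribute list."""
    name: str
    intended_use: str                       # clinical scope the algorithm is designed for
    out_of_scope_uses: List[str]            # settings where it should not be applied
    training_data: str                      # how training data were collected and curated
    validation_procedure: str               # how performance was validated before use
    performance_metrics: Dict[str, float]   # e.g., sensitivity/specificity on held-out data
    monitoring_plan: str                    # how ongoing performance is measured
    update_process: str                     # how and when the model is revised

# Example entry for a made-up radiograph triage model
example = AlgorithmTransparencyRecord(
    name="chest-xray-triage-v2",
    intended_use="Prioritize adult chest radiographs with suspected pneumothorax",
    out_of_scope_uses=["pediatric imaging", "non-chest radiographs"],
    training_data="De-identified radiographs from three academic centers, 2015-2021",
    validation_procedure="Held-out test set compared with consensus radiologist reads",
    performance_metrics={"sensitivity": 0.94, "specificity": 0.91},
    monitoring_plan="Quarterly audit of outputs against final radiology reports",
    update_process="Revalidation and documentation review before any model update",
)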

This rule is expected to have wide-reaching effects. The ONC certifies the health information technology (IT) used in 96% of hospitals and by 78% of office-based physicians in the United States. Although HTI-1 does not mandate specific algorithms or AI technologies, it establishes a framework for their use in clinical practice and helps ensure that AI adoption in medical settings meets standards for fairness, transparency and security — an important first step in bridging AI research and development with mainstream clinical practice.



