
Alerts and Updates

Joint Commission and Coalition for Health AI Issue Guidance on Provider Use of AI

September 29, 2025

On September 17, 2025, the Joint Commission and Coalition for Health AI issued a joint guidance document entitled “Responsible Use of AI in Healthcare” to help providers implement AI while mitigating the risks of its use. The guidance provides seven elements that constitute responsible AI use in healthcare and discusses how provider organizations can implement them, as follows.

AI Policies and Governance Structures

Establish written policies for AI use and create formal roles for the parties responsible for implementing, updating and enforcing those policies. Key roles should be filled by individuals with relevant technical experience.

Patient Privacy and Transparency

AI policies should address data access, use and protection. They should include disclosures and educational tools to inform consumers on how providers are using AI.

Data Security and Data Use Protections

AI policies should include specific steps to protect data. These should include adding the following terms to data-use agreements:

  • Limits on permissible uses of data
  • A “minimum necessary data” rule
  • A prohibition on reidentification
  • Third-party security obligations
  • Audit rights

Internal security elements should include encryption, access controls, regular security assessments and incident-response plans.

Ongoing Quality Monitoring

This step should begin with gathering as much information as possible from AI vendors before implementing any AI tools. Active-monitoring policies should include the following:

  • Regularly validating and testing AI tools
  • Evaluating the quality and reliability of AI data
  • Assessing outcomes from AI tool use
  • Developing an AI dashboard
  • Creating a process for reporting errors to leadership and vendors

Voluntary, Blinded Reporting of AI Safety-Related Events

Implement policies for voluntary, blinded reporting of AI safety-related events. These can draw on existing structures, such as the Joint Commission’s sentinel-event process. This step allows industry actors to learn from one another and, by encouraging voluntary improvement, may lessen the need for regulation.

Risk and Bias Assessment

Implement processes to identify and address biases that pose a risk to patients or providers. For example, if an AI tool was trained on data from young, healthy patients, it may not work accurately for older, less healthy patients. Risks from such biases include safety errors, misdiagnoses, administrative burden and reduced quality of care. Specific steps organizations should take include the following:

  • Determine whether AI tools were developed with appropriate datasets, representative of the populations to be served.
  • Determine whether AI tools have undergone bias-detection assessments.
  • Regularly monitor and audit AI tools to identify and manage biases.

Education and Training

Train providers on the proper use and limitations of their AI tools, and educate them on how AI works.

Pursuing the above objectives should ensure a comprehensive approach to AI use, create internal accountability for that use, protect patients and their data, enhance patient outcomes, mitigate provider and organizational liability, and promote consumer confidence in providers and their use of AI.

For More Information

If you have any questions about this Alert, please contact Erin M. Duffy, Matthew C. Mousley, Taylor Hertzler, any of the attorneys in our Health Law Practice Group, any of the attorneys in our Artificial Intelligence Group or the attorney in the firm with whom you are regularly in contact.

Disclaimer: This Alert has been prepared and published for informational purposes only and is not offered, nor should be construed, as legal advice. For more information, please see the firm's full disclaimer.