How Top Healthcare CTOs and CIOs Can Interpret JAMA Paper:
‘The Compelling Need for Shared Responsibility of AI Oversight: Lessons From Health IT Certification’
The paper “The Compelling Need for Shared Responsibility of AI Oversight: Lessons From Health IT Certification” is by Raj Ratwani, PhD, Dr. Christopher Longhurst, et al. (h/t Dr. Felix Ankel for sharing it). This paper underscores the critical importance of a multi-faceted approach to AI governance in healthcare, one that integrates tactics, governance, operations, technology, and leadership. The JAMA paper calls for rigorous certification and continuous oversight to ensure AI systems are safe, secure, and trustworthy, aligning with the IEEE UL 2933 Trust, Identity, Privacy, Protection, Safety, and Security (TIPPSS) framework. By way of context, ANSI-accredited standards like IEEE UL 2933, developed by 250 experts across 22 countries (disclaimer: I’m co-chair of the Trust subgroup), are designed with a level of rigor near equivalent to, or even more stringent than, the accreditation standards of ACGME, ensuring global consistency, safety, and trustworthiness in their implementation.
The JAMA paper highlights the need for robust leadership to drive ethical AI deployment and the integration of advanced technology that adheres to high standards of safety and governance.
Everyone who knows me knows that I believe operations rule the future of a tech-enabled medical system. We therefore frame the proposed alignment between the standard and the JAMA paper in the following terms: Tactics, Governance, Operations, Technology, and Leadership.
Tactics
- Trust (T): The paper advocates for rigorous certification processes that build and maintain trust in AI systems within healthcare. This includes proactive measures to prevent harm, ensuring AI algorithms meet stringent safety and ethical standards.
- Protection (P): The focus on assurance testing and certification directly aligns with the need to protect AI systems from vulnerabilities and unintended consequences, ensuring they are resilient and safe for patient care.
Governance
- Ethical Guidelines and Accountability: The paper highlights the necessity of shared responsibility in AI oversight, advocating for a governance framework where developers, users, and regulators work together. This ensures AI systems are deployed ethically and remain accountable to rigorous standards throughout their lifecycle.
- Leadership: Effective governance is underpinned by strong leadership, which the paper implies is essential for driving the ethical deployment of AI. Leaders in healthcare must champion the integration of robust governance practices to navigate the complexities of AI in medicine.
Operations
- Safety (S): The operational emphasis on regular recertification and continuous monitoring ensures that AI systems remain safe and effective over time. This is crucial for maintaining patient safety in dynamic clinical environments.
- Security (S): Although not explicitly stated, the paper’s call for ongoing oversight and updates inherently supports the security of AI systems, ensuring they are protected from evolving threats and vulnerabilities.
Technology
- Advanced AI and IoT Integration: The paper underscores the importance of deploying AI technology that is rigorously tested and certified, ensuring it meets high standards of safety, interoperability, and ethical use. This technological foundation is critical for achieving the TIPPSS framework’s goals.
- Innovation within Standards: The paper suggests that while innovation is vital, it must occur within the boundaries set by robust standards like IEEE UL 2933, ensuring that technological advancements do not compromise safety or ethical considerations.
Leadership
- Driving Ethical AI Deployment: The paper implicitly calls for leadership that prioritizes the ethical deployment of AI. Leaders in healthcare must ensure that their organizations adhere to high standards of governance and operations, fostering a culture of accountability and continuous improvement.
- Strategic Vision: Effective leadership is needed to align the deployment of AI technologies with long-term strategic goals, ensuring that innovations contribute positively to patient care and system efficiency while adhering to stringent safety and ethical standards.
Summary and Key Takeaways for a Physician Audience (h/t Dr. Art Douville, Medigram Chief Medical Officer)
The JAMA paper, “The Compelling Need for Shared Responsibility of AI Oversight: Lessons From Health IT Certification,” emphasizes the importance of a collaborative approach to AI governance in healthcare. It highlights the need for rigorous certification and continuous oversight to ensure that AI systems are safe, secure, and trustworthy. This approach aligns with the IEEE UL 2933 framework, which focuses on Trust, Identity, Privacy, Protection, Safety, and Security (TIPPSS).
Key Takeaways:
- Building Trust in AI: The paper advocates for rigorous certification processes that build and maintain trust in AI systems, ensuring they meet stringent safety and ethical standards to prevent patient harm.
- Ethical Governance: Effective governance requires a shared responsibility between developers, users, and regulators to deploy AI systems ethically and ensure they remain accountable throughout their lifecycle.
- Operational Safety and Security: Continuous monitoring and recertification are crucial for maintaining the safety and security of AI systems, ensuring they remain effective and protected against evolving threats.
- Leadership in AI Deployment: Strong leadership is essential to drive the ethical deployment of AI in healthcare, ensuring that technological advancements are aligned with long-term strategic goals and patient care improvements.
By focusing on these areas — tactics, governance, operations, technology, and leadership — healthcare organizations can effectively navigate the complexities of AI deployment, ensuring these technologies are used to benefit patients while engendering and assuring trust.
Sherri Douville Bio
Sherri Douville leads at the intersection of healthcare, technology, and AI governance. As the CEO of Medigram, she spearheads the development and deployment of secure, AI-enabled mobile solutions to transform healthcare communication and decision-making. Sherri’s comprehensive experience also includes serving as a series editor for Taylor & Francis and working as a serial, best-selling healthcare technology author, engineering coauthor, and book reviewer. In her role as Co-Chair of the Trust Subgroup for IEEE UL 2933, she contributes to the development and advancement of global standards focused on trust, identity, privacy, protection, safety, and security (TIPPSS) in clinical IoT and AI systems.
With a deep commitment to ensuring that technology serves the highest standards of patient safety and medical ethics in healthcare, Sherri has been instrumental in shaping the future of healthcare technology governance. She is also the founder and Chair of the Trustworthy Technology and Innovation Consortium (TTIC), where she fosters collaboration among industry leaders to drive forward-thinking standards and policies.
Sherri is a recognized speaker frequently addressing complex challenges at the confluence of technology and healthcare. Her work is particularly relevant in a post-Chevron world, where the role of rigorous standards like IEEE UL 2933 becomes ever more critical for compliance, operational excellence, and leadership in healthcare innovation.