
Artificial Intelligence is transforming healthcare on a global scale, with potential to deliver equitable, affordable, and high-quality healthcare solutions. As we enter a new era where AI plays a central role in software and intelligence, setting standards for its responsible use becomes crucial. Standards are essential for ensuring that healthcare AI systems are safe, effective, and trustworthy, as well as for scaling AI from niche applications to system-wide infrastructure. In this article, we explore the stakeholders, roles, and responsibilities within healthcare AI, along with an overview of current and future AI standards shaping this vital industry.

Layers of the Healthcare AI Stack: Developers, Platforms, and Providers

Think of healthcare AI as a multi-layered stack:

  1. Frontier Model Providers: Companies like Anthropic, OpenAI, and Meta develop foundational models that power complex AI applications.
  2. Platform Providers: Cloud hyperscalers like AWS, Azure, and Google Cloud provide the infrastructure where healthcare AI applications are built and deployed.
  3. Healthcare Providers and IT: This top layer integrates and operates AI solutions to deliver quality services to patients.

1. Frontier Model Providers: Pioneering AI Safety

At the base of this stack, frontier model providers focus on developing advanced AI models capable of complex reasoning and orchestration. Their primary responsibility? Establishing AI safety standards to prevent harmful use or misuse.

For example, Anthropic has introduced AI Safety Levels (ASL) as part of its Responsible Scaling Policy, applying graduated safety measures that tighten as models become more capable. Additionally, the Frontier Model Forum, a coalition of leading AI companies, works to advance safe AI development, addressing the ethical and safety challenges inherent to powerful AI models.

2. Platform Providers: Building the Responsible AI Infrastructure

The middle layer consists of cloud platforms where healthcare AI is developed and operated. Platform providers, or “hyperscalers” like AWS and Google Cloud, manage everything from physical infrastructure to data security and encryption, enabling smooth, scalable deployment of AI across healthcare. As healthcare software increasingly migrates to the cloud, AI is following suit.

Focus Areas:

  • Developing responsible AI standards
  • Ensuring privacy and security
  • Providing explainability and transparency tools
  • Implementing fairness considerations
  • Establishing safety mechanisms
  • Creating governance best practices

For example, AWS has published its Responsible AI framework, prioritizing education, science, and customer needs across the AI lifecycle.

3. Healthcare Providers: Integrating AI Safely and Effectively into Practice

Healthcare providers and their IT teams are the frontline integrators of AI, using it to improve patient care and streamline workflows. These providers need to ensure that AI systems meet quality standards, are unbiased, and are rigorously tested. Initiatives like the Coalition for Health AI (CHAI) in the U.S. bring together healthcare organizations, academic institutions, and tech companies to collaborate on best practices. CHAI provides an Assurance Standards Guide to help evaluate AI solutions across five key areas: (1) Usefulness and Efficacy, (2) Fairness and Equity, (3) Safety, (4) Transparency, and (5) Privacy and Security.
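To make the five-area evaluation concrete, here is a minimal sketch of how a provider's IT team might track an assurance review. The five area names come from the CHAI guide as described above; the `AssuranceReview` class, its pass/fail scoring, and the example solution name are illustrative assumptions, not part of any CHAI specification.

```python
from dataclasses import dataclass, field

# The five assurance areas named in CHAI's Assurance Standards Guide.
CHAI_AREAS = [
    "Usefulness and Efficacy",
    "Fairness and Equity",
    "Safety",
    "Transparency",
    "Privacy and Security",
]

@dataclass
class AssuranceReview:
    """Hypothetical internal record of an AI solution's evaluation."""
    solution: str
    # Maps each area to a pass/fail flag and reviewer notes.
    findings: dict = field(default_factory=dict)

    def record(self, area: str, passed: bool, notes: str = "") -> None:
        if area not in CHAI_AREAS:
            raise ValueError(f"Unknown assurance area: {area}")
        self.findings[area] = {"passed": passed, "notes": notes}

    def is_complete(self) -> bool:
        # All five areas must be assessed before sign-off.
        return set(self.findings) == set(CHAI_AREAS)

    def approved(self) -> bool:
        return self.is_complete() and all(
            f["passed"] for f in self.findings.values()
        )
```

The point of the sketch is that approval is gated on every area being assessed, not just the ones that are easy to measure.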

Regulatory Landscape for AI in Healthcare

Global and regional regulations are formalizing healthcare AI standards. Three main areas of regulation impact healthcare AI: medical device standards, AI-specific guidelines, and health data privacy.

1. Medical Device Standards: Regulating AI for Patient Safety

AI in healthcare is often regulated as a “medical device” (the FDA uses the term Software as a Medical Device, or SaMD). The FDA in the U.S., for instance, reviews AI-based devices through established premarket pathways like 510(k) and De Novo classification. Similarly, Europe’s Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) have frameworks that apply to healthcare AI, focusing on safety and efficacy. These regulations are evolving to better address the dynamic nature of AI compared to traditional devices, ensuring that AI systems remain safe as they are updated and improved.

2. AI-Specific Regulation: Pioneering Standards in the EU

The EU’s AI Act is a pioneering regulation that classifies AI systems by risk level, requiring strict compliance for high-risk applications like healthcare AI. Key aspects include:

  • Risk-Based Classification: High-risk applications (e.g., medical AI) face stringent conformity requirements, while lower-risk applications are subject mainly to transparency obligations.
  • General Purpose AI (GPAI): GPAI models that pose systemic risks due to their broad applications undergo rigorous testing, documentation, and cybersecurity protections.

Other regions, including the U.S. and China, are also working on their own AI regulatory frameworks, which may lead to both convergent and divergent standards.
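The risk-based approach above can be sketched as a simple tiered lookup. The tier names loosely follow the EU AI Act's structure described in the bullets; the specific use-case-to-tier mapping is a simplified, hypothetical example, not the legal text, and any real deployment would require a formal legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers modeled on the EU AI Act's risk-based approach."""
    HIGH = "strict conformity requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical pre-assessment a deployer might run before legal review.
HIGH_RISK_USES = {"diagnosis support", "triage", "treatment planning"}
LIMITED_RISK_USES = {"patient-facing chatbot"}

def classify_use_case(use_case: str) -> RiskTier:
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note the default: anything not explicitly triaged falls to the minimal tier here, whereas a cautious compliance process would more likely default unknown uses upward for review.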

3. Health Data Privacy: Safeguarding Sensitive Information

Health data privacy regulations like the Health Insurance Portability and Accountability Act (HIPAA) in the U.S. and the General Data Protection Regulation (GDPR) in Europe ensure that sensitive health information is protected. These laws mandate secure handling of health data, covering both data usage and the necessary protections to maintain patient confidentiality.
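As one concrete safeguard of the kind these laws motivate, here is a minimal pseudonymization sketch: replacing a direct patient identifier with a keyed hash before data leaves a clinical system. The key value and function are placeholders of my own; real HIPAA or GDPR compliance involves far more than this (access controls, audit logging, legal basis for processing, and formal de-identification review).

```python
import hashlib
import hmac

# Placeholder key: in practice this would be generated, rotated, and
# stored in a secrets manager, never hard-coded.
SECRET_KEY = b"rotate-me-and-store-in-a-vault"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
    # Truncated hex digest: stable per patient, not reversible without the key.
    return digest.hexdigest()[:16]
```

A keyed hash (rather than a plain hash) matters here: without the key, an attacker cannot rebuild the mapping by hashing candidate identifiers.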

In the EU, the European Health Data Space (EHDS) aims to create a common framework for sharing health data across Europe. The goals include:

  1. Empowering individuals to control their own health data.
  2. Establishing a single market for AI-based electronic health record systems.
  3. Enabling data usage for research, innovation, and policy-making.

Trustworthy and Scalable Healthcare AI

The journey to building scalable, responsible healthcare AI has just begun. We must aim for standards that balance innovation and regulation. Healthcare AI should be:

  1. Transparent and Explainable: To build trust among patients and healthcare professionals.
  2. Well-Regulated: Ensuring quality, safety, efficacy, privacy, and equity.
  3. Democratized: Allowing for rapid innovation and broad adoption across the healthcare ecosystem.

These principles will lead towards widespread trust and adoption, ultimately improving global health outcomes.