The Clinical AI Readiness Index™ (CARI) Tool

Assess your AI model against 15 critical questions. If you can’t answer them convincingly, your AI isn’t clinically serious.

1. Is the data used to train and validate the AI model clearly documented, ethically sourced, representative of the target patient population, and of sufficient quality and quantity for the intended clinical application?

2. Can the AI model's decision-making process be sufficiently understood by clinicians? Are there mechanisms to explain *why* the AI reached a specific conclusion for an individual patient, or to highlight the key input features driving its output?
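One way to probe the "key input features" part of question 2 without a dedicated explainability library is permutation importance: shuffle one feature at a time and see how far predictions drift. A minimal sketch, using a hypothetical toy risk model (the feature names and weights are illustrative, not from any real clinical model):

```python
import random

# Hypothetical toy risk score: weighted sum of three inputs (illustrative only).
WEIGHTS = {"heart_rate": 0.5, "lactate": 1.5, "age": 0.2}

def predict(patient):
    return sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)

def permutation_importance(patients, baseline_scores, n_rounds=10, seed=0):
    """Estimate each feature's influence by shuffling its values across
    patients and measuring how far predictions drift from the baseline."""
    rng = random.Random(seed)
    importance = {}
    for feature in WEIGHTS:
        drift = 0.0
        for _ in range(n_rounds):
            shuffled = [p[feature] for p in patients]
            rng.shuffle(shuffled)
            for p, value, base in zip(patients, shuffled, baseline_scores):
                perturbed = {**p, feature: value}
                drift += abs(predict(perturbed) - base)
        importance[feature] = drift / (n_rounds * len(patients))
    return importance
```

A feature whose shuffling barely moves the output is doing little work; one that moves it a lot is a candidate to surface to clinicians as a driver of the prediction.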

3. Has the AI model been rigorously tested for performance across diverse datasets, different clinical settings, and various patient subgroups, including edge cases and potential adversarial attacks? How does the model handle uncertainty or ambiguous inputs?

4. Is there robust evidence from well-designed clinical studies (e.g., randomized controlled trials or prospective real-world studies) demonstrating the AI model's safety, accuracy, and actual clinical benefit (e.g., improved patient outcomes, enhanced diagnostic capabilities, or workflow efficiency) in its intended use case?

5. Has the AI model been systematically audited for potential biases related to demographic factors (age, sex, ethnicity), socioeconomic status, geographic origin, or other relevant characteristics? Are there effective strategies in place to mitigate identified biases and ensure equitable performance and outcomes across all patient groups?
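The subgroup audit in question 5 reduces, at its simplest, to computing the same performance metric per group and flagging gaps. A minimal sketch for sensitivity (true-positive rate); the record keys and the 5-point disparity threshold are hypothetical choices for illustration:

```python
from collections import defaultdict

def subgroup_sensitivity(records, group_key):
    """Sensitivity (true-positive rate) per subgroup.

    records: dicts with hypothetical keys 'label' (1 = disease present),
    'prediction' (1 = model flags disease), plus a demographic field.
    """
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for r in records:
        if r["label"] == 1:
            group = r[group_key]
            if r["prediction"] == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

def flag_disparity(sensitivities, max_gap=0.05):
    """Flag if best- and worst-served subgroups differ by more than max_gap."""
    values = list(sensitivities.values())
    return max(values) - min(values) > max_gap
```

The same loop generalizes to specificity, calibration, or any other metric; the point is that "audited for bias" should mean a number per subgroup, not a single aggregate score.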

6. Is the AI tool designed for seamless and intuitive integration into existing clinical workflows? Has its usability been tested with representative clinical users to ensure it can be used effectively and efficiently with minimal disruption to patient care?

7. Can the AI model's predictions, the input data it used, and its version be logged and audited? Is there a clear and maintained record of model development, updates, and ongoing performance monitoring?
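The audit trail question 7 asks about can be satisfied with a simple append-only record per prediction: timestamp, model version, a fingerprint of the inputs, and the output. A minimal sketch (field names are an assumed schema, not a standard):

```python
import hashlib
import json
from datetime import datetime, timezone

def log_prediction(log, model_version, inputs, output):
    """Append an auditable record of a single prediction.

    Hashing the (canonically serialized) inputs gives a tamper-evident
    fingerprint without storing raw patient data in the log itself."""
    payload = json.dumps(inputs, sort_keys=True).encode()
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(payload).hexdigest(),
        "output": output,
    }
    log.append(record)
    return record
```

In production the log would be a write-once store rather than an in-memory list, but the key design choice survives: every output is traceable to a specific model version and a specific set of inputs.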

8. Do the AI model, its development process, and its planned deployment comply with all relevant local and international regulatory requirements (e.g., FDA clearance, CE marking, GDPR, HIPAA) and established ethical guidelines for AI in healthcare?

9. Is there a comprehensive plan for continuous monitoring, evaluation, and governance of the AI model's performance in the real-world clinical setting after deployment? Does this include mechanisms for collecting user feedback, detecting performance degradation, and implementing timely updates or recalibrations?
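The "detecting performance degradation" clause of question 9 can be made concrete with a rolling-window check against the accuracy established at validation time. A minimal sketch; the window size and tolerance are hypothetical parameters a governance plan would have to set deliberately:

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window check for post-deployment performance degradation."""

    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, ground_truth):
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def degraded(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        current = sum(self.outcomes) / len(self.outcomes)
        return current < self.baseline - self.tolerance
```

A real deployment would also monitor input-distribution drift (which shows up before labelled outcomes arrive), but outcome-based checks like this are the backstop.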

10. Has a thorough and documented assessment been conducted to weigh the potential clinical benefits of the AI model against its potential risks, including but not limited to misdiagnosis, over-reliance by clinicians, deskilling, or introduction of new types of errors?

11. Is the specific intended clinical use of the AI model unequivocally defined, including the target patient population and clinical conditions? Are its known limitations, contraindications, and appropriate scope of use clearly communicated to users?

12. Does the AI model provide a reliable indication of its confidence or uncertainty for each prediction or output? Is this uncertainty communicated to clinicians in an understandable and actionable manner?
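Question 12's "understandable and actionable" requirement often comes down to refusing to present ambiguous scores as confident calls. A minimal sketch of a three-band triage rule; the thresholds are illustrative and would need to be set from calibration data for a real model:

```python
def triage_with_uncertainty(probability, low=0.2, high=0.8):
    """Map a raw model probability to an actionable message.

    Predictions in the ambiguous middle band are routed to human review
    rather than presented as a confident call. Thresholds are illustrative.
    """
    if probability >= high:
        return ("likely positive", probability)
    if probability <= low:
        return ("likely negative", 1 - probability)
    return ("uncertain - clinician review required", None)
```

Abstaining on the middle band trades coverage for safety: the model answers fewer cases, but each answer it does give carries a stated confidence a clinician can act on.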

13. Are robust technical and organizational measures in place to ensure the security, integrity, and privacy of patient data used by and generated by the AI model, in full compliance with data protection regulations and best practices?

14. Have all relevant stakeholders (including clinicians, nurses, IT staff, and patients where appropriate) been involved in the selection, development, or validation process of the AI tool? Is there adequate and ongoing training provided for all users to ensure competent and safe use?

15. Are there established protocols and contingency plans for when the AI system fails, produces clearly erroneous output, or becomes unavailable due to technical issues? Can patient care be managed safely and effectively in such scenarios without relying on the AI tool?