Deploying Interpretable AI Models in Healthcare Diagnostics: A Case Study on Using Explainable Machine Learning for Early Disease Detection and Clinical Decision Support
Ugochukwu Ukeje1*
Abstract
Artificial intelligence (AI) and machine learning (ML) are transforming healthcare diagnostics by enabling faster and more accurate disease detection. However, the growing reliance on complex black-box models raises critical concerns about transparency, trust, and accountability, particularly in clinical settings where interpretability is essential. This study evaluates interpretable AI models and explainability techniques in the context of early disease detection and integration into Clinical Decision Support Systems (CDSS). The research combines an in-depth literature review of interpretable and post hoc explainable ML approaches with a comparative case study using logistic regression, Explainable Boosting Machines (EBM), and XGBoost with SHAP explanations, applied to the UCI Heart Disease dataset. Models were assessed using metrics such as accuracy, AUC-ROC, precision, explanation clarity, and fairness across demographic subgroups. The results reveal that while XGBoost offers superior predictive performance, EBM achieves a better balance between accuracy, transparency, and clinical usability. SHAP explanations provided valuable local and global insights but required careful interface design for practical deployment. The study highlights the ongoing trade-off between interpretability and performance and emphasizes the importance of human-centered, trustworthy AI for clinical adoption. These findings offer actionable insights for clinicians, developers, and policymakers working to integrate interpretable AI models into real-world diagnostic workflows.
Keywords: Interpretable Machine Learning, Explainable AI, Clinical Decision Support Systems, Early Disease Detection, SHAP, LIME, Healthcare AI
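For readers who wish to reproduce the comparative setup summarized above, the sketch below outlines one possible implementation of the three-model comparison. It assumes a local CSV copy of the UCI Heart Disease dataset with a binary "target" column; the file name, column name, and hyperparameters are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch of the comparative study described in the abstract.
# Assumes the UCI Heart Disease data is available locally as "heart.csv"
# with numeric features and a binary "target" column (both hypothetical).
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score, precision_score
from interpret.glassbox import ExplainableBoostingClassifier
from xgboost import XGBClassifier

df = pd.read_csv("heart.csv")  # hypothetical local copy of the dataset
X, y = df.drop(columns="target"), df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# The three model families compared in the case study
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "ebm": ExplainableBoostingClassifier(random_state=42),
    "xgboost": XGBClassifier(eval_metric="logloss", random_state=42),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    print(
        f"{name}: acc={accuracy_score(y_te, pred):.3f} "
        f"auc={roc_auc_score(y_te, proba):.3f} "
        f"prec={precision_score(y_te, pred):.3f}"
    )

# Post hoc SHAP explanations for the black-box XGBoost model,
# giving per-patient (local) feature attributions
explainer = shap.TreeExplainer(models["xgboost"])
shap_values = explainer.shap_values(X_te)
```

Note that EBM exposes its shape functions directly (a glass-box model), whereas XGBoost requires the post hoc SHAP step shown at the end; this contrast is the interpretability-versus-performance trade-off the study examines.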