Introduction
Despite a decline in the rate of emergency coronary artery bypass graft (ECABG) following percutaneous coronary intervention (PCI) in the current-stent era (0.3%–0.6%), the mortality and morbidity associated with ECABG have remained high.1–4 Historically, PCI has been performed at sites with access to onsite cardiac surgery. In recent years, however, the number of PCI procedures performed at sites without cardiothoracic surgical backup has grown globally, both for patients presenting with acute myocardial infarction (MI) and for those undergoing elective procedures.5 The American College of Cardiology (ACC)/American Heart Association (AHA)/Society for Cardiovascular Angiography and Interventions (SCAI) guidelines designate primary PCI a class IIa indication (is reasonable), and elective PCI a class IIb indication (may be considered), when performed at facilities without onsite surgical backup, with the caveat that appropriate planning for programme development has been accomplished and rigorous clinical and angiographic criteria are used for proper patient selection.6 There has been a steady increase in PCI without surgical backup, and by the end of 2005 it was estimated that over 16% of the facilities participating in the National Cardiovascular Data Registry were performing PCI without onsite backup.7
Large meta-analyses and randomised controlled trials have found no differences in adverse outcomes between carefully selected patients undergoing PCI at facilities with onsite cardiac surgery and those undergoing PCI at facilities without surgical backup.8–10 Emergent surgery is an extremely uncommon outcome, however, and these studies have been underpowered with respect to this end point, making it difficult to definitively establish the superiority of one strategy over the other. What is clear is that the low prevalence of ECABG means that surgical backup may not be needed for the vast majority of patients, and that the number of hospitals providing PCI without onsite surgery will continue to increase.
In this setting, risk scores or decision-support algorithms that can accurately differentiate between patients at high and low risk of ECABG following PCI hold significant value as screening tools for selectively referring cases to facilities with or without surgical backup. Such tools fit particularly well within a hub-and-spoke model of PCI centres by guiding decisions related to patient transfer.11 The availability of tools that can accurately quantify patient risk also provides for more accurate assessments of quality and outcomes across institutions (both with and without onsite surgical backup) by creating the opportunity to perform risk-adjusted observed-to-expected analyses. However, predicting the risk of ECABG following PCI has traditionally been an extremely difficult problem; the low prevalence and multifactorial nature of events has meant that no satisfactory predictors exist for this outcome.12
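To make the notion of a risk-adjusted observed-to-expected analysis concrete, a minimal sketch is given below (illustrative Python, not part of the study; the function name and toy figures are our own). The expected event count is taken as the sum of model-predicted per-patient risks and is compared with the number of events actually observed at an institution.

```python
import numpy as np

def observed_to_expected(outcomes, predicted_risks):
    """Risk-adjusted observed-to-expected (O/E) ratio for one institution.

    outcomes        : array of 0/1 event indicators (e.g. ECABG yes/no)
    predicted_risks : array of model-predicted event probabilities
    """
    observed = np.sum(outcomes)          # events that actually occurred
    expected = np.sum(predicted_risks)   # events the risk model predicted
    return observed / expected           # <1 better than expected, >1 worse

# Hypothetical toy cohort: 3 events observed, model expected ~2.4
outcomes = np.array([0, 1, 0, 0, 1, 0, 1, 0])
risks    = np.array([0.1, 0.6, 0.2, 0.1, 0.5, 0.2, 0.4, 0.3])
print(observed_to_expected(outcomes, risks))  # ~1.25
```

An O/E ratio near 1 indicates that an institution's event rate is in line with what its case mix would predict, while values well above 1 flag worse-than-expected outcomes.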
The purpose of this study is to explore a computational approach that addresses this goal by drawing on recent advances in statistical machine learning to integrate information from a diverse set of clinical variables that are individually weak predictors of ECABG following PCI. The specific hypothesis underlying this study is that the process of stratifying patients for ECABG can be significantly improved through algorithms that directly optimise cohort-level metrics relevant for clinical use. Across a fairly broad range of medical applications, the metrics that determine practical utility (measures such as precision, recall and the area under the receiver operating characteristic curve (AUROC)) are defined in terms of how risk scores or predictions are distributed across a cohort. There is no notion of an AUROC, for example, at the level of a single patient; such metrics are defined instead in terms of how risk scores or predictions vary across a set of patients. In contrast, the overwhelming majority of existing approaches to stratifying patients (including the popular methodology of logistic regression) formulate their underlying optimisation problems over individual patients separately, without direct regard for how the resulting risk scores or predictions collectively translate into cohort-level performance on these metrics. This study diverges from this trend and investigates how the prediction of ECABG following PCI can be improved through a computational model directly optimised for cohort-level performance using novel machine learning methodology, providing an objective basis for selectively referring patients to hospitals with and without onsite surgical backup.
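To illustrate the distinction drawn above, the sketch below (a generic example in Python/NumPy, not the specific methodology developed in this study; the function names are ours) contrasts a conventional per-patient logistic loss with a pairwise surrogate for the cohort-level AUROC.

```python
import numpy as np

def per_patient_logistic_loss(scores, labels):
    """Conventional objective (e.g. logistic regression): an average of
    independent per-patient losses, with no direct reference to any
    cohort-level metric. labels are 0/1, scores are real-valued."""
    signs = 2 * labels - 1                    # map {0, 1} -> {-1, +1}
    return np.mean(np.log1p(np.exp(-signs * scores)))

def cohort_auroc_surrogate(scores, labels):
    """Smooth pairwise surrogate for the AUROC: penalises every
    (event, non-event) pair in which the event patient is not scored
    higher. The quantity is only defined over the cohort as a whole."""
    pos = scores[labels == 1]                 # scores of patients with the event
    neg = scores[labels == 0]                 # scores of patients without it
    diffs = pos[:, None] - neg[None, :]       # all event/non-event pairs
    return np.mean(np.log1p(np.exp(-diffs)))  # logistic pairwise (ranking) loss
```

Minimising the pairwise surrogate directly encourages patients who experience the event to be ranked above those who do not, which is precisely what the AUROC measures, whereas minimising the per-patient loss does so only indirectly.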