Understanding machine learning classifier decisions in automated radiotherapy quality assurance

Abstract

The complexity of radiotherapy planning demands a rigorous quality assurance (QA) process to ensure patient safety and to avoid irreversible errors. Machine learning classifiers have been explored to augment the scope and efficiency of the traditional QA process. However, one important gap in relying on treatment planning QA classifiers is the lack of understanding of why a classifier makes a specific prediction. We examine the topic of interpretable machine learning and apply appropriate post-hoc, model-agnostic local explanation methods to understand the decisions of a region-of-interest (ROI) segmentation/labeling classifier as well as a plan acceptance classifier. For each classifier, a local interpretable model-agnostic explanations (LIME) framework and a team-based Shapley values framework are constructed. The LIME framework approximates the predictions of the underlying classifier in the neighborhood of a specified instance with an interpretable linear model, so explanations are obtained by inspecting the feature weights. The team-based Shapley values framework explains the classifier decisions by calculating the average marginal contribution of each feature group across all coalitions of feature groups for a specified instance. The results demonstrate the importance of evaluating QA classifiers using interpretable machine learning approaches, and show that both QA classifiers are moderately trustworthy and can be used to confirm expert decisions. The results also suggest that the current version of the QA classifiers should not be viewed as a replacement for the human QA process.
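As a brief illustration of the two explanation mechanisms summarized above, the following standard formulations from the interpretability literature apply; the symbols below are the conventional ones and are not notation defined in this abstract. LIME selects an interpretable surrogate g for the classifier f around an instance x by solving

\[ \xi(x) = \operatorname*{arg\,min}_{g \in G} \; \mathcal{L}(f, g, \pi_x) + \Omega(g), \]

where G is a class of interpretable (e.g., linear) models, \pi_x is a proximity kernel weighting perturbed samples by their closeness to x, \mathcal{L} measures how unfaithful g is to f in that neighborhood, and \Omega penalizes the complexity of g; the fitted weights of g serve as the explanation. For the team-based Shapley values, with F the set of feature groups and v(S) the classifier output when only the groups in the coalition S take their observed values, the contribution attributed to group j is the average marginal contribution

\[ \phi_j = \sum_{S \subseteq F \setminus \{j\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \left[ v(S \cup \{j\}) - v(S) \right]. \]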
