Abstract: |
Convolutional neural networks (CNNs) are a class of machine learning models especially well suited to image-based tasks. In this study, we design and train a CNN on tissue samples imaged using multi-photon microscopy (MPM) and show that the model can distinguish between chromophobe renal cell carcinoma (chRCC) and oncocytoma. We demonstrate a method for training a model using simple max-pooling vote fusion, and use the model to highlight the regions of the input that drive a positive classification. With a constant threshold, the model can be tuned for higher sensitivity at the cost of specificity, with little impact on overall accuracy. Several numerical experiments were run to measure the model's accuracy at both the image and patient levels. Our models were designed with a dropout parameter that biases the model toward higher sensitivity or specificity. Our best-performing model, as measured by the area under the receiver operating characteristic curve (AUROC) on patient-level classification, achieves a 94% AUROC and 88% accuracy, with 100% sensitivity and 75% specificity. © 2019 SPIE. |