Abstract: |
Endoscopic images are used at every stage of rectal cancer treatment: at screening and diagnosis, during treatment to assess response and treatment-related toxicities such as colitis, and at follow-up to detect new tumors or local regrowth (LR). However, subjective assessment is highly variable: it can underestimate the degree of response in some patients, subjecting them to unnecessary surgery, or overestimate response, placing patients at risk of disease spread. Advances in deep learning have shown the ability to produce consistent and objective response assessments from endoscopic images. However, methods for detecting cancers and regrowth and for monitoring response throughout the entire course of treatment and follow-up are lacking, because automated diagnosis and rectal cancer response assessment require methods that are robust to the inherent illumination variations and confounding conditions (blood, scope, blurring) present in endoscopy images, as well as to changes in the normal lumen and tumor during treatment. Hence, a hierarchical shifted window (Swin) transformer was trained to distinguish rectal cancer from normal lumen in endoscopy images. The Swin transformer, two convolutional architectures (ResNet-50, WideResNet-50), and a vision transformer were trained and evaluated on longitudinal follow-up images to detect LR on in-distribution (ID) private datasets, as well as on out-of-distribution (OOD) public colonoscopy datasets to detect precancerous and noncancerous polyps. Color shifts were applied using optimal transport to simulate distribution shifts. The Swin and ResNet models were similarly accurate on the ID dataset. Swin was more accurate than the other methods (follow-up: 0.84, OOD: 0.83), even when subjected to color shifts (follow-up: 0.83, OOD: 0.87), indicating its capability to provide robust performance for longitudinal cancer assessment. © 2025 SPIE.