Automatic Brain Tumor Segmentation

Project by Peter Jagd Sørensen

Introduction

This project explores two key advancements in brain tumor segmentation. First, it repurposes the BraTS dataset, originally designed for preoperative analysis, by introducing a two-label annotation protocol tailored for postoperative scans. This adaptation enables deep learning (DL) algorithms to segment tumors more accurately on postoperative MRI scans by excluding resection cavities from tumor regions. Second, the project evaluates the performance of the state-of-the-art HD-GLIO algorithm, focusing on its ability to segment contrast-enhancing (CE) and non-enhancing (NE) lesions in an independent set of postoperative MRI scans.
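
As a rough illustration of the two-label adaptation, the sketch below shows how a BraTS-style three-label map might be collapsed into CE and NE labels while excluding a resection cavity. The integer label values, the cavity label, and the file names are assumptions made for this example only; they are not taken from the project's actual annotation pipeline.

    import numpy as np
    import nibabel as nib  # commonly used to read NIfTI-format MRI label maps

    # Assumed label values for this sketch only (not the project's conventions):
    BRATS_NCR_NET = 1   # necrosis / non-enhancing tumor core
    BRATS_ED = 2        # edema and infiltrated tissue
    BRATS_AT = 4        # active contrast-enhancing tumor
    CAVITY = 5          # hypothetical extra label marking the resection cavity

    POST_CE = 1         # contrast-enhancing tumor
    POST_NE = 2         # non-enhancing hyperintense T2/FLAIR signal abnormality

    def to_two_label(seg: np.ndarray) -> np.ndarray:
        """Collapse a three-label (plus cavity) map into the two-label scheme."""
        out = np.zeros_like(seg, dtype=np.uint8)
        out[seg == BRATS_AT] = POST_CE
        out[np.isin(seg, [BRATS_NCR_NET, BRATS_ED])] = POST_NE
        # Cavity voxels stay background: the resection cavity is not tumor
        # under the two-label protocol.
        out[seg == CAVITY] = 0
        return out

    if __name__ == "__main__":
        img = nib.load("segmentation.nii.gz")  # placeholder path
        two_label = to_two_label(np.asarray(img.get_fdata(), dtype=np.int16))
        nib.save(nib.Nifti1Image(two_label, img.affine), "segmentation_two_label.nii.gz")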

Project Background

Automating brain tumor segmentation using DL algorithms has gained significant interest in recent years. The BraTS Challenge has been instrumental in this progress but is limited to preoperative MRI data, leaving a gap for postoperative scans. Most brain tumor patients undergo surgery, creating a need for annotated postoperative datasets, which are currently limited.

In parallel, the project assesses the performance of the HD-GLIO algorithm on an independent dataset of postoperative MRI scans. The study evaluates how well the algorithm segments contrast-enhancing (CE) and non-enhancing (NE) lesions, providing insight into its applicability in routine clinical workflows.
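
The comparison against the radiologist's ground truth is reported with Dice similarity coefficients (see Figure 2). Below is a minimal sketch of how such a per-label evaluation could be computed with numpy and nibabel; the file names and the label encoding (1 = CE, 2 = NE) are illustrative assumptions rather than HD-GLIO's documented output format.

    import numpy as np
    import nibabel as nib

    def dice(pred: np.ndarray, truth: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks."""
        intersection = np.logical_and(pred, truth).sum()
        denom = pred.sum() + truth.sum()
        return 2.0 * intersection / denom if denom > 0 else float("nan")

    def volume_cm3(mask: np.ndarray, voxel_dims_mm) -> float:
        """Lesion volume in cm^3 from the voxel count and voxel spacing in mm."""
        return float(mask.sum()) * float(np.prod(voxel_dims_mm)) / 1000.0

    # Placeholder file names; label values 1 = CE and 2 = NE are assumptions.
    pred_img = nib.load("hd_glio_segmentation.nii.gz")
    truth_img = nib.load("radiologist_segmentation.nii.gz")
    pred = np.asarray(pred_img.get_fdata(), dtype=np.uint8)
    truth = np.asarray(truth_img.get_fdata(), dtype=np.uint8)

    for label, name in [(1, "CE"), (2, "NE")]:
        p, t = pred == label, truth == label
        vol = volume_cm3(t, truth_img.header.get_zooms()[:3])
        print(f"{name}: Dice = {dice(p, t):.3f}, reference volume = {vol:.2f} cm^3")

Computing the reference volume alongside the Dice score also makes it straightforward to report results separately for lesions above a size threshold, such as the >1 cm³ CE subgroup shown in Figure 2.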

By adapting the BraTS dataset with a two-label protocol instead of a three-label one, and by evaluating HD-GLIO, the project bridges the gap between pre- and postoperative tumor segmentation, enabling DL algorithms to address real-world clinical needs.

Project Potential

The project has the potential to advance postoperative brain tumor segmentation by addressing the lack of annotated datasets with a novel two-label annotation protocol. This innovation enhances deep learning accuracy, improves disease monitoring, and supports personalized treatment.

Although HD-GLIO shows strong potential for clinical use, particularly in segmenting larger tumors and NE lesions, it sometimes incorrectly identifies regions (see Figure 3). Addressing these challenges is key to refining models and ensuring clinical integration. This approach ultimately aims to streamline radiological workflows and improve patient outcomes.

Contact Information

Name: Peter Jagd Sørensen
Location: Department of Radiology and Scanning, Rigshospitalet
Position: PhD Student

Figures

Figure 1: Comparison of three brain tumor segmentations: the three-label model (original BraTS protocol), the two-label model (postoperative adaptation), and the radiologist's ground truth. The figure highlights the effectiveness of the two-label protocol in excluding resection cavities, improving segmentation accuracy.
NCR+NET = necrosis, cysts, and non-enhancing tumor core
AT = active contrast-enhancing tumor
ED = edema and infiltrated tissue
CE = contrast-enhancing tumor
NE = non-enhancing hyperintense T2/FLAIR signal abnormalities
Figure 2: Segmentation accuracy measured by Dice similarity coefficients for three categories: contrast-enhancing tumors (CE), larger CE tumors (>1 cm³), and non-enhancing abnormalities (NE). The two-label model shows higher accuracy for larger CE tumors, while both models vary in performance on smaller CE and NE regions.
Figure 3: An example of an HD-GLIO segmentation error in a CE tumor region. The algorithm failed to capture part of the cavity wall that the radiologist identified as CE tumor. Yellow marks the radiologist's delineation, cyan HD-GLIO's segmentation, and green their overlap.