Semantic segmentation of 3D medical images with deep learning
Deep learning has recently shown impressive results in visual recognition, most notably through the performance of convolutional neural networks (ConvNets) on the ImageNet challenge in 2012. Deep learning methods have since redefined the state of the art across visual recognition challenges. Naturally, medical image analysis has begun adopting these methods for all kinds of tasks, such as semantic segmentation.
The main goal of this thesis is to investigate deep-learning-based algorithms for semantic segmentation of 3D medical images, i.e. assigning a label (e.g. liver, stomach, pancreas, tumour, background) to every voxel in a volume. Key aspects of the project are designing general methods that allow automatic or semi-automatic deployment, as well as achieving real-time processing of a volume.
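The per-voxel labelling described above can be sketched as follows. This is a minimal illustration, not the thesis method: the class scores are random placeholders standing in for a segmentation network's output, and the volume shape and class count are illustrative assumptions.

```python
import numpy as np

# Illustrative assumptions (not from the thesis): 5 classes, a tiny 4x4x4 volume.
num_classes = 5          # e.g. liver, stomach, pancreas, tumour, background
depth, height, width = 4, 4, 4

rng = np.random.default_rng(0)
# Per-voxel class scores with shape (num_classes, depth, height, width),
# the typical output layout of a 3D segmentation network.
scores = rng.standard_normal((num_classes, depth, height, width))

# Semantic segmentation assigns each voxel the label with the highest score.
labels = scores.argmax(axis=0)

assert labels.shape == (depth, height, width)   # one label per voxel
assert labels.min() >= 0 and labels.max() < num_classes
print(labels.shape)
```

The argmax over the class axis is what turns a network's dense score volume into the final label map; in practice the scores would come from a trained 3D ConvNet rather than a random generator.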