Charles Corbière
CIFRE PhD thesis supervised by Prof. Nicolas Thome, in collaboration with the valeo.ai research lab under the supervision of Patrick Pérez.

Over the past several years, deep learning methods have seen wide adoption, especially since the stunning victory of a convolutional network at the 2012 Large Scale Visual Recognition Challenge (LSVRC). Nowadays, most research work and technology for image and video analysis is based on ever deeper convolutional networks. Despite their growing success, particularly in complex data analysis for autonomous driving, many questions remain open before Valeo and other actors in the field can deploy shared-driving and driving-automation control systems. Many of these questions concern the accuracy, reliability, predictability, and interpretability of the complete chain, from sensors to controllers, as well as of its individual components. Progress on these issues is crucial both to achieve certification from transportation authorities and to spark enthusiasm among users.

This thesis explores different paths to evaluate and improve the robustness of multimodal data analysis in autonomous driving systems. We address this issue around three methodological challenges:

- decision uncertainty for deep learning;
- stability of deep convolutional networks;
- deep learning on heterogeneous and multimodal data.

At the application level, the implementation of reliable uncertainty or stability measures is of crucial importance for decision-making in autonomous driving, enabling, for instance, control to be handed back to the human driver when the system is unsure (a simple deferral rule is sketched below), or robustness to adversarial attacks to be assessed.
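As a minimal illustration of the kind of uncertainty measure involved, the maximum class probability (MCP) of a classifier's softmax output is a standard confidence baseline, and thresholding it gives a simple rule for deferring to the human driver. This is a sketch only, not the thesis method; the function name and the threshold value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def predict_or_defer(logits: torch.Tensor, threshold: float = 0.9):
    """MCP baseline: defer to the human when confidence is below threshold.

    `predict_or_defer` and `threshold` are hypothetical, for illustration.
    """
    probs = F.softmax(logits, dim=-1)           # class probabilities
    confidence, prediction = probs.max(dim=-1)  # MCP as confidence score
    defer_to_human = confidence < threshold     # hand control back when unsure
    return prediction, confidence, defer_to_human

# Usage: a batch of 2 samples over 3 classes
logits = torch.tensor([[4.0, 0.1, 0.2], [1.0, 0.9, 1.1]])
pred, conf, defer = predict_or_defer(logits)
```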
2022
Journal articles
- Confidence Estimation via Auxiliary Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10): 6043-6055, 2022.
2021
Conference papers
- Beyond First-Order Uncertainty Estimation with Evidential Models for Open-World Recognition. In ICML 2021 Workshop on Uncertainty and Robustness in Deep Learning, Virtual, 2021.
2019
Conference papers
- Addressing Failure Prediction by Learning Model Confidence. In Advances in Neural Information Processing Systems 32, pages 2898-2909, Curran Associates, Inc., Vancouver, Canada, 2019.
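The two confidence papers above (TPAMI 2022 and NeurIPS 2019) study learning an auxiliary model to predict a network's confidence rather than reading it off the softmax. A minimal sketch of that idea follows, assuming a small auxiliary head trained to regress the probability the classifier assigned to the true class; the class name, layer sizes, and loss function below are illustrative simplifications, not the papers' exact architecture.

```python
import torch
import torch.nn as nn

class AuxiliaryConfidenceHead(nn.Module):
    """Hypothetical auxiliary head on top of the classifier's features,
    outputting a scalar confidence in [0, 1]. Sizes are illustrative."""

    def __init__(self, feature_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features).squeeze(-1)

def confidence_target_loss(pred_conf, class_probs, targets):
    # Regression target: the probability the (frozen) classifier assigned
    # to the *true* class, which is low precisely on failure cases.
    tcp = class_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return torch.mean((pred_conf - tcp) ** 2)
```

The rationale for such a target: the softmax maximum is typically overconfident on misclassified samples, whereas the probability of the true class is low exactly on those failures, making it a more informative signal for failure prediction.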