The next seminar will be somewhat unusual: it will be a screening of the invited talk given in September by Adi Shamir at the Esorics 2019 conference.
The screening will take place on Thursday, February 6, at 1:30 pm, in the Loria amphitheatre.
Titre : The Insecurity of Machine Learning: Problems and Solutions
Abstract: The development of deep neural networks over the last decade has revolutionized machine learning and led to major improvements in the precision with which we can perform many computational tasks. However, the discovery five years ago of adversarial examples, in which tiny changes in the input can fool well-trained neural networks, makes it difficult to trust such results when the input can be manipulated by an adversary.
This problem has many applications and implications in object recognition, autonomous driving, cyber security, etc., but it is still far from being understood. In particular, there has been no convincing explanation of why such adversarial examples exist, or of which parameters determine the number of input coordinates one has to change in order to mislead the network.
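To make the phenomenon concrete, here is a minimal FGSM-style sketch against a toy linear "classifier" (a stand-in for a trained network, not the speaker's construction): a perturbation that is tiny in every coordinate is still enough to flip the predicted label.

```python
import numpy as np

# Hypothetical "trained model": a linear classifier sign(w . x).
rng = np.random.default_rng(0)
w = rng.normal(size=100)
x = rng.normal(size=100)
orig = np.sign(w @ x)

# FGSM-style step: move every coordinate a tiny amount against the
# decision direction, just far enough to cross the boundary.
eps = 2.0 * abs(w @ x) / np.sum(np.abs(w))
x_adv = x - np.sign(w) * orig * eps * 1.01

# Each coordinate barely moves, yet the predicted label flips.
print(np.max(np.abs(x_adv - x)), np.sign(w @ x_adv), orig)
```

Because the inner product accumulates the small per-coordinate changes across all 100 dimensions, the total shift is large even though each individual change is imperceptible.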
In this talk I will describe a simple mathematical framework which enables us to think about this problem from a fresh perspective, turning the existence of adversarial examples in deep neural networks from a baffling phenomenon into an unavoidable consequence of the geometry of R^n under the Hamming distance, which can be quantitatively analyzed.
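As a toy illustration of the geometric point (not the speaker's framework itself): under the Hamming distance, where we count how many coordinates change rather than by how much, a single coordinate change can already carry an input across any linear decision boundary, regardless of the dimension n.

```python
import numpy as np

# A linear decision boundary w . x = 0 in R^n. Hamming distance counts
# how many coordinates of x we modify, by any amount each.
rng = np.random.default_rng(1)
n = 1000
w = rng.normal(size=n)
x = rng.normal(size=n)

# Modify one coordinate j so that w . x' = -(w . x): Hamming distance 1
# suffices to flip the sign of any linear classifier, whatever n is.
j = int(np.argmax(np.abs(w)))
x_adv = x.copy()
x_adv[j] = x[j] - 2.0 * (w @ x) / w[j]

print(int(np.sum(x_adv != x)), np.sign(w @ x_adv), np.sign(w @ x))
```

The catch, of course, is that this single change can be arbitrarily large; the interesting quantitative question, which the talk addresses, is how many coordinates are needed when each change is constrained.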