Vulnerability of deep learning techniques to attacks


In medical imaging, patient privacy is sacrosanct: patient images (X-ray, CT, MRI, etc.) carry a substantial amount of sensitive information, including the image content itself as well as associated data such as name, gender, and other medical conditions. Deep learning techniques that learn from patient data were long assumed to be immune to attacks that might compromise this information. The objective of this project is to examine how vulnerable deep learning techniques are to attacks that attempt to compromise patient information. We simulate an attacker who obtains the latent space representation of a model and then tries to reconstruct the original data from it, and we quantify the extent of the resulting loss of patient information. We also propose strategies to counteract this potential loss of privacy.
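The reconstruction attack described above can be illustrated with a minimal sketch. All names and dimensions here are hypothetical: a single linear layer stands in for a deep encoder, and the attacker is assumed to know the encoder weights (a white-box setting) and to have intercepted a latent code. For a deep, nonlinear encoder the same idea is applied by gradient descent on the latent-matching loss rather than by least squares.

```python
import numpy as np

# Hypothetical white-box scenario: the attacker knows the encoder weights W
# (a single linear layer standing in for a deep encoder) and intercepts the
# latent code z = W @ x for a patient image x.

rng = np.random.default_rng(0)
d_in, d_latent = 64, 32                  # toy dimensions; real images are far larger
W = rng.normal(size=(d_latent, d_in))    # "encoder" weights
x_true = rng.normal(size=d_in)           # stand-in for a patient image
z = W @ x_true                           # intercepted latent representation

# Reconstruction attack: find x_hat whose latent code matches z.
# With a linear encoder this is a least-squares problem; with a deep
# encoder the same objective ||enc(x_hat) - z||^2 is minimised by
# gradient descent on x_hat.
x_hat, *_ = np.linalg.lstsq(W, z, rcond=None)

# One crude measure of how much patient information leaked:
# correlation between the original and the reconstruction.
leak = np.corrcoef(x_true, x_hat)[0, 1]
```

Even though the latent space is half the input dimension here, the reconstruction is strongly correlated with the original, which is the kind of information leakage the project measures.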

[Figure: vulnerability]

The figure above shows the reconstruction of the original image from the latent-space information for different techniques.
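One simple counter-strategy of the kind mentioned above could be sketched as perturbing the latent representation before it leaves the trusted environment. This is an illustrative, differential-privacy-style assumption, not necessarily the project's actual defence; the setup mirrors the linear toy attack (known encoder weights, intercepted latent code).

```python
import numpy as np

# Hypothetical defence: add calibrated Gaussian noise to the latent code
# so that a reconstruction attack recovers a degraded image.

rng = np.random.default_rng(1)
d_in, d_latent = 64, 32
W = rng.normal(size=(d_latent, d_in))    # "encoder" weights
x_true = rng.normal(size=d_in)           # stand-in for a patient image
z = W @ x_true                           # latent representation

sigma = 2.0                              # noise scale trades utility for privacy
z_noisy = z + rng.normal(scale=sigma, size=d_latent)

# The same least-squares attack applied to the clean and the noisy code.
x_clean, *_ = np.linalg.lstsq(W, z, rcond=None)
x_noisy, *_ = np.linalg.lstsq(W, z_noisy, rcond=None)

err_clean = np.linalg.norm(x_clean - x_true)
err_noisy = np.linalg.norm(x_noisy - x_true)
```

Raising `sigma` increases the attacker's reconstruction error but also degrades the latent code for legitimate downstream use, which is the privacy-utility trade-off such strategies must balance.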


Publications

Nagesh Subbanna, Anup Tuladhar, Matthias Wilms and Nils Forkert, "Understanding privacy risks in typical deep learning models for medical image analysis," accepted for publication at Imaging Informatics for Healthcare, Research, and Applications, SPIE 2020.

Team members

  • Nagesh Subbanna

  • Anup Tuladhar

  • Matthias Wilms