The development of intelligent systems raises challenges for personal privacy. In his thesis “Towards Privacy Preserving Intelligent Systems”, Md Sakib Nizam Khan at KTH presents strategies for preserving privacy during data processing in smart applications.
Personal privacy is a constitutionally protected human right and a prerequisite for a free and democratic society. Intelligent systems increasingly challenge respect for personal privacy, and it is of the utmost importance that these issues become an integral part of the research and development of such systems.
Md Sakib Nizam Khan, PhD student at KTH Royal Institute of Technology, has in his thesis identified potential privacy problems within intelligent systems and proposed strategies to deal with them.
Smart devices
One study examines data storage on individual smart devices, where privacy-sensitive information risks following the device to its new owner when ownership changes. With the help of data encryption and technologies that detect changes in the device’s environment, privacy protection can be significantly improved.
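By way of illustration only, the sketch below shows one way such a mechanism could work: data is encrypted at rest, and the decryption key is released only while the device’s observed surroundings (here, nearby Wi-Fi network names) still resemble those recorded at setup. The PrivacyGuard class, the Jaccard similarity measure, and the threshold are hypothetical choices for this sketch, not the thesis’s actual design.

```python
# Minimal sketch (illustrative, not the thesis's design): encrypt data at
# rest and refuse decryption when the observed environment has changed too
# much, e.g. because the device has a new owner in a new home.
from cryptography.fernet import Fernet


def environment_similarity(baseline: set, current: set) -> float:
    """Jaccard similarity between two sets of observed Wi-Fi SSIDs."""
    if not baseline and not current:
        return 1.0
    return len(baseline & current) / len(baseline | current)


class PrivacyGuard:
    THRESHOLD = 0.5  # illustrative cut-off for "same environment"

    def __init__(self, baseline_ssids: set):
        self.key = Fernet.generate_key()
        self.baseline = baseline_ssids

    def store(self, plaintext: bytes) -> bytes:
        return Fernet(self.key).encrypt(plaintext)

    def load(self, token: bytes, current_ssids: set) -> bytes:
        # Gate the key on the environment fingerprint.
        if environment_similarity(self.baseline, current_ssids) < self.THRESHOLD:
            raise PermissionError("environment changed; data stays encrypted")
        return Fernet(self.key).decrypt(token)


guard = PrivacyGuard({"HomeNet", "HomeNet_5G"})
token = guard.store(b"heart rate log: ...")
print(guard.load(token, {"HomeNet", "HomeNet_5G"}))  # original environment: OK
# guard.load(token, {"CafeWifi"})  # new environment: raises PermissionError
```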
Additional privacy issues arise in complex systems-of-systems with multiple interconnected smart devices. Within the framework of the thesis, home-based health monitoring systems are studied, and several critical points in need of privacy protection are identified.
Using synthetic data to train models
Today’s applications of machine learning to analyze personal data in sectors such as e-commerce, healthcare, and financial services are controversial from a privacy perspective. There is therefore a growing need for methods that enable privacy-preserving data processing and sharing. Md Sakib Nizam Khan and his colleagues investigate the possibility of training models on synthetic data to prevent the dissemination of privacy-sensitive information.
The results of the study show that using synthetic data for this purpose can effectively reduce the risk of a so-called membership inference attack (MIA), in which a trained machine learning model can otherwise reveal which training data, and thus which individuals’ data, was used.
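As a rough illustration of what a membership inference attack exploits, the toy sketch below deliberately overfits a classifier on “member” data and compares per-sample losses for members and non-members; the gap between the two is what an attacker can threshold on. The data, model, and loss rule here are illustrative assumptions, not the study’s setup; a model trained on synthetic data instead has no such direct tie to real individuals.

```python
# Toy loss-threshold membership inference: an overfitted model shows
# lower loss on its training members than on held-out non-members.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)


def make_data(n):
    X = rng.normal(size=(n, 10))
    y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)
    return X, y


X_members, y_members = make_data(200)          # "real" training individuals
X_nonmembers, y_nonmembers = make_data(200)    # individuals never seen in training

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_members, y_members)


def per_sample_loss(model, X, y):
    # Negative log-probability of each sample's true class.
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-9, None))


loss_in = per_sample_loss(model, X_members, y_members)
loss_out = per_sample_loss(model, X_nonmembers, y_nonmembers)
# A large gap means membership leaks: low loss suggests "was in training".
print(f"mean loss members={loss_in.mean():.3f}  non-members={loss_out.mean():.3f}")
```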
Image data privacy
Commercial use of various types of image data is another way in which privacy-sensitive information risks being disseminated. The thesis presents a new method for de-identification of image data, called AnonFACES, based on algorithms that quantify, improve, and fine-tune the trade-off between privacy and information loss during anonymization.
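In the spirit of that trade-off, though not AnonFACES’ actual algorithm, the sketch below shows how privacy gain and information loss could be quantified side by side: a stand-in embed function plays the role of a face-recognition network, and a stand-in anonymize function blends the image toward noise at varying strengths. All functions and parameters here are hypothetical.

```python
# Hypothetical quantification of the privacy/utility trade-off in face
# anonymization: privacy as embedding distance (harder re-identification),
# information loss as pixel-level distortion.
import numpy as np

rng = np.random.default_rng(1)


def embed(image: np.ndarray) -> np.ndarray:
    # Stand-in for a real face-recognition embedding network.
    return image.mean(axis=0)


def anonymize(image: np.ndarray, strength: float) -> np.ndarray:
    # Stand-in anonymizer: blend the face toward random noise.
    return (1 - strength) * image + strength * rng.normal(size=image.shape)


def tradeoff(image: np.ndarray, strength: float):
    anon = anonymize(image, strength)
    privacy = np.linalg.norm(embed(anon) - embed(image))    # higher = harder to re-identify
    info_loss = np.linalg.norm(anon - image) / image.size   # higher = less useful image
    return privacy, info_loss


face = rng.normal(size=(64, 64))
for s in (0.2, 0.5, 0.8):
    p, l = tradeoff(face, s)
    print(f"strength={s:.1f}  privacy={p:.3f}  information loss={l:.5f}")
```

Sweeping the anonymization strength like this makes the trade-off explicit: stronger anonymization raises the privacy score but also the information loss, and tuning means picking an acceptable point on that curve.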
In summary, the thesis highlights the importance of paying attention to privacy issues in all phases of data processing within intelligent systems. The author identifies a number of critical points and suggests new strategies to protect and preserve privacy-sensitive information in the development and use of these systems.
Published: July 5th, 2023