Machine Unlearning
In the current digital age, machine learning has become a core driving force across numerous industries, including financial services and healthcare. However, the sensitive data these technologies process touches on personal privacy and confidential information, and must be properly protected. To this end, researchers have developed several privacy-preserving mechanisms, including differential privacy and homomorphic encryption. Despite these efforts, such technologies are not entirely effective at preventing data misuse and leakage in practice. Moreover, with the rapid development of Artificial Intelligence Generated Content (AIGC), preventing generative algorithms from producing harmful content has become an urgent issue for academia, governments, and industry. In this context, Machine Unlearning has emerged as a research hotspot. Machine Unlearning refers to the use of algorithms and techniques to remove learned data or specified information from machine learning models, so as to avoid generating harmful content and to comply with data confidentiality and privacy regulations. Accordingly, our research aims to design and build fast, effective, and theoretically sound Machine Unlearning algorithms and systems for different application scenarios.
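To illustrate the idea of removing a training point's influence from a model, here is a minimal, hypothetical sketch (not the group's actual algorithm): a 1-D least-squares model that stores only sufficient statistics, so deleting a point's contribution yields exactly the model that would result from retraining on the remaining data.

```python
# Hypothetical illustration of exact unlearning for a 1-D least-squares
# model fit through the origin (y = w * x). The model keeps the sufficient
# statistics S_xx and S_xy, so a training point can be "unlearned" by
# subtracting its contribution -- the result is identical to retraining
# from scratch on the remaining data.

class UnlearnableLinearModel:
    def __init__(self):
        self.s_xx = 0.0  # running sum of x_i^2
        self.s_xy = 0.0  # running sum of x_i * y_i

    def learn(self, x, y):
        self.s_xx += x * x
        self.s_xy += x * y

    def unlearn(self, x, y):
        # Exactly remove one training point's influence.
        self.s_xx -= x * x
        self.s_xy -= x * y

    @property
    def weight(self):
        return self.s_xy / self.s_xx if self.s_xx else 0.0


data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
model = UnlearnableLinearModel()
for x, y in data:
    model.learn(x, y)

model.unlearn(*data[1])  # delete the second training point

# Retraining from scratch on the remaining points gives the same weight.
retrained = UnlearnableLinearModel()
for x, y in (data[0], data[2]):
    retrained.learn(x, y)

assert abs(model.weight - retrained.weight) < 1e-12
print(f"unlearned weight = {model.weight:.4f}")  # prints 2.0700
```

Real unlearning research targets far richer model classes, where influence removal is approximate or requires retraining strategies; this toy example only shows the goal of unlearning, namely that the post-deletion model should be indistinguishable from one never trained on the deleted data.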
Current Members: Junxiao Wang, Cheng-Long Wang, Liangyu Wang