Remember What You Want To Forget

Today I will be analyzing and providing a brief summary of my understanding of the paper titled "Remember What You Want to Forget".

This paper presents an algorithm for machine unlearning, an important practice for privacy and security in machine learning. Machine unlearning algorithms aim to make a model forget certain data points it was trained on, with the added complexity of retaining the model's original performance.

This has a wide range of applications, especially in the current times when ML models are almost everywhere, learning from data that we often consent to sharing rather naively.

Fundamentals

Empirical Loss

The paper starts with some very fundamental concepts: a distribution D that describes how the data is generated, a loss function f(w, z), and a parameter space W. As usual, the goal of any machine learning algorithm is to minimize the test loss (population risk) F(w) = E_{z∼D}[f(w, z)], which measures the average performance of a model on all possible data instances according to D. However, D is often unknown, so we resort to working with a sample S = {z₁, …, zₙ} drawn from D and minimize the empirical loss F̂(w) = (1/n) ∑_{z∈S} f(w, z) instead.
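To make this concrete, here is a minimal Python sketch (my own illustration, not from the paper) showing how the empirical risk computed on a sample approximates the population risk:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, z):
    # Squared loss between parameter w and data point z.
    return (w - z) ** 2

# Population risk F(w) = E_{z~D}[f(w, z)] for D = N(mu, 1).
# For squared loss this has the closed form (w - mu)^2 + 1.
mu = 3.0
def population_risk(w):
    return (w - mu) ** 2 + 1.0

# Empirical risk on a sample S of n points drawn from D.
S = rng.normal(mu, 1.0, size=1000)
def empirical_risk(w):
    return np.mean(loss(w, S))

w = 2.5
print(population_risk(w))   # exact value: 1.25
print(empirical_risk(w))    # close to 1.25 for large n
```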

Sample Complexity

Sample complexity, in simple terms, is the minimum number of samples required to reach a target level of performance, measured as the gap between the population risk of the hypothesis produced by the algorithm and the population risk of the best possible hypothesis. To be precise, the γ-sample complexity is the minimum number of samples n such that there exists a learning algorithm A whose expected excess risk (the difference between the population risk of its output and that of the best hypothesis) is at most γ.
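Written out formally (paraphrasing the paper's definition in my own notation), this reads:

```latex
n(\gamma) \;=\; \min\Big\{ n \;:\; \exists\, A \ \text{such that}\
  \mathbb{E}\big[F(A(S))\big] \;-\; \min_{w \in \mathcal{W}} F(w) \;\le\; \gamma \Big\}
```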

The authors take the value γ = 0.01 and cite that for convex and strongly convex losses the sample complexity behaves as

n(γ) = O(1/γ²) for convex losses, and n(γ) = O(1/γ) for strongly convex losses.

That is, the number of samples n can be a constant independent of the dimension d of the parameter space W.

Unlearning

Unlearning is defined as the process of modifying a learned model to get rid of the influence of a certain set of data points while keeping the impact on the model's performance as small as possible.

Here we suppose a learning algorithm A that takes as input a dataset S. Next, we consider an unlearning algorithm A' that takes as input the trained model A(S), a set of data points to be deleted U, and some statistical characteristic of the original data, T(S).

One goal here is to minimize the storage size of this characteristic T(S); the worst case would be storing the entire dataset itself.

Here the authors introduce the concept of (ε, δ)-unlearning which, in simple terms, requires that an observer with high probability cannot differentiate between 1) a model trained on a set of data points and then run through the unlearning algorithm to delete m data points, and 2) a model trained on the dataset minus those m data points, with no unlearning applied afterward. This is termed privacy-preserving unlearning, and it gets its roots from differential privacy.
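Spelled out (again paraphrasing the paper's definition), an unlearning algorithm A' satisfies (ε, δ)-unlearning if, for every measurable set of models E, dataset S, and deletion set U of size at most m:

```latex
\Pr\big[A'(U,\, A(S),\, T(S)) \in E\big]
  \;\le\; e^{\varepsilon}\, \Pr\big[A'(\varnothing,\, A(S \setminus U),\, T(S \setminus U)) \in E\big] + \delta
```

together with the symmetric inequality in which the two probabilities are exchanged.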

Next comes deletion capacity, which means, in simple terms, the maximum number of data points that can be deleted without degrading the model's performance below a certain level; the level here is an excess population risk of 0.01.
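Roughly, in the paper's notation, the deletion capacity is the largest m for which the expected excess population risk after unlearning stays below γ = 0.01:

```latex
m_{\varepsilon,\delta}(d, n) \;=\; \max\Big\{ m \;:\;
  \mathbb{E}\big[F\big(A'(U,\, A(S),\, T(S))\big)\big] - \min_{w \in \mathcal{W}} F(w) \le 0.01
  \ \ \text{for all } U \subseteq S,\ |U| \le m \Big\}
```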

Overall, the goal of the paper is to achieve a high deletion capacity while minimizing the size of the statistic T(S) and making it independent of n, the number of data points, which can be very large.

Methodology

The paper highlights one important fact that is often ignored by other researchers: most of the work in this field tries to optimize the empirical risk, i.e., the training loss, which can be a poor proxy for the population risk, i.e., the test loss.

This is demonstrated using a simple example where we compute the mean of a given set of 1-D data points. For data points z₁, z₂, …, zₙ and the loss function f(w, z) = (w − z)², the empirical risk minimizer is the average:

ŵ = (z₁ + z₂ + ⋯ + zₙ) / n

The unlearning algorithm is simple in this case: after receiving deletion requests for a set U of m data points, the new average can be computed exactly as

ŵ′ = (n·ŵ − ∑_{z∈U} z) / (n − m)
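As a quick sketch (my own code, assuming the statistic T(S) stores just the count n alongside the learned mean), exact unlearning of the mean looks like this:

```python
import numpy as np

def learn_mean(S):
    """'Train' by minimizing the sum of (w - z)^2: the minimizer is the mean."""
    return np.mean(S), len(S)        # model w_hat plus the statistic T(S) = n

def unlearn_mean(w_hat, n, U):
    """Exactly remove the influence of the points in U from the learned mean."""
    m = len(U)
    return (n * w_hat - np.sum(U)) / (n - m), n - m

S = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w_hat, n = learn_mean(S)             # w_hat = 3.0
U = np.array([4.0, 5.0])             # deletion requests
w_new, n_new = unlearn_mean(w_hat, n, U)
print(w_new)                         # 2.0, exactly the mean of [1, 2, 3]
```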

However, consider the case where someone selects the points to be deleted adversarially, for example by always requesting deletion of the largest points. The new average will then be systematically lower than the mean of the underlying distribution, so minimizing the empirical risk no longer guarantees a small population risk.

The Proposed Algorithm

The algorithm is based on the assumption that the loss function is λ-strongly convex, L-Lipschitz, and M-Hessian Lipschitz with respect to the model parameter w.
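For reference, these standard conditions can be written (for all parameters w, w′ and data points z) as:

```latex
\begin{aligned}
&\lambda\text{-strong convexity:} && \nabla^2 f(w, z) \succeq \lambda I, \\
&L\text{-Lipschitzness:} && \|\nabla f(w, z)\| \le L, \\
&M\text{-Hessian Lipschitzness:} && \|\nabla^2 f(w, z) - \nabla^2 f(w', z)\| \le M \,\|w - w'\|.
\end{aligned}
```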

The learning algorithm A_sc takes as input a dataset S sampled from a certain distribution D and computes the parameter ŵ by minimizing the empirical loss (training loss) built from f(w, z):

ŵ = argmin_{w∈W} (1/n) ∑_{z∈S} f(w, z)

A_sc (the learning algorithm) then returns this parameter ŵ along with additional statistical data T(S), namely a matrix called H, which represents the Hessian of the empirical loss. The Hessian is a mathematical object that captures the curvature of a function; here it is calculated for the empirical loss at the point ŵ.

Once the Hessian matrix is available, the unlearning algorithm computes the new parameter with a single Newton-style step:

w̄ = ŵ + (1/(n − m)) · Ĥ⁻¹ · ∑_{z∈U} ∇f(ŵ, z), where Ĥ = (1/(n − m)) · (n·H − ∑_{z∈U} ∇²f(ŵ, z))

Here U is the set of m deleted points and Ĥ is the Hessian recomputed as if those points had never been in the training set.

Finally, random Gaussian noise is added to this parameter to maintain privacy. The added noise is smaller than the amount typically required for differentially private learning, thus giving a quadratic improvement in deletion capacity.
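Putting the pieces together, here is a minimal Python sketch of the unlearning step (my own illustration with hypothetical helper names; sigma is left as a placeholder rather than the paper's exact noise calibration):

```python
import numpy as np

def unlearn(w_hat, H_sum, n, U, grad_f, hess_f, sigma):
    """One Newton-style unlearning step followed by Gaussian noise.

    w_hat  : learned parameter (d-dimensional vector)
    H_sum  : sum of per-sample Hessians at w_hat, i.e. n times the
             empirical Hessian H stored as T(S)
    n      : original number of training points
    U      : list of data points to delete (m of them)
    grad_f, hess_f : per-sample gradient and Hessian of the loss
    sigma  : noise scale (set per the paper's analysis; placeholder here)
    """
    m = len(U)
    d = w_hat.shape[0]

    # Hessian of the empirical loss on the remaining n - m points.
    H_remaining = (H_sum - sum(hess_f(w_hat, z) for z in U)) / (n - m)

    # Newton-style correction that removes the deleted points' influence.
    g = sum(grad_f(w_hat, z) for z in U)
    w_bar = w_hat + np.linalg.solve(H_remaining, g) / (n - m)

    # Gaussian noise for the (eps, delta) indistinguishability guarantee.
    return w_bar + sigma * np.random.default_rng().standard_normal(d)
```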

Conclusion

In conclusion, the paper explores an interesting methodology: using the Hessian matrix as a characteristic of the data to help delete data points from a trained model. Further, the addition of noise provides the privacy guarantee while also improving the deletion capacity, i.e., more points can be deleted without significantly affecting the model's performance.

While the paper also discusses the proofs of the theorems mentioned, they are beyond the scope of this article. Overall, machine unlearning is an interesting field of study, and I hope this article inspires you to delve deeper into the subject.
