Adversarial Corruption

Image credit: Ian Goodfellow et al.

Neural networks have been empirically observed to be robust even to adversarial corruptions of the data. However, only a handful of theoretical results exist that guarantee the robustness of NNs. We have made an attempt to address this using results and techniques from robust statistics. To measure the robustness of an algorithm, we want to derive its breakdown point: the largest number (or fraction) of adversarially corrupted points the algorithm can tolerate while still guaranteeing recovery. The existence of breakdown-point results for simpler learners such as linear regression was a primary motivation for pursuing this project through robust statistics.
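
As a concrete illustration of the breakdown-point idea (not taken from the report), the classical contrast between the mean and the median shows how the notion behaves: the mean can be ruined by a single corrupted point, while the median tolerates corruption of just under half the sample. The snippet below is a minimal numpy sketch of this; the sample sizes and corruption values are purely illustrative.

```python
# Illustrative sketch: breakdown points of the mean vs. the median in 1-D.
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(loc=0.0, scale=1.0, size=100)   # true location is 0

for k in [0, 10, 49]:                              # number of corrupted points
    corrupted = clean.copy()
    corrupted[:k] = 1e6                            # adversary moves k points arbitrarily far
    print(f"k={k:3d}  mean={corrupted.mean():12.2f}  median={np.median(corrupted):6.2f}")

# The mean breaks down with even one corrupted point (breakdown point 0),
# while the median stays near 0 until almost half the sample is corrupted
# (breakdown point 1/2). The project aims at this kind of guarantee for
# neural-network training.
```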

To simplify the problem, we consider only the regression setting with single-hidden-layer neural networks: ReLU activation on each hidden node, an output node with no activation, and the squared loss. We analysed two different training procedures and how they change the training trajectory and, in turn, the final convergence. One approach exploited the property that ReLU divides the input space into two halves (active and inactive). The other approach split the objective into a difference of convex functions and trained it with alternating optimization; this procedure was well behaved in terms of convergence and theoretically sound.
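
As a rough illustration of the assumed setup (not the report's exact procedures), the sketch below fits a single-hidden-layer ReLU network with a linear output under the squared loss by alternating between the two parameter blocks: with the hidden weights fixed, the output weights solve an ordinary least-squares problem on the ReLU features, and the hidden weights are then updated with a gradient step that only flows through the active half of each ReLU. All variable names, dimensions, and the toy data are hypothetical.

```python
# Minimal sketch (assumed setup): one hidden ReLU layer, linear output, squared loss,
# trained by alternating updates over the two parameter blocks.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 5, 16                       # samples, input dim, hidden width
X = rng.normal(size=(n, d))
y = np.maximum(X @ rng.normal(size=d), 0)  # toy regression target

W = rng.normal(size=(d, m)) / np.sqrt(d)   # hidden-layer weights
a = np.zeros(m)                            # output weights (no activation)
lr = 1e-2

for step in range(500):
    H = np.maximum(X @ W, 0.0)                         # ReLU features
    a, *_ = np.linalg.lstsq(H, y, rcond=None)          # exact least-squares update for a
    resid = H @ a - y                                  # squared-loss residual
    active = (X @ W > 0).astype(float)                 # ReLU active/inactive split
    grad_W = X.T @ ((resid[:, None] * a[None, :]) * active) / n
    W -= lr * grad_W                                   # gradient step on W

print("final mean squared error:", np.mean((np.maximum(X @ W, 0.0) @ a - y) ** 2))
```

The `active` mask is where the ReLU's division of the input space into active and inactive halves shows up: gradients only propagate through the half-space where each hidden unit is switched on.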

For complete details about these approaches, please see the attached PDF.
