Zhou, Y., Wang, M., Gupta, M., Ambikapathi, A., Suganthan, P. N., & Ramasamy, S. (2022). Investigating Robustness of Biological vs. Backprop Based Learning. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). https://doi.org/10.1109/icassp43922.2022.9747750
Robustness of learning algorithms remains an important open problem, both from the perspective of adversarial attacks and for improving generalization. In this work, we investigate the robustness of the biologically inspired Hebbian learning algorithm in depth. We find that Hebbian learning based algorithms outperform conventional backpropagation-trained networks such as CNNs by a large margin of up to 18% on the CIFAR-10 dataset under the addition of noise. We highlight that an important reason for this is the underlying representations learned by each algorithm. Specifically, we find that the Hebbian method learns the most robust representations among the methods compared, which helps it generalize better. We also conduct ablations on the Hebbian network and show that the robustness of the model drops by up to 16% on the CIFAR-10 dataset when the representational capacity of the network is degraded. Hence, we find that the learned representations play an important role in the resulting robustness of the models. We conduct experiments on multiple datasets and show that the results hold across all of them and at various noise levels.
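The abstract contrasts Hebbian learning with backpropagation. The paper's exact Hebbian variant is not specified in this excerpt, so the sketch below is only illustrative: it uses Oja's rule, a classic normalized Hebbian update, to show how weights can move toward correlated input directions using purely local activity, with no backpropagated error signal. All names and data here are hypothetical.

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One Hebbian step (Oja's rule): strengthen w along input x in
    proportion to the post-synaptic activation y, with a decay term
    -y^2 * w that keeps the weight norm bounded."""
    y = w @ x                        # local post-synaptic activation
    return w + lr * y * (x - y * w)  # Hebbian growth + Oja normalization

rng = np.random.default_rng(0)
# Synthetic 2-D inputs with a dominant correlated direction along [1, 1]
X = rng.normal(size=(2000, 2)) + rng.normal(size=(2000, 1)) * np.array([1.0, 1.0])

w = rng.normal(size=2)
w /= np.linalg.norm(w)
for x in X:
    w = oja_update(w, x)

# After training, w approximately aligns with the principal direction
# of the inputs, i.e. roughly proportional to [1, 1].
print(w)
```

Oja's rule converges to the leading principal component of the input distribution, which is one concrete sense in which a Hebbian learner extracts the dominant structure of its inputs using only local information.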
This research is supported by core funding from I2R (Grant Reference no.: CR-2019-003).