Gopalakrishnan, S., Singh, P. R., Fayek, H., Ramasamy, S., & Ambikapathi, A. (2022). Knowledge Capture and Replay for Continual Learning. 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). https://doi.org/10.1109/wacv51458.2022.00041
Abstract:
Deep neural networks model data for a task or a sequence of tasks, where the knowledge extracted from the data is encoded in the parameters and representations of the network. Extracting and utilizing these representations is vital when data is no longer available in the future, especially in a continual learning scenario. We introduce 'flashcards', which are visual representations that 'capture' the encoded knowledge of a network as a recursive function of some predefined random image patterns. In a continual learning scenario, flashcards help to prevent catastrophic forgetting by consolidating the knowledge of all the previous tasks. Flashcards need to be constructed only before learning the subsequent task; hence, they are independent of the number of tasks trained previously, making them task-agnostic. We demonstrate the efficacy of flashcards in capturing learned knowledge representations (as an alternative to the original data), and empirically validate them on a variety of continual learning tasks: reconstruction, denoising, and task-incremental classification, using several heterogeneous (varying background and complexity) benchmark datasets. Experimental evidence indicates that flashcards as a replay strategy (i) are 'task-agnostic', (ii) perform better than generative replay, and (iii) are on par with episodic replay without additional memory overhead.
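The abstract describes flashcards as images obtained by recursively passing predefined random patterns through the trained network. A minimal sketch of that capture step might look as follows; the function names, the number of recursive passes, and the toy smoothing model standing in for a trained autoencoder are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def capture_flashcards(model_fn, num_cards=8, shape=(8, 8),
                       num_passes=5, seed=0):
    """Construct 'flashcards' by recursively applying a trained
    network's forward pass to predefined random image patterns
    (hypothetical sketch of the idea in the abstract).

    model_fn : callable mapping an image array to an array of the
               same shape (stands in for the trained autoencoder).
    """
    rng = np.random.default_rng(seed)
    cards = []
    for _ in range(num_cards):
        x = rng.uniform(0.0, 1.0, size=shape)  # predefined random pattern
        for _ in range(num_passes):            # recursion: x_{t+1} = f(x_t)
            x = model_fn(x)
        cards.append(x)
    return np.stack(cards)                     # replay buffer of flashcards

def toy_model(x):
    # Purely illustrative stand-in for a trained network: a local
    # averaging (smoothing) operator with edge padding.
    pad = np.pad(x, 1, mode="edge")
    return (pad[:-2, 1:-1] + pad[2:, 1:-1]
            + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0

flashcards = capture_flashcards(toy_model, num_cards=4, shape=(8, 8))
print(flashcards.shape)  # (4, 8, 8)
```

In a continual learning loop, such flashcards would be generated once before training on the next task and then interleaved with the new task's data as replay samples, consolidating all previously learned tasks without storing original data.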
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the National Research Foundation, Prime Minister’s Office, Singapore - Campus for Research Excellence and Technological Enterprise (CREATE)
Grant Reference no. : NA
This research is supported by core funding from: Institute for Infocomm Research
Grant Reference no. : SC20/19-128310-CORE