The entry point for the code is `run.py`. The training code is in the function `train` and the evaluation is in the function `test` in `main.py`.
This is my submission for the Internship Application at SkyLark Labs. The following is a summary of my approach:
- I use the memory-augmented neural network (MANN) described in the paper "One-shot Learning with Memory-Augmented Neural Networks".
- I use the MNIST dataset for my test because it is the simplest dataset. In my formulation I split the dataset into two parts: one part contains the labels 0, 1, 2, 3, 4 (the first task) and the other contains the labels 5, 6, 7, 8, 9 (the second task); a sketch of this split is given after the methodology list below.
- My training methodology is formulated as follows (an example driver loop is sketched after this list):
  - i. I train the MANN on the first task until convergence, using the `--labels` option to specify which labels are part of the first task. If we have to load a pretrained model, we use the `--pretrained_model` option.
  - ii. Next, I train the MANN on the second task until convergence, using the `--labels` option again to specify which labels are part of the second task. Here, I also use the `--pretrained_model` option to load the model from the previous task, which leads to transfer of learning. Additionally, I use the `--no_write` option, which allows us to train only the weights of the convolutional neural network while keeping the memory matrix unchanged.
  - iii. I repeat steps i. and ii. until no further learning happens on either task, and then stop.
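For illustration, the label split described above can be done as follows. This is only a minimal sketch assuming torchvision's MNIST loader; it is not the repository's actual data pipeline, and the helper name `task_subset` is made up.

```python
import torch
from torchvision import datasets, transforms
from torch.utils.data import Subset

def task_subset(dataset, labels):
    """Return the subset of MNIST whose targets are in `labels` (hypothetical helper)."""
    mask = torch.isin(dataset.targets, torch.tensor(labels))
    return Subset(dataset, torch.nonzero(mask, as_tuple=True)[0].tolist())

mnist = datasets.MNIST(root="data", train=True, download=True,
                       transform=transforms.ToTensor())
task1 = task_subset(mnist, [0, 1, 2, 3, 4])  # first task: labels 0-4
task2 = task_subset(mnist, [5, 6, 7, 8, 9])  # second task: labels 5-9
```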
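Below is a minimal sketch of the driver loop for the methodology above. It assumes `run.py` is called from the command line with the options mentioned; the exact argument format, the checkpoint file names, and the fixed number of rounds (standing in for a real convergence check) are placeholders, not the repository's actual interface.

```python
import subprocess

TASK1 = ["0", "1", "2", "3", "4"]   # labels of the first task
TASK2 = ["5", "6", "7", "8", "9"]   # labels of the second task
ROUNDS = 3          # placeholder: in practice, repeat until neither task improves
checkpoint = None   # placeholder path of the most recently saved model

for _ in range(ROUNDS):
    # Step i: train on the first task, loading the latest checkpoint if there is one.
    cmd = ["python", "run.py", "--labels", *TASK1]
    if checkpoint is not None:
        cmd += ["--pretrained_model", checkpoint]
    subprocess.run(cmd, check=True)
    checkpoint = "model_task1.pt"   # placeholder checkpoint name

    # Step ii: train on the second task from the task-1 model, with --no_write so
    # that the memory matrix stays unchanged and only the CNN weights are updated.
    cmd = ["python", "run.py", "--labels", *TASK2,
           "--pretrained_model", checkpoint, "--no_write"]
    subprocess.run(cmd, check=True)
    checkpoint = "model_task2.pt"   # placeholder checkpoint name
```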
I believe that the asymmetry between steps i. and ii. allows us to train on both tasks while also overcoming catastrophic forgetting, thus achieving continual learning.
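To make this asymmetry concrete, here is a minimal sketch of a MANN-style memory module whose write step can be switched off, in the spirit of the `--no_write` option. It assumes PyTorch, uses a simplified overwrite rule instead of the paper's least-recently-used-access writes, and is illustrative only; it is not the repository's implementation.

```python
import torch
import torch.nn.functional as F

class ExternalMemory(torch.nn.Module):
    """Illustrative MANN-style memory: content-based read, switchable write."""

    def __init__(self, num_slots=128, slot_size=40):
        super().__init__()
        # The memory matrix is a buffer, not a learned parameter:
        # it is filled and overwritten during episodes.
        self.register_buffer("memory", torch.zeros(num_slots, slot_size))

    def read(self, key):
        # Content-based read: cosine-similarity attention over memory rows.
        sim = F.cosine_similarity(self.memory, key.unsqueeze(0), dim=1)
        weights = F.softmax(sim, dim=0)
        return weights @ self.memory

    def write(self, key, no_write=False):
        # With no_write=True the memory matrix is left untouched, so only the
        # controller (here, the CNN) keeps learning: this is the asymmetry
        # used for the second task.
        if no_write:
            return
        # Simplified write rule: overwrite the least-similar slot with the key
        # (the paper uses least-recently-used access instead).
        slot = torch.argmin(F.cosine_similarity(self.memory, key.unsqueeze(0), dim=1))
        self.memory[slot] = key.detach()
```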
The following is a schematic of my approach:
PS: While I have used an external memory matrix to address catastrophic forgetting, I believe we can also look to more modern methods where tasks are defined using prompts. For example, see this recent paper by Facebook AI. Adopting such methods in addition to the augmented memory might be an interesting idea to try.