Paper repro: Deep Metalearning using “MAML” and “Reptile”


In this post I reproduce two papers in the field of metalearning: MAML and the similar Reptile.

The goal of both of these papers is to solve the K-shot learning problem. In K-shot learning, the goal is to train a neural network that generalizes from a very small number of examples (often around 10) rather than the thousands of examples found in datasets like ImageNet.

The metalearning approach of both Reptile and MAML is to come up with an initialization for neural networks that is easily adaptable to similar tasks. This differs from “Learning to Learn by Gradient Descent by Gradient Descent”, which learned an optimizer rather than an initialization.
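To make the idea of “learning an initialization” concrete, here is a minimal NumPy sketch of the Reptile meta-update: adapt a copy of the weights to one sampled task with a few SGD steps, then move the initialization toward the adapted weights. The sine-wave task distribution, the tiny random-feature model, and all hyperparameters below are illustrative stand-ins, not the papers' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random features so the "network" is just a linear model on tanh features.
FEATS = rng.normal(size=(1, 32))

def features(x):
    return np.tanh(x[:, None] * FEATS)  # shape (n, 32)

def predict(theta, x):
    return features(x) @ theta

def make_sine_task():
    """Sample a sinusoid regression task (random amplitude and phase)."""
    amp = rng.uniform(0.1, 5.0)
    phase = rng.uniform(0.0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def inner_sgd(theta, task, k=5, lr=0.02, n=10):
    """k steps of SGD on one K-shot task; returns the adapted weights phi."""
    phi = theta.copy()
    for _ in range(k):
        x = rng.uniform(-5, 5, size=n)
        y = task(x)
        grad = 2 * features(x).T @ (predict(phi, x) - y) / n  # MSE gradient
        phi -= lr * grad
    return phi

# Reptile meta-training: theta <- theta + eps * (phi - theta),
# nudging the initialization toward each task's adapted weights.
theta = np.zeros(32)
eps = 0.1
for _ in range(200):
    task = make_sine_task()
    phi = inner_sgd(theta, task)
    theta += eps * (phi - theta)
```

MAML differs in the meta-update: instead of the simple `phi - theta` direction, it backpropagates through the inner SGD steps themselves to get the meta-gradient.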


If you would like to cite this blog post, you can use this template:

@misc{ecoffet_maml_reptile,
  title={Paper repro: Deep Metalearning using “MAML” and “Reptile”},
  author={Ecoffet, Adrien},
  url = {},
}