Meta-learning, also known as learning to learn, has become an important research branch of machine learning. Unlike traditional deep learning, meta-learning can address one-to-many problems and performs better in few-shot learning, where only a few samples are available per class. In such tasks, meta-learning aims to quickly form a relatively reliable model from very limited samples. In this paper, we propose a modified LSTM-based meta-learning model that initializes and updates the parameters of the classifier (learner) using both short-term knowledge from a single task and long-term knowledge accumulated across multiple tasks. We reconstruct a compound loss function to remedy the deficiency caused by the separate loss in the original model, achieving a quick start and better stability without expensive operations. Our modification enables the meta-learner to perform better with few updates. Experiments conducted on Mini-ImageNet demonstrate the improved accuracies.
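The core idea of an LSTM-based meta-learner is that the learner's parameters are updated by learned gates rather than a fixed learning rate, mimicking the LSTM cell-state update. The sketch below is a minimal, hypothetical illustration of that update rule; the gate parameters (`W_f`, `b_f`, `W_i`, `b_i`) are assumed scalar values chosen for demonstration, whereas in the actual model they are produced by a trained LSTM meta-learner.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical gate parameters for illustration only; in the paper's model
# these quantities are computed by a learned LSTM meta-learner.
W_f, b_f = 0.5, 2.0   # forget gate: how much of the old parameters to keep
W_i, b_i = 0.5, -1.0  # input gate: an adaptive, learned learning rate

def meta_update(theta, grad, loss):
    """One LSTM-style parameter update: theta_t = f * theta_{t-1} - i * grad.

    This mirrors the LSTM cell-state update, with the learner's parameters
    playing the role of the cell state.
    """
    f = sigmoid(W_f * loss + b_f)  # forget gate value in (0, 1)
    i = sigmoid(W_i * loss + b_i)  # input gate value in (0, 1)
    return f * theta - i * grad

# Toy usage: one update step on a two-parameter learner.
theta = np.array([1.0, -0.5])
grad = np.array([0.2, -0.1])
theta_next = meta_update(theta, grad, loss=0.8)
```

Because the gates depend on the current loss (and, in the full model, on the gradient and the meta-learner's hidden state), the update can behave aggressively early in training and conservatively later, which is what allows a reliable learner to form within only a few updates.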
Lu, Min; Yang, Jingchao; Wang, Wenfeng; and Hu, Bin
"A Modified Meta-Learner for Few-Shot Learning,"
International Journal of Electronics and Electrical Engineering: Vol. 4: 1, Article 5.
Available at: https://www.interscience.in/ijeee/vol4/iss1/5