International Journal of Electronics and Electical Engineering

Abstract

Meta-learning, also called learning to learn, has become an important research branch of machine learning. Unlike traditional deep learning, meta-learning can be applied to one-to-many problems and performs better in few-shot learning, where only a few samples are available per class. In these tasks, the meta-learner must quickly form a reliable model from very limited samples. In this paper, we propose a modified LSTM-based meta-learning model that initializes and updates the parameters of the classifier (learner) using both short-term knowledge from one task and long-term knowledge accumulated across multiple tasks. We reconstruct a compound loss function to remedy the deficiency of the separate loss functions in the original model, aiming for a quick start and better stability without expensive operations. Our modification enables the meta-learner to perform better with few updates. Experiments conducted on Mini-ImageNet demonstrate the improved accuracies.
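To make the abstract's core idea concrete, the LSTM-based update it builds on can be sketched as follows. This is a minimal illustration in the spirit of the optimization-as-LSTM formulation the paper modifies (theta_t = f_t * theta_{t-1} + i_t * (-grad_t)); the gate inputs, weight names, and function signature here are illustrative assumptions, not the paper's exact formulation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_style_update(theta, grad, loss, w_f, w_i):
    """One LSTM-style meta-learner step (illustrative sketch).

    Per parameter: theta_t = f_t * theta_{t-1} + i_t * (-grad_t).
    The forget gate f_t decides how much of the old parameter to keep
    (long-term knowledge); the input gate i_t acts as a learned,
    per-parameter learning rate on the current task's gradient
    (short-term knowledge).
    """
    new_theta = []
    for th, g in zip(theta, grad):
        feats = (g, loss, th)  # assumed gate inputs: gradient, loss, old parameter
        f_t = sigmoid(sum(w * x for w, x in zip(w_f, feats)))
        i_t = sigmoid(sum(w * x for w, x in zip(w_i, feats)))
        new_theta.append(f_t * th + i_t * (-g))
    return new_theta
```

In this sketch the meta-learner's trainable state is the gate weights w_f and w_i, which are shared across tasks; the learner's parameters theta are re-initialized and updated per task, which is how the model combines cross-task and within-task knowledge.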

DOI

10.47893/IJEEE.2022.1184
