Transfer Learning for Few-Shot Image Classification
DOI: https://doi.org/10.54097/s8bfcs48

Keywords: Transfer Learning; Few-Shot Learning; Image Classification; Domain Adaptation; Knowledge Distillation

Abstract
Image classification has numerous vital applications across industry and research. In many practical settings, however, it is difficult to obtain enough high-quality labelled data to train reliable models. As a classic and widely used paradigm for few-shot learning, transfer learning effectively mitigates the overfitting caused by limited samples: it improves model generalisation by transferring knowledge learned from a large-scale source domain (e.g., ImageNet) to a few-shot target task. This paper systematically surveys transfer-learning-based few-shot image classification methods, categorising them into three main types: feature-representation-based transfer, parameter-based transfer, and relational-knowledge-based transfer. The survey finds that transfer learning methods such as Adversarial Discriminative Domain Adaptation (ADDA) and knowledge distillation serve as practical and efficient tools for few-shot image classification. Nevertheless, several critical challenges, including negative transfer, multimodal heterogeneous transfer, and domain shift, still require careful consideration and effective resolution.
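As an illustration of the knowledge-distillation style of transfer mentioned above, the following minimal NumPy sketch computes the softened-softmax distillation loss introduced by Hinton et al. (2015): the teacher's temperature-scaled output distribution supervises the student. The function names and the temperature value are illustrative assumptions, not details taken from the surveyed paper.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Mean KL divergence between the teacher's softened distribution and
    the student's, scaled by T**2 so gradients stay comparable across T."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student's softened predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return (T ** 2) * kl.mean()
```

When the student's logits match the teacher's, the loss is zero; in practice this term is combined with the ordinary cross-entropy on the few labelled target-domain examples.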
Copyright (c) 2025 Highlights in Science, Engineering and Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.