Harnessing Effectiveness of ResNet-50 and EfficientNet for Few-Shot Learning

  • Santoshi Vajrangi School of Computer Science and Engineering, KLE Technological University, Karnataka, India
  • Satvikraj Selar School of Computer Science and Engineering, KLE Technological University, Karnataka, India
  • Anupama P Bidargaddi School of Computer Science and Engineering, KLE Technological University, Karnataka, India
  • Rishi Hiremath School of Computer Science and Engineering, KLE Technological University, Karnataka, India
  • Vivek Yeli School of Computer Science and Engineering, KLE Technological University, Karnataka, India
Keywords: Few-shot learning, ResNet-50, EfficientNet, Image classification

Abstract

Inspired by human learning, where a new concept can be grasped from only a handful of examples, few-shot learning aims to build computer systems that can classify images in a comparable way. This article explores this field by comparing its implementation with two popular deep learning architectures: ResNet-50 and EfficientNet. Few-shot learning is valuable for tasks where obtaining large datasets is expensive or impossible; it allows machines to mimic the human ability to learn and generalize from small samples, opening possibilities in medical diagnostics, personalized recommendation systems, and robotics. Our main goal is to measure and compare the accuracy achieved by these models when trained on limited data and to show that EfficientNet attains better accuracy while requiring fewer parameters and less computation than ResNet-50. The VGG-Flowers dataset is used for the comparison. Our results show that EfficientNet outperforms ResNet-50 in overall accuracy (85.20% vs. 84.30%), precision (85.60% vs. 85.40%), recall (85.30% vs. 84.50%), and F1 score (85.45% vs. 84.95%). This suggests that EfficientNet's emphasis on computational efficiency and compound scaling may provide a slight advantage on limited data.
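The article does not include code; the sketch below illustrates one way such a comparison could be set up, assuming ImageNet-pretrained ResNet-50 and EfficientNet-B0 backbones from torchvision and metric computation with scikit-learn. The helper names (`build_backbone`, `evaluate`) and the choice of EfficientNet variant are illustrative assumptions, not taken from the article.

```python
# Minimal sketch (not the authors' implementation): load pretrained backbones,
# swap in a classifier head for the target classes, and compute the metrics
# reported in the abstract (accuracy, precision, recall, F1) on a held-out split.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import accuracy_score, precision_recall_fscore_support


def build_backbone(name: str, num_classes: int) -> nn.Module:
    """Return an ImageNet-pretrained backbone with a fresh classification head."""
    if name == "resnet50":
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    elif name == "efficientnet_b0":
        model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
        model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return model


@torch.no_grad()
def evaluate(model: nn.Module, loader, device: str = "cpu"):
    """Compute accuracy and macro-averaged precision/recall/F1 over a test loader."""
    model.eval().to(device)
    y_true, y_pred = [], []
    for images, labels in loader:
        logits = model(images.to(device))
        y_pred.extend(logits.argmax(dim=1).cpu().tolist())
        y_true.extend(labels.tolist())
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0
    )
    return acc, prec, rec, f1
```

With a DataLoader built over a small labelled split of VGG-Flowers (a few images per class), both backbones could be fine-tuned for a handful of epochs and then passed to `evaluate` to produce comparable accuracy, precision, recall, and F1 figures.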

Published
2023-12-24
How to Cite
Vajrangi, S., Selar, S., P Bidargaddi, A., Hiremath, R., & Yeli, V. (2023). Harnessing Effectiveness of ResNet-50 and EfficientNet for Few-Shot Learning. International Journal of Computer Communication and Informatics, 5(2), 46-55. https://doi.org/10.34256/ijcci2325
