Few-Shot Sentiment Adaptation: A MAML-Based Framework for Low-Resource NLP
DOI: https://doi.org/10.63503/j.ijcma.2025.94

Keywords: Few-Shot Learning, Sentiment Analysis, Low-Resource NLP, Meta-Learning, MAML, Few-Shot Classification, Transfer Learning, Multilingual NLP, Low-Resource Sentiment Datasets, Adaptation in NLP

Abstract
Sentiment analysis in low-resource languages is hindered by the scarcity of labeled data. Traditional deep learning models perform poorly in these settings because they require large datasets to reach their full potential. To address this limitation, we propose Few-Shot Sentiment Adaptation (FSSA), a meta-learning framework based on Model-Agnostic Meta-Learning (MAML) that classifies sentiment from only a few labeled examples. By meta-training on high-resource languages and then adapting to low-resource ones, FSSA rapidly acquires sentiment patterns under a 5-way, 5-shot protocol. We evaluate FSSA on public low-resource sentiment datasets against fine-tuned BERT models, zero-shot learning, and other few-shot classification techniques. FSSA yields substantial improvements over these baselines and remains adaptable even with very limited data. Unlike conventional transfer learning, FSSA supports rapid adaptation without extensive fine-tuning, making it well suited to real-world low-resource Natural Language Processing (NLP) applications. This research helps bridge the gap between high- and low-resource languages in NLP by reducing the reliance on large annotated datasets, and it makes few-shot meta-learning for sentiment analysis more accessible, paving the way for more robust and efficient language models.
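To make the episodic training behind FSSA concrete, the following is a minimal sketch of a first-order MAML loop for the 5-way, 5-shot setting described above, written in PyTorch (2.0 or later, for torch.func.functional_call). It illustrates the general technique only: the SentimentHead classifier, the random sample_episode() stub, and all hyperparameters (inner learning rate, adaptation steps, tasks per meta-batch) are assumptions for illustration, not details from the paper, and the first-order approximation stands in for whichever MAML variant FSSA actually uses.

# Minimal first-order MAML sketch for 5-way, 5-shot sentiment episodes.
# SentimentHead, sample_episode(), and all hyperparameters are illustrative
# assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_WAY, K_SHOT, EMB_DIM, TASKS_PER_BATCH = 5, 5, 128, 4

class SentimentHead(nn.Module):
    # Small classifier over pre-computed sentence embeddings; a real system
    # would place this on top of a multilingual encoder.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_WAY))

    def forward(self, x):
        return self.net(x)

def sample_episode():
    # Stub: random "embeddings". In practice, draw K_SHOT support and
    # K_SHOT query sentences per class from a high-resource sentiment corpus.
    xs = torch.randn(N_WAY * K_SHOT, EMB_DIM)
    ys = torch.arange(N_WAY).repeat_interleave(K_SHOT)
    xq = torch.randn(N_WAY * K_SHOT, EMB_DIM)
    yq = torch.arange(N_WAY).repeat_interleave(K_SHOT)
    return xs, ys, xq, yq

def inner_adapt(model, xs, ys, inner_lr=0.01, steps=3):
    # A few gradient steps on the support set produce task-specific
    # "fast weights" without overwriting the shared initialization.
    fast = {n: p.clone() for n, p in model.named_parameters()}
    for _ in range(steps):
        loss = F.cross_entropy(
            torch.func.functional_call(model, fast, (xs,)), ys)
        grads = torch.autograd.grad(loss, list(fast.values()))
        fast = {n: p - inner_lr * g for (n, p), g in zip(fast.items(), grads)}
    return fast

model = SentimentHead()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):  # meta-training over episodes
    meta_opt.zero_grad()
    for _ in range(TASKS_PER_BATCH):
        xs, ys, xq, yq = sample_episode()
        fast = inner_adapt(model, xs, ys)  # adapt on the support set
        q_loss = F.cross_entropy(
            torch.func.functional_call(model, fast, (xq,)), yq)
        (q_loss / TASKS_PER_BATCH).backward()  # first-order meta-gradient
    meta_opt.step()

At test time the same inner_adapt routine would run on the handful of labeled examples available in the low-resource language, with the adapted fast weights used for prediction. Full second-order MAML would additionally backpropagate through the inner-loop updates (create_graph=True in the inner torch.autograd.grad call), a choice the abstract leaves unspecified.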