Enhancing Multilingual Sentiment Analysis with Large Language Models: Current Trends and Future Direction


Date Published : 10 January 2026

Contributors

Sai Kiran Oruganti

Lincoln University College
Author

Keywords

Sentiment Analysis; Large Language Models; BERT; GPT; Multilingual Sentiment Analysis; Low-Resource Languages; Transformers; Fine-Tuning; Cross-Lingual Models; Sentiment Classification

Proceeding

Track

Engineering, Sciences, Mathematics & Computations

License

Copyright (c) 2026 Sustainable Global Societies Initiative


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Abstract

Sentiment analysis (SA) has witnessed substantial progress with the advent of large language models (LLMs) such as Bidirectional Encoder Representations from Transformers (BERT), Generative Pre-trained Transformer (GPT), and Text-to-Text Transfer Transformer (T5). These models have outperformed traditional methods and improved sentiment classification across languages. This review provides an in-depth analysis of LLMs and their application to sentiment analysis. We examine their advantages, challenges, and impact on sentiment classification in different languages, especially low-resource languages. Finally, we suggest future directions for enhancing these models.

References

No References


How to Cite

Oruganti, S. K. (2026). Enhancing Multilingual Sentiment Analysis with Large Language Models: Current Trends and Future Direction. Sustainable Global Societies Initiative, 1(2). https://vectmag.com/sgsi/paper/view/140