Enhancing Multilingual Sentiment Analysis with Large Language Models: Current Trends and Future Directions
Contributors
Sai Kiran Oruganti
Keywords
Proceeding
Track
Engineering, Sciences, Mathematics & Computations
License
Copyright (c) 2026 Sustainable Global Societies Initiative

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Sentiment analysis (SA) has witnessed substantial progress with the advent of large language models (LLMs) such as Bidirectional Encoder Representations from Transformers (BERT), Generative Pre-trained Transformer (GPT), and Text-to-Text Transfer Transformer (T5). These models have outperformed traditional methods and improved sentiment classification across languages. This review provides an in-depth analysis of LLMs and their application to sentiment analysis. We examine their advantages, their challenges, and their impact on sentiment classification across languages, especially low-resource languages. Finally, we suggest future directions for enhancing these models.
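To make the classification setting concrete, the sketch below shows how an encoder model such as BERT is typically adapted for sentiment analysis: a pooled sentence embedding is passed through a linear head followed by a softmax over sentiment labels. The embedding, weights, and labels here are hand-set toy values for illustration only, not taken from any real model.

```python
import math

def softmax(logits):
    # Convert raw class scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_sentiment(pooled_embedding, weights, bias, labels):
    # Linear classification head applied to a pooled sentence
    # embedding (e.g. BERT's [CLS] vector after fine-tuning).
    logits = [sum(w * x for w, x in zip(row, pooled_embedding)) + b
              for row, b in zip(weights, bias)]
    probs = softmax(logits)
    best = max(range(len(labels)), key=lambda i: probs[i])
    return labels[best], probs

# Toy 4-dimensional "embedding" and hand-set weights (illustrative only).
labels = ["negative", "neutral", "positive"]
weights = [[-1.0,  0.2, 0.0, 0.1],
           [ 0.0,  0.0, 0.5, 0.0],
           [ 1.0, -0.1, 0.0, 0.3]]
bias = [0.0, 0.0, 0.0]
embedding = [0.8, -0.2, 0.1, 0.4]

label, probs = classify_sentiment(embedding, weights, bias, labels)
print(label)  # the label with the highest softmax probability
```

In practice the embedding comes from a pretrained multilingual encoder and the head's weights are learned by fine-tuning on labeled sentiment data; the multilingual gains discussed in this review stem from the shared representations such encoders learn across languages.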