In this work, we propose a method for automatically determining the sentiment of text reviews posted by online store customers using recurrent neural networks equipped with two LSTM blocks of 1024 and 128 neurons, each employing ReLU activation. For encoding text into vectors, we use the FastText model, which captures the morphological nuances of the Russian language by extracting information from word sub-units. The two successive LSTM layers enable modeling of long-term contextual dependencies, which is crucial for analyzing textual data. This study aims to overcome the limitations of existing methods that only partially account for semantic and morphological language features. At the core of our approach is the automatic extraction of contextual dependencies, which is especially relevant when processing unstructured data. The experimental part of the research consists of developing a model trained on a dataset of 60,000 annotated reviews, split into training and test sets in an 80:20 ratio, and conducting a comparative analysis of its classification results against those obtained with logistic regression and single-layer LSTM models. The results demonstrate an increase in classification accuracy, reaching up to 90% on the test set, through the use of two LSTM layers. Comparative evaluation shows that our two-layer LSTM architecture outperforms the classical logistic regression algorithm and single-layer LSTM models across key metrics (Accuracy ≈ 0.89; F1-Score ≈ 0.90; AUC ≈ 0.95). The proposed methodology enables effective sentiment analysis of large volumes of text data. Moreover, it exhibits high practical relevance for scalable monitoring of user feedback and can be extended by integrating attention mechanisms (self-attention) and hybrid architectures that combine the strengths of RNNs and Transformers.
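The architecture described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a FastText embedding dimension of 300 (the common pretrained default), binary sentiment output, and interprets "ReLU activation" as a ReLU applied to each LSTM block's outputs; all layer and variable names are hypothetical.

```python
import torch
import torch.nn as nn

class TwoLayerSentimentLSTM(nn.Module):
    """Sketch of a two-layer LSTM sentiment classifier over FastText vectors.

    Layer sizes (1024 and 128 units) follow the abstract; the embedding
    dimension (300) and the binary output head are assumptions.
    """

    def __init__(self, emb_dim: int = 300):
        super().__init__()
        # First LSTM block: 1024 units, returns the full output sequence
        self.lstm1 = nn.LSTM(emb_dim, 1024, batch_first=True)
        # Second LSTM block: 128 units, consumes the first block's outputs
        self.lstm2 = nn.LSTM(1024, 128, batch_first=True)
        self.relu = nn.ReLU()
        # Single logit for binary (positive/negative) sentiment
        self.head = nn.Linear(128, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, emb_dim) — pre-computed FastText word vectors
        h, _ = self.lstm1(x)
        h = self.relu(h)
        h, _ = self.lstm2(h)
        # Use the last time step's hidden output as the review representation
        h = self.relu(h[:, -1, :])
        return self.head(h)  # raw logit; apply sigmoid for a probability

# Shape check with a dummy batch of 2 reviews, 10 tokens each
model = TwoLayerSentimentLSTM()
logits = model(torch.randn(2, 10, 300))
```

In practice the logit would be passed through a sigmoid and trained with `nn.BCEWithLogitsLoss` against the annotated labels.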
LSTM, RECURRENT NEURAL NETWORKS, TEXT SENTIMENT