THAUFIQURAHMAN, DIAZ (2025) OPTIMALISASI SP4N LAPOR!: INTEGRASI BERT DAN LSTM UNTUK PENINGKATAN ANALISIS SENTIMEN PUBLIK. S1 thesis, Universitas Mercu Buana Jakarta.
Text (HAL COVER): 01 COVER.pdf (574 kB)
Text (BAB I): 02 BAB 1.pdf (71 kB, restricted to registered users)
Text (BAB II): 03 BAB 2.pdf (402 kB, restricted to registered users)
Text (BAB III): 04 BAB 3.pdf (163 kB, restricted to registered users)
Text (BAB IV): 05 BAB 4.pdf (1 MB, restricted to registered users)
Text (BAB V): 06 BAB 5.pdf (148 kB, restricted to registered users)
Text (DAFTAR PUSTAKA): 07 DAFTAR PUSTAKA.pdf (167 kB, restricted to registered users)
Text (LAMPIRAN): 08 LAMPIRAN.pdf (1 MB, restricted to registered users)
Abstract
The Indonesian government continues to encourage the transformation of e-government-based public services through the SP4N LAPOR! application. However, service quality remains a challenge and user complaints dominate, as reflected in low ratings and negative reviews on the Google Play Store. This research aims to analyze the sentiment of SP4N LAPOR! users systematically and empirically using a combined BERT and LSTM deep learning approach.

Review data for the period 2020-2025 were collected through web scraping, yielding 1,376 reviews, which were then processed through cleansing, tokenizing, normalization, stopword removal, and stemming, along with heuristic-based spam tagging (flagging rather than deleting reviews) to test the robustness of the model. Automatic sentiment labeling using IndoBERT from the Hugging Face library showed a dominance of negative sentiment (62.2%), followed by positive (21.7%) and neutral (16.1%) sentiment.

The model was trained with a batch size of 16 for a maximum of 20 epochs, with early stopping (patience=2) to prevent overfitting. Regularization was applied through dropout on the LSTM (dropout rate 0.2), and the model was optimized with the Adam optimizer using a learning rate of 0.001 and a weight decay of 1e-5. Experiments on the ALL DATA and NON-SPAM dataset variants under various data-splitting scenarios show that the BERT+LSTM model consistently achieves high accuracy: 93-94% on ALL DATA and up to 95% on NON-SPAM, with precision and recall reaching as high as 0.99 on the negative class.

Keywords: Sentiment Analysis, SP4N LAPOR!, BERT, LSTM, Deep Learning, e-Government, User Reviews, Early Stopping
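The preprocessing pipeline described in the abstract (cleansing, tokenizing, normalization, stopword removal, stemming, plus heuristic spam flagging without deletion) can be sketched as follows. This is a minimal illustration, not the thesis code: the stopword list, slang map, suffix-stripping stemmer, and spam heuristic below are placeholder assumptions standing in for fuller Indonesian NLP resources (e.g. a proper stemmer such as Sastrawi).

```python
import re

# Illustrative Indonesian stopword list and slang-normalization map
# (placeholders; the thesis presumably used fuller resources).
STOPWORDS = {"yang", "di", "ke", "dan", "ini", "itu", "sudah"}
SLANG_MAP = {"gak": "tidak", "udah": "sudah", "bgt": "banget", "aplikasinya": "aplikasi"}

def cleanse(text: str) -> str:
    """Lowercase, drop URLs, digits, and punctuation; collapse whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"[^a-z\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def tokenize(text: str) -> list:
    return text.split()

def normalize(tokens: list) -> list:
    """Map informal/slang spellings to standard forms."""
    return [SLANG_MAP.get(t, t) for t in tokens]

def remove_stopwords(tokens: list) -> list:
    return [t for t in tokens if t not in STOPWORDS]

def stem(tokens: list) -> list:
    """Crude suffix stripping as a stand-in for a real Indonesian stemmer."""
    out = []
    for t in tokens:
        for suffix in ("nya", "kan", "an"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        out.append(t)
    return out

def is_spam(raw: str) -> bool:
    """Hypothetical heuristic: flag (never delete) very short or repetitive reviews."""
    tokens = raw.split()
    return len(tokens) < 2 or len(set(tokens)) == 1

def preprocess(raw: str) -> dict:
    """Run the full pipeline; keep the spam flag alongside the tokens."""
    tokens = stem(remove_stopwords(normalize(tokenize(cleanse(raw)))))
    return {"tokens": tokens, "spam": is_spam(raw)}
```

Keeping `spam` as a flag rather than a filter mirrors the abstract's design choice: spam-tagged reviews stay in the ALL DATA variant and are excluded only in the NON-SPAM variant, allowing the robustness comparison between the two.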
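The training configuration is fully specified in the abstract (batch size 16, at most 20 epochs, early stopping with patience=2, LSTM dropout 0.2, Adam with learning rate 0.001 and weight decay 1e-5). The sketch below collects those stated hyperparameters and shows the early-stopping logic in isolation; the BERT+LSTM model itself is not reproduced here, and the simulated epoch loop is an assumption for illustration only.

```python
# Hyperparameters as stated in the abstract.
CONFIG = {
    "batch_size": 16,
    "max_epochs": 20,
    "early_stopping_patience": 2,
    "lstm_dropout": 0.2,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "weight_decay": 1e-5,
}

class EarlyStopping:
    """Stop training once validation loss fails to improve for `patience` epochs."""

    def __init__(self, patience: int = 2):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss: float) -> bool:
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

def train(val_losses, patience=CONFIG["early_stopping_patience"],
          max_epochs=CONFIG["max_epochs"]) -> int:
    """Simulated epoch loop: consume per-epoch validation losses,
    stop early when the loss plateaus, and return the last epoch run."""
    stopper = EarlyStopping(patience)
    for epoch, loss in enumerate(val_losses[:max_epochs], start=1):
        if stopper.step(loss):
            return epoch  # stopped early at this epoch
    return min(len(val_losses), max_epochs)
```

With patience=2, training halts on the second consecutive epoch without improvement, which is how the abstract's cap of 20 epochs can end earlier in practice.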