21
Leveraging Multilingual Transformer for Multiclass Sentiment Analysis in Code-Mixed Data of Low-Resource Languages
Published 2025-01-01 “…Subsequently, the Multilingual Bidirectional Encoder Representations from Transformers (mBERT) model was optimized and trained for multiclass sentiment analysis on the code-mixed data. …”
Get full text
Article -
22
Integrating structured and unstructured data for predicting emergency severity: an association and predictive study using transformer-based natural language processing models
Published 2024-12-01 “…Unstructured data, including chief complaints and reasons for visit, were processed using a Bidirectional Encoder Representations from Transformers (BERT) model. …”
Get full text
Article -
23
Overview of deep learning and large language models in machine translation: a special perspective on the Arabic language
Published 2025-06-01 “…The bidirectional encoder representations from transformers (BERT) model and LLMs are presented to utilize the large amount of textual data to learn translation patterns. …”
Get full text
Article -
24
A deep learning model for prediction of lysine crotonylation sites by fusing multi-features based on multi-head self-attention mechanism
Published 2025-05-01 “…Multiple features are extracted from natural language processing features and hand-crafted features, where natural language processing features include token embedding and positional embedding encoded by a transformer, and hand-crafted features include one-hot encoding, amino acid index, and position-weighted amino acid composition, which are encoded by a bidirectional long short-term memory network. …”
Get full text
Article -
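The abstract above names two of the hand-crafted protein-sequence features. A minimal sketch of what such features look like, assuming a simple linear position weight (the paper's exact weighting scheme is not given here and is a stand-in for illustration only):

```python
# Toy sketch of two hand-crafted sequence features: one-hot encoding and a
# position-weighted amino acid composition. The linear position weight below
# is an assumption for illustration, not the paper's actual scheme.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def one_hot(seq):
    """Encode each residue as a 20-dimensional indicator vector."""
    return [[1 if aa == a else 0 for a in AMINO_ACIDS] for aa in seq]

def position_weighted_composition(seq):
    """Sum each residue's contribution, weighted by normalized position."""
    n = len(seq)
    comp = [0.0] * len(AMINO_ACIDS)
    for i, aa in enumerate(seq):
        comp[AMINO_ACIDS.index(aa)] += (i + 1) / n  # assumed linear weight
    return comp

features = one_hot("ACK")
print(len(features), len(features[0]))  # 3 residues x 20 dimensions
```

In practice these fixed-length vectors are concatenated with the transformer-derived embeddings before classification.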
25
From Extractive to Generative: An Analysis of Automatic Text Summarization Techniques
Published 2025-01-01 “…The review highlights significant milestones in the development of summarization algorithms, including the emergence of Transformer-based models like Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer (GPT), which have significantly improved the quality and coherence of generated summaries. …”
Get full text
Article -
26
Multi-Head Graph Attention Adversarial Autoencoder Network for Unsupervised Change Detection Using Heterogeneous Remote Sensing Images
Published 2025-07-01 “…The MHGAN employs a bidirectional adversarial convolutional autoencoder network to reconstruct and perform style transformation of heterogeneous images. …”
Get full text
Article -
27
Information extraction from green channel textual records on expressways using hybrid deep learning
Published 2024-12-01 “…Eight entities are designed and proposed in the NER processing for the expressway green channel. Three typical pre-trained natural language processing models are utilized and compared to recognize entities and obtain feature vectors, including bidirectional encoder representations from transformers (BERT), ALBERT, and RoBERTa. …”
Get full text
Article -
28
Identifying Non-Functional Requirements From Unconstrained Documents Using Natural Language Processing and Machine Learning Approaches
Published 2025-01-01 “…In our approach, features were extracted from the requirement sentences using four different natural language processing methods including statistical and state-of-the-art semantic analysis presented by Google word2vec and bidirectional encoder representations from transformers models. …”
Get full text
Article -
29
A fake news detection model using the integration of multimodal attention mechanism and residual convolutional network
Published 2025-07-01 “…Baseline models used for comparison include Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized Bidirectional Encoder Representations from Transformers Approach (RoBERTa), Generalized Autoregressive Pretraining for Language Understanding (XLNet), Enhanced Representation through Knowledge Integration (ERNIE), and Generative Pre-trained Transformer 3.5 (GPT-3.5). …”
Get full text
Article -
30
Rumor detection using dual embeddings and text-based graph convolutional network
Published 2024-11-01 “…This model uses dual embedding from two pre-trained transformer models: generative pre-trained transformers (GPT) and bidirectional encoder representations from transformers (BERT). …”
Get full text
Article -
31
Rethinking Technological Investment and Cost-Benefit: A Software Requirements Dependency Extraction Case Study
Published 2025-01-01 “…Specifically, we extract dependencies from textual descriptions of software requirements and analyze the performance of two state-of-the-art ML techniques: Random Forest and Bidirectional Encoder Representations from Transformers (BERT), an encoder-only Large Language Model. …”
Get full text
Article -
32
A Hybrid Deep Learning Approach for Cotton Plant Disease Detection Using BERT-ResNet-PSO
Published 2025-06-01 “…It is, therefore, crucial to accurately identify leaf diseases in cotton plants to prevent any negative effects on yield. This paper presents a hybrid deep learning approach based on Bidirectional Encoder Representations from Transformers with Residual network and particle swarm optimization (BERT-ResNet-PSO) for detecting cotton plant diseases. …”
Get full text
Article -
33
EYE-Llama, an in-domain large language model for ophthalmology
Published 2025-07-01 “…We evaluated EYE-Llama against Llama 2, Llama 3, Meditron, ChatDoctor, ChatGPT, and several other LLMs. Using BERT (Bidirectional Encoder Representations from Transformers) score, BART (Bidirectional and Auto-Regressive Transformer) score, and BLEU (Bilingual Evaluation Understudy) metrics, EYE-Llama achieved superior scores. …”
Get full text
Article -
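The entry above evaluates answers with the BLEU metric. A minimal sketch of the idea, simplified to unigram precision with a brevity penalty (standard BLEU combines 1- through 4-gram precisions in a geometric mean):

```python
import math
from collections import Counter

def bleu1(reference, candidate):
    """Simplified sentence-level BLEU: clipped unigram precision times a
    brevity penalty. (Standard BLEU uses 1- to 4-gram precisions.)"""
    ref, cand = reference.split(), candidate.split()
    ref_counts, cand_counts = Counter(ref), Counter(cand)
    # Clip each candidate word's count by its count in the reference.
    clipped = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    precision = clipped / len(cand)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = bleu1("the cat sat on the mat", "the cat on the mat")
print(round(score, 4))  # all unigrams match, but the brevity penalty applies
```

BERTScore, by contrast, compares contextual embeddings of the two texts rather than surface n-grams, which makes it more tolerant of paraphrase.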
34
Tackling misinformation in mobile social networks: a BERT-LSTM approach for enhancing digital literacy
Published 2025-01-01 “…Early detection of misinformation is essential yet challenging, particularly in contexts where initial content propagation lacks user feedback and engagement data. This study presents a novel hybrid model that combines Bidirectional Encoder Representations from Transformers (BERT) with Long Short-Term Memory (LSTM) networks to enhance the detection of misinformation using only textual content. …”
Get full text
Article -
35
Needle in a haystack: Harnessing AI in drug patent searches and prediction.
Published 2024-01-01 “…The aim is primarily that of demonstrating how the proverbial needle in a haystack was identified, namely through leveraging the superb pattern-recognition abilities of the BERT (Bidirectional Encoder Representations from Transformers) algorithm. …”
Get full text
Article -
36
Sentiment Analysis of X Users Toward Electric Motorcycles Using SVM and BERT Algorithms
Published 2025-08-01 “…This study presents a comparative analysis of Support Vector Machine (SVM) and Bidirectional Encoder Representations from Transformers (BERT) for sentiment analysis on electric motorcycles in Indonesia using data from the social media platform X, formerly known as Twitter. …”
Get full text
Article -
37
Twitter User Account Classification to Gain Insights into Communication Dynamics and Public Awareness During Tampa Bay's Red Tide Events
Published 2024-05-01 “…Having used several text classification algorithms and feature preprocessing approaches, Support Vector Machine with Bidirectional Encoder Representations from Transformers (BERT) yielded the best cross-validation performance in both accuracy (90%) and versatility (unweighted F1 score of 0.67). …”
Get full text
Article -
38
Sporting a virtual future: exploring sports and virtual reality patents using deep learning-based analysis
Published 2025-06-01 “…Using patent big data, we introduce SportsBERT, a bidirectional encoder representation from transformers (BERT)-based algorithm tailored for enhanced natural language processing in sports-related knowledge-based documents. …”
Get full text
Article -
39
VitroBERT: modeling DILI by pretraining BERT on in vitro data
Published 2025-08-01 “…We therefore introduce VitroBERT, a bidirectional encoder representations from transformers (BERT) model pretrained on large-scale in vitro assay profiles to generate biologically informed molecular embeddings. …”
Get full text
Article -
40
A Deep Learning Model for Automatic Citation Document Recommendation in Non-Obviousness Judgment: Using BERT-for-patents and Contrastive Learning
Published 2025-03-01 “…Six models were trained based on the bidirectional encoder representations from transformers (BERT), and the performances were compared. …”
Get full text
Article
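The last entry recommends citation documents by comparing learned BERT embeddings. The retrieval step itself reduces to ranking candidates by vector similarity; a minimal sketch with toy vectors standing in for the actual BERT-for-patents embeddings (the vectors and document names below are illustrative assumptions, not data from the paper):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings: rank candidate citations by similarity to the query patent.
query = [0.9, 0.1, 0.3]
candidates = {"doc_a": [0.8, 0.2, 0.3], "doc_b": [0.1, 0.9, 0.1]}
ranked = sorted(candidates, key=lambda d: cosine(query, candidates[d]),
                reverse=True)
print(ranked[0])  # doc_a points in nearly the same direction as the query
```

Contrastive training shapes the embedding space so that cited (positive) pairs score high under exactly this kind of similarity while non-cited (negative) pairs score low.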