Showing 21 - 40 of 66 results for search 'Bidirectional encoder presentation from transformers' (query time: 0.13s)
  21. Leveraging Multilingual Transformer for Multiclass Sentiment Analysis in Code-Mixed Data of Low-Resource Languages by Muhammad Kashif Nazir, Cm Nadeem Faisal, Muhammad Asif Habib, Haseeb Ahmad

    Published 2025-01-01
    “…Subsequently, the Multilingual Bidirectional Encoder Representations from Transformers (mBERT) model was optimized and trained for multiclass sentiment analysis on the code-mixed data. …”
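
    A minimal sketch of the mBERT fine-tuning that entry 21 describes, using the Hugging Face transformers library. The three-way label set, the code-mixed example sentence, and all hyperparameters are illustrative assumptions, not details from the paper.

      import torch
      from transformers import AutoModelForSequenceClassification, AutoTokenizer

      # Multilingual BERT with a fresh 3-class sentiment head (label set assumed).
      tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
      model = AutoModelForSequenceClassification.from_pretrained(
          "bert-base-multilingual-cased", num_labels=3)

      # Hypothetical code-mixed (Roman Urdu/English) example.
      batch = tokenizer(["yeh movie bahut achi thi, loved it!"],
                        return_tensors="pt", padding=True, truncation=True)
      labels = torch.tensor([2])  # assumed index for "positive"

      loss = model(**batch, labels=labels).loss
      loss.backward()  # an optimizer step would follow in a real training loop
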
  22. Integrating structured and unstructured data for predicting emergency severity: an association and predictive study using transformer-based natural language processing models by Xingyu Zhang, Yanshan Wang, Yun Jiang, Charissa B. Pacella, Wenbin Zhang

    Published 2024-12-01
    “…Unstructured data, including chief complaints and reasons for visit, were processed using a Bidirectional Encoder Representations from Transformers (BERT) model. …”
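
    A minimal sketch of the fusion idea behind entry 22: encode the free-text chief complaint with BERT and concatenate its [CLS] vector with structured features ahead of a classification head. The checkpoint, the feature columns, and the five-level severity output are assumptions, not the paper's configuration.

      import torch
      from transformers import AutoModel, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      encoder = AutoModel.from_pretrained("bert-base-uncased")

      text = ["chest pain radiating to the left arm"]    # chief complaint
      structured = torch.tensor([[0.92, 37.9, 118.0]])   # e.g. O2 sat, temp, heart rate

      with torch.no_grad():
          enc = tokenizer(text, return_tensors="pt", truncation=True)
          cls = encoder(**enc).last_hidden_state[:, 0]   # (1, 768) [CLS] vector

      fused = torch.cat([cls, structured], dim=1)        # text + structured features
      head = torch.nn.Linear(fused.shape[1], 5)          # 5 triage levels (assumed)
      logits = head(fused)
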
  23. Overview of deep learning and large language models in machine translation: a special perspective on the Arabic language by Sanaa Abou Elhamayed, Mohamed Nour

    Published 2025-06-01
    “…The bidirectional encoder representations from transformers (BERT) model and LLMs are presented to utilize the large amount of textual data to learn translation patterns. …”
  24. A deep learning model for prediction of lysine crotonylation sites by fusing multi-features based on multi-head self-attention mechanism by Yunyun Liang, Minwei Li

    Published 2025-05-01
    “…Multiple features are extracted from two streams: natural language processing features (token and positional embeddings encoded by a transformer) and hand-crafted features (one-hot, amino acid index, and position-weighted amino acid composition) encoded by a bidirectional long short-term memory network. …”
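
    A minimal sketch of the two ingredients entry 24 names: hand-crafted per-residue features encoded by a bidirectional LSTM, then fused with multi-head self-attention. The window size, feature dimension, and head count are illustrative assumptions.

      import torch
      import torch.nn as nn

      seq_len, feat_dim, hidden = 29, 21, 64  # e.g. a 29-residue window of one-hot features

      bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
      attn = nn.MultiheadAttention(embed_dim=2 * hidden, num_heads=4, batch_first=True)
      head = nn.Linear(2 * hidden, 1)

      x = torch.randn(1, seq_len, feat_dim)           # stand-in for one-hot/AAindex features
      h, _ = bilstm(x)                                # (1, seq_len, 2*hidden)
      fused, _ = attn(h, h, h)                        # self-attention across the window
      score = torch.sigmoid(head(fused.mean(dim=1)))  # crotonylation-site probability
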
  25. From Extractive to Generative: An Analysis of Automatic Text Summarization Techniques by Liu Zixu

    Published 2025-01-01
    “…The review highlights significant milestones in the development of summarization algorithms, including the emergence of Transformer-based models like Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer (GPT), which have significantly improved the quality and coherence of generated summaries. …”
  26. Multi-Head Graph Attention Adversarial Autoencoder Network for Unsupervised Change Detection Using Heterogeneous Remote Sensing Images by Meng Jia, Xiangyu Lou, Zhiqiang Zhao, Xiaofeng Lu, Zhenghao Shi

    Published 2025-07-01
    “…The MHGAN employs a bidirectional adversarial convolutional autoencoder network to reconstruct and perform style transformation of heterogeneous images. …”
  27. Information extraction from green channel textual records on expressways using hybrid deep learning by Jiaona Chen, Jing Zhang, Weijun Tao, Yinli Jin, Heng Fan

    Published 2024-12-01
    “…Eight entities are designed and proposed in the NER processing for the expressway green channel. Three typical pre-trained natural language processing models are utilized and compared to recognize entities and obtain feature vectors, including bidirectional encoder representations from transformers (BERT), ALBERT, and RoBERTa. …”
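
    A minimal sketch of transformer-based NER as in entry 27: a token-classification head over BERT assigns a label to every token. The label count (eight entity types under a BIO scheme plus O) and the example record are assumptions; the actual entity schema and the language of the records are the paper's.

      import torch
      from transformers import AutoModelForTokenClassification, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      model = AutoModelForTokenClassification.from_pretrained(
          "bert-base-uncased", num_labels=17)  # 8 entity types x B/I + O (assumed)

      record = ["Truck carrying fresh vegetables entered the green channel at 06:40"]
      enc = tokenizer(record, return_tensors="pt")
      with torch.no_grad():
          tags = model(**enc).logits.argmax(dim=-1)  # one label index per token
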
  28. Identifying Non-Functional Requirements From Unconstrained Documents Using Natural Language Processing and Machine Learning Approaches by Qais A. Shreda, Abualsoud A. Hanani

    Published 2025-01-01
    “…In our approach, features were extracted from the requirement sentences using four different natural language processing methods including statistical and state-of-the-art semantic analysis presented by Google word2vec and bidirectional encoder representations from transformers models. …”
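
    A minimal sketch of the word2vec half of entry 28's feature extraction: averaging word vectors turns a requirement sentence into a fixed-length feature vector for a downstream classifier. The tiny corpus and vector size are toy assumptions (gensim 4.x API).

      import numpy as np
      from gensim.models import Word2Vec

      requirements = [
          ["the", "system", "shall", "respond", "within", "two", "seconds"],
          ["users", "shall", "authenticate", "with", "a", "password"],
      ]
      w2v = Word2Vec(requirements, vector_size=50, min_count=1, epochs=50)

      def sentence_vector(tokens):
          # Mean of the word vectors found in the vocabulary.
          vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
          return np.mean(vecs, axis=0)

      features = sentence_vector(["the", "system", "shall", "respond"])
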
  29. A fake news detection model using the integration of multimodal attention mechanism and residual convolutional network by Ying Lu, Naiwei Yao

    Published 2025-07-01
    “…Baseline models used for comparison include Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized Bidirectional Encoder Representations from Transformers Approach (RoBERTa), Generalized Autoregressive Pretraining for Language Understanding (XLNet), Enhanced Representation through Knowledge Integration (ERNIE), and Generative Pre-trained Transformer 3.5 (GPT-3.5). …”
  30. Rumor detection using dual embeddings and text-based graph convolutional network by Barsha Pattanaik, Sourav Mandal, Rudra M. Tripathy, Arif Ahmed Sekh

    Published 2024-11-01
    “…This model uses dual embedding from two pre-trained transformer models: generative pre-trained transformers (GPT) and bidirectional encoder representations from transformers (BERT). …”
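
    A minimal sketch of the dual-embedding idea in entry 30: represent one post with both a BERT and a GPT encoder and concatenate the two vectors. Mean pooling and the example post are assumptions, and the text-based graph convolutional network that consumes the result is omitted.

      import torch
      from transformers import AutoModel, AutoTokenizer

      def embed(name, text):
          tok = AutoTokenizer.from_pretrained(name)
          model = AutoModel.from_pretrained(name)
          with torch.no_grad():
              out = model(**tok([text], return_tensors="pt"))
          return out.last_hidden_state.mean(dim=1)  # (1, hidden) mean-pooled vector

      post = "Breaking: dam collapse floods the city center"  # hypothetical rumor
      dual = torch.cat([embed("bert-base-uncased", post),
                        embed("gpt2", post)], dim=1)  # (1, 768 + 768) dual embedding
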
  31. Rethinking Technological Investment and Cost-Benefit: A Software Requirements Dependency Extraction Case Study by Gouri Ginde, Guenther Ruhe, Chad Saunders

    Published 2025-01-01
    “…Specifically, we extract dependencies from textual descriptions of software requirements and analyze the performance of two state-of-the-art ML techniques: Random Forest and Bidirectional Encoder Representations from Transformers (BERT), an encoder-only large language model. …”
  32. A Hybrid Deep Learning Approach for Cotton Plant Disease Detection Using BERT-ResNet-PSO by Chetanpal Singh, Santoso Wibowo, Srimannarayana Grandhi

    Published 2025-06-01
    “…It is, therefore, crucial to accurately identify leaf diseases in cotton plants to prevent any negative effects on yield. This paper presents a hybrid deep learning approach based on Bidirectional Encoder Representations from Transformers with Residual network and particle swarm optimization (BERT-ResNet-PSO) for detecting cotton plant diseases. …”
  33. EYE-Llama, an in-domain large language model for ophthalmology by Tania Haghighi, Sina Gholami, Jared Todd Sokol, Enaika Kishnani, Adnan Ahsaniyan, Holakou Rahmanian, Fares Hedayati, Theodore Leng, Minhaj Nur Alam

    Published 2025-07-01
    “…We evaluated EYE-Llama against Llama 2, Llama 3, Meditron, ChatDoctor, ChatGPT, and several other LLMs. Using BERT (Bidirectional Encoder Representations from Transformers) score, BART (Bidirectional and Auto-Regressive Transformer) score, and BLEU (Bilingual Evaluation Understudy) metrics, EYE-Llama achieved superior scores. …”
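
    A minimal sketch of two of the metrics named in entry 33: BLEU via NLTK and BERTScore via the bert-score package (pip install nltk bert-score). The question-answer pair is invented for illustration, not taken from the EYE-Llama evaluation.

      from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu
      from bert_score import score as bertscore

      reference = "Glaucoma is managed by lowering intraocular pressure."
      candidate = "Treatment of glaucoma focuses on reducing intraocular pressure."

      bleu = sentence_bleu([reference.split()], candidate.split(),
                           smoothing_function=SmoothingFunction().method1)
      P, R, F1 = bertscore([candidate], [reference], lang="en")
      print(f"BLEU={bleu:.3f}  BERTScore-F1={F1.item():.3f}")
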
  34. Tackling misinformation in mobile social networks: a BERT-LSTM approach for enhancing digital literacy by Jun Wang, Xiulai Wang, Airong Yu

    Published 2025-01-01
    “…Early detection of misinformation is essential yet challenging, particularly in contexts where initial content propagation lacks user feedback and engagement data. This study presents a novel hybrid model that combines Bidirectional Encoder Representations from Transformers (BERT) with Long Short-Term Memory (LSTM) networks to enhance the detection of misinformation using only textual content. …”
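
    A minimal sketch of a BERT + LSTM hybrid like the one entry 34 describes: BERT token representations feed an LSTM whose final hidden state is classified as misinformation or not. Layer sizes and the example text are assumptions.

      import torch
      import torch.nn as nn
      from transformers import AutoModel, AutoTokenizer

      class BertLstmClassifier(nn.Module):
          def __init__(self, hidden=128):
              super().__init__()
              self.bert = AutoModel.from_pretrained("bert-base-uncased")
              self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 2)  # genuine vs. misinformation

          def forward(self, **enc):
              tokens = self.bert(**enc).last_hidden_state  # (B, T, 768)
              _, (h, _) = self.lstm(tokens)                # final hidden state
              return self.head(h[-1])                      # (B, 2) logits

      tok = AutoTokenizer.from_pretrained("bert-base-uncased")
      model = BertLstmClassifier()
      logits = model(**tok(["Miracle cure spreads online"], return_tensors="pt"))
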
  35. Needle in a haystack: Harnessing AI in drug patent searches and prediction by Leonardo Costa Ribeiro, Valbona Muzaka

    Published 2024-01-01
    “…The aim is primarily that of demonstrating how the proverbial needle in a haystack was identified, namely through leveraging the superb pattern-recognition abilities of the BERT (Bidirectional Encoder Representations from Transformers) algorithm. …”
  36. Sentiment Analysis of X Users Toward Electric Motorcycles Using SVM and BERT Algorithms by Calvin Adiwinata, Afiyati Afiyati

    Published 2025-08-01
    “…This study presents a comparative analysis of Support Vector Machine (SVM) and Bidirectional Encoder Representations from Transformers (BERT) for sentiment analysis on electric motorcycles in Indonesia using data from the social media platform X, formerly known as Twitter. …”
  37. Twitter User Account Classification to Gain Insights into Communication Dynamics and Public Awareness During Tampa Bay's Red Tide Events by Andrey Skripnikov, Tania Roy, Fehmi Neffati, Melvin Adkins, Marcus Beck

    Published 2024-05-01
    “…Having used several text classification algorithms and feature preprocessing approaches, Support Vector Machine with Bidirectional Encoder Representations from Transformers (BERT) yielded the best cross-validation performance in both accuracy (90%) and versatility (unweighted F1 score of 0.67). …”
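
    A minimal sketch of the SVM-on-BERT-features pipeline that entries 36 and 37 compare against other classifiers: sentence vectors from BERT feed a scikit-learn SVM. The tweets and labels are toy assumptions.

      import torch
      from sklearn.svm import SVC
      from transformers import AutoModel, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("bert-base-uncased")
      bert = AutoModel.from_pretrained("bert-base-uncased")

      def embed(texts):
          with torch.no_grad():
              out = bert(**tok(texts, return_tensors="pt", padding=True, truncation=True))
          return out.last_hidden_state[:, 0].numpy()  # [CLS] vectors

      X = embed(["Red tide is back on the beach", "Great surf conditions today"])
      y = [1, 0]  # 1 = red-tide related (assumed labels)
      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict(embed(["Dead fish washing ashore again"])))
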
  38. Sporting a virtual future: exploring sports and virtual reality patents using deep learning-based analysis by Jea Woog Lee, Sangmin Song, JungMin Yun, Doug Hyun Han, YoungBin Kim

    Published 2025-06-01
    “…Using patent big data, we introduce SportsBERT, a bidirectional encoder representation from transformers (BERT)-based algorithm tailored for enhanced natural language processing in sports-related knowledge-based documents. …”
  39. VitroBert: modeling DILI by pretraining BERT on in vitro data by Muhammad Arslan Masood, Anamya Ajjolli Nagaraja, Katia Belaid, Natalie Mesens, Hugo Ceulemans, Samuel Kaski, Dorota Herman, Markus Heinonen

    Published 2025-08-01
    “…We therefore introduce VitroBERT, a bidirectional encoder representations from transformers (BERT) model pretrained on large-scale in vitro assay profiles to generate biologically informed molecular embeddings. …”
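
    Entries 38 and 39 share one recipe: continue BERT's masked-language-model pretraining on an in-domain corpus before fine-tuning. A minimal Hugging Face sketch follows; the two corpus lines are invented stand-ins for sports-patent and in vitro assay text.

      from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                                DataCollatorForLanguageModeling)

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
      collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

      corpus = ["A head-mounted display with motion tracking for athlete training.",
                "Assay profile: hepatocyte viability reduced at 10 uM exposure."]
      examples = [tokenizer(t, truncation=True) for t in corpus]
      batch = collator(examples)  # masks 15% of tokens and builds MLM labels
      loss = model(**batch).loss  # domain-adaptive pretraining objective
      loss.backward()
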
  40. A Deep Learning Model for Automatic Citation Document Recommendation in Non-Obviousness Judgment: Using BERT-for-patents and Contrastive Learning by Dongkun Yoo, Jiheon Han

    Published 2025-03-01
    “…Six models were trained based on the bidirectional encoder representations from transformers (BERT), and the performances were compared. …”
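
    A minimal sketch of the contrastive setup in entry 40: one encoder maps a claim and its cited prior-art document close together and non-citations apart. The in-batch cross-entropy (InfoNCE-style) loss and temperature are standard choices assumed here, bert-base-uncased stands in for BERT-for-patents, and the texts are invented.

      import torch
      import torch.nn.functional as F
      from transformers import AutoModel, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      encoder = AutoModel.from_pretrained("bert-base-uncased")

      def embed(texts):
          enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
          return F.normalize(encoder(**enc).last_hidden_state[:, 0], dim=1)

      claim = embed(["A battery electrode comprising silicon nanowires."])
      docs = embed(["Silicon nanowire anode fabrication for lithium cells.",
                    "A method for brewing coffee at reduced pressure."])

      logits = claim @ docs.T / 0.05                     # temperature-scaled similarity
      loss = F.cross_entropy(logits, torch.tensor([0]))  # the true citation is index 0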