Showing 101 - 120 results of 2,359 for search 'improve answer', query time: 0.11s
  1. 101

    Interactive instructional teaching method (IITM); contribution towards students’ ability in answering unfamiliar types questions of buffer solution by Habiddin Habiddin, Ulfa Rafika, Utomo Yudhi

    Published 2023-12-01
    “…Students’ adversity quotient correlated positively to their ability to answer unfamiliar questions of buffer solutions. Meanwhile, the effect of students’ learning interests and adversity quotient on students’ ability to answer unfamiliar questions was found to be uncorrelated.…”
    Get full text
    Article
  2. 102
  3. 103
  4. 104

    Enhancing vaccine communication in social Q&A: identifying readily applicable factors for answer acceptance on medical sciences stack exchange by Hengyi Fu

    Published 2025-03-01
    “…This study investigates factors influencing the acceptance of answers to vaccine-related questions on social Q&A platforms, aiming to improve online vaccine communication. …”
    Get full text
    Article
  5. 105

    An Empirical Evaluation of Large Language Models on Consumer Health Questions by Moaiz Abrar, Yusuf Sermet, Ibrahim Demir

    Published 2025-02-01
    “…Conclusions: Current small or medium sized LLMs struggle to provide accurate answers to consumer health questions and must be significantly improved.…”
    Get full text
    Article
  6. 106
  7. 107

    Research on a traditional Chinese medicine case-based question-answering system integrating large language models and knowledge graphs by Yuchen Duan, Qingqing Zhou, Yu Li, Chi Qin, Ziyang Wang, Hongxing Kan, Jili Hu

    Published 2025-01-01
    “…This approach could play a crucial role in modernizing TCM research and improving access to clinical insights. Future research may explore expanding the dataset and refining the query system for broader applications.…”
    Get full text
    Article
  8. 108

    Assessing the performance of zero-shot visual question answering in multimodal large language models for 12-lead ECG image interpretation by Tomohisa Seki, Yoshimasa Kawazoe, Hiromasa Ito, Yu Akagi, Toru Takiguchi, Kazuhiko Ohe

    Published 2025-02-01
    “…These findings suggest a need for improved control over image hallucination and indicate that performance evaluation using the percentage of correct answers to multiple-choice questions may not be sufficient for performance assessment in VQA tasks.…”
    Get full text
    Article
  9. 109

    Analyzing Diagnostic Reasoning of Vision–Language Models via Zero-Shot Chain-of-Thought Prompting in Medical Visual Question Answering by Fatema Tuj Johora Faria, Laith H. Baniata, Ahyoung Choi, Sangwoo Kang

    Published 2025-07-01
    “…Medical Visual Question Answering (MedVQA) lies at the intersection of computer vision, natural language processing, and clinical decision-making, aiming to generate accurate responses from medical images paired with complex inquiries. …”
    Get full text
    Article
  10. 110

    From Questions to Answers: Teaching Evidence-Based Medicine Question Formulation and Literature Searching Skills to First-Year Medical Students by Juliana Magro, Caitlin Plovnick, Gregory Laynor, Joey Nicholson

    Published 2025-02-01
    “…After the workshop, students completed a posttest. Students showed improvement in differentiating background and foreground questions (p < .001), formulating answerable clinical questions (p < .001), and developing appropriate database searches (p < .001 and p = .002). …”
    Get full text
    Article
  11. 111

    The Use of the Cloze Test in Reading Comprehension Assessment in Brazil: Post-Pandemic Challenges by Flávia Oliveira Freitas, Gislane Evangelista dos Santos, Raquel Meister Ko Freitag

    Published 2025-05-01
    “…The criteria for analyzing these answers are based on Taylor’s (1953) initial exact-answer proposal (Brown 1980; 2013), added to other assessment instruments used in the Psychology field. …”
    Get full text
    Article
  12. 112

    How reliable are ChatGPT and Google’s answers to frequently asked questions about unicondylar knee arthroplasty from a scientific perspective? by Ali Aydilek, Ömer Levent Karadamar

    Published 2025-06-01
    “…Results A total of 83.3% of ChatGPT’s responses were found to be consistent with academic sources, whereas this rate was 58.3% for Google. ChatGPT’s answers averaged 142.8 words, compared to Google’s 85.6-word average. …”
    Get full text
    Article
  13. 113

    Accuracy, appropriateness, and readability of ChatGPT-4 and ChatGPT-3.5 in answering pediatric emergency medicine post-discharge questions by Mitul Gupta, Aiza Kahlun, Ria Sur, Pramiti Gupta, Andrew Kienstra, Winnie Whitaker, Graham Aufricht

    Published 2025-04-01
    “…This study compared 2 versions of ChatGPT in answering post-discharge follow-up questions in the area of pediatric emergency medicine (PEM). …”
    Get full text
    Article
  14. 114
  15. 115

    Explanation and design of mentoring in order to promote human resources activities in the National Company of Southern Oil-bearing Regions by Ali Ghasemi Ghasemvand, Vahid Chenari, Mehrdad Hamari, Seyyed Ali Akbar Ahmadi

    Published 2024-11-01
    “…Abstract The aim of the current research is to explain and design mentoring in order to improve human resources activities in the National Company of Southern Oil Regions. …”
    Get full text
    Article
  16. 116

    Mother: a maternal online technology for health care dataset by Odongo Steven Eyobu, Brian Angoda Nyanga, Lukman Bukenya, Daniel Ongom, Tonny J. Oyana

    Published 2025-04-01
    “…The answers to the questions were provided and validated by professional medical personnel. …”
    Get full text
    Article
  17. 117

    Arch-Eval benchmark for assessing Chinese architectural domain knowledge in large language models by Jie Wu, Mincheng Jiang, Juntian Fan, Shimin Li, Hongtao Xu, Ye Zhao

    Published 2025-04-01
    “…The results reveal significant differences in the performance of these models in the domain of architectural knowledge question-answering. Our findings show that the average accuracy difference between Chain-of-Thought (COT) evaluation and Answer-Only (AO) evaluation is less than 3%, but the response time for COT is significantly longer, extending to 26 times that of AO (62.23 seconds per question vs. 2.38 seconds per question). …”
    Get full text
    Article
  18. 118

    Comparative performance of ChatGPT, Gemini, and final-year emergency medicine clerkship students in answering multiple-choice questions: implications for the use of AI in medical e... by Shaikha Nasser Al-Thani, Shahzad Anjum, Zain Ali Bhutta, Sarah Bashir, Muhammad Azhar Majeed, Anfal Sher Khan, Khalid Bashir

    Published 2025-08-01
    “…While these tools show promise for answering multiple-choice questions (MCQs), their efficacy in specialized domains, such as Emergency Medicine (EM) clerkship, remains underexplored. …”
    Get full text
    Article
  19. 119

    Enhancing responses from large language models with role-playing prompts: a comparative study on answering frequently asked questions about total knee arthroplasty by Yi-Chen Chen, Sheng-Hsun Lee, Huan Sheu, Sheng-Hsuan Lin, Chih-Chien Hu, Shih-Chen Fu, Cheng-Pang Yang, Yu-Chih Lin

    Published 2025-05-01
    “…This study aims to evaluate and compare the performance of these LLMs in answering frequently asked questions (FAQs) about Total Knee Arthroplasty (TKA), with a specific focus on the impact of role-playing prompts. …”
    Get full text
    Article
  20. 120