Show simple item record

dc.contributor.author: Madrid-García, Alfredo
dc.contributor.author: Rosales Rosado, Zulema
dc.contributor.author: Freites-Nuñez, Dalifer
dc.contributor.author: Pérez-Sancristóbal, Inés
dc.contributor.author: Pato-Cour, Esperanza
dc.contributor.author: Plasencia-Rodríguez, Chamaida
dc.contributor.author: Cabeza Osorio, Luis
dc.contributor.author: Abasolo-Alcázar, Lydia
dc.contributor.author: León-Mateos, Leticia
dc.contributor.author: Fernández-Gutiérrez, Benjamín
dc.contributor.author: Rodríguez-Rodríguez, Luis
dc.date.accessioned: 2024-02-13T08:48:04Z
dc.date.available: 2024-02-13T08:48:04Z
dc.date.issued: 2023
dc.identifier.issn: 2045-2322
dc.identifier.uri: https://hdl.handle.net/10641/3971
dc.description.abstract: The emergence of large language models (LLMs) with remarkable performance, such as ChatGPT and GPT-4, has led to unprecedented uptake in the population. One of their most promising and most studied applications concerns education, since their ability to understand and generate human-like text creates a multitude of opportunities for enhancing educational practices and outcomes. The objective of this study is twofold: to assess the accuracy of ChatGPT/GPT-4 in answering rheumatology questions from the Spanish access exam to specialized medical training (MIR), and to evaluate the medical reasoning these LLMs follow in answering those questions. For this purpose, a dataset of 145 rheumatology-related questions extracted from the exams held between 2010 and 2023, RheumaMIR, was created, used to prompt the LLMs, and made publicly available. Six rheumatologists with clinical and teaching experience rated the chatbots' clinical reasoning on a 5-point Likert scale, and their degree of agreement was analyzed. The association between variables that could influence the models' accuracy (i.e., year of the exam question, disease addressed, type of question, and gender) was studied. ChatGPT demonstrated a high level of performance in both accuracy (66.43%) and clinical reasoning (median (Q1–Q3): 4.5 (2.33–4.67)). However, GPT-4 performed better, with an accuracy of 93.71% and a median clinical reasoning score of 4.67 (4.5–4.83). These findings suggest that LLMs may serve as valuable tools in rheumatology education, aiding in exam preparation and supplementing traditional teaching methods.
dc.language.iso: eng
dc.publisher: Scientific Reports
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Spain
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.title: Harnessing ChatGPT and GPT-4 for evaluating the rheumatology questions of the Spanish access exam to specialized medical training.
dc.type: journal article
dc.type.hasVersion: AM (accepted manuscript)
dc.rights.accessRights: open access
dc.description.extent: 1521 KB
dc.identifier.doi: 10.1038/s41598-023-49483-6
dc.relation.publisherversion: https://www.nature.com/articles/s41598-023-49483-6
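
The abstract above describes prompting ChatGPT/GPT-4 with the 145 RheumaMIR questions and scoring the answers for accuracy. As an illustration only, here is a minimal sketch of how such an evaluation could be scripted. It assumes the OpenAI Python SDK and a hypothetical CSV layout for the dataset (columns question, options, answer); neither the file name nor the prompt wording comes from the record above.

# Minimal sketch (not the authors' code): query a chat model with one
# MIR-style multiple-choice question per request and compute accuracy.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
import csv

from openai import OpenAI

client = OpenAI()


def ask_model(question: str, options: str, model: str = "gpt-4") -> str:
    """Send a single exam question and return the model's raw reply."""
    prompt = (
        "Answer this Spanish MIR rheumatology question. Reply with the "
        "letter of the correct option, then a brief clinical justification.\n\n"
        f"{question}\n{options}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def accuracy(rows: list[dict], model: str = "gpt-4") -> float:
    """Fraction of replies that start with the expected option letter.

    Real grading would need more robust answer extraction than this.
    """
    hits = 0
    for row in rows:
        reply = ask_model(row["question"], row["options"], model)
        if reply.strip().upper().startswith(row["answer"].strip().upper()):
            hits += 1
    return hits / len(rows)


# Hypothetical file name and column layout for the RheumaMIR dataset.
with open("rheumamir.csv", newline="", encoding="utf-8") as f:
    questions = list(csv.DictReader(f))

print(f"Accuracy: {accuracy(questions):.2%}")

The exact prompt wording, model versions, and grading rules used in the study would need to be taken from the paper itself.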


Files in this item

File                      Size      Format
s41598-023-49483-6.pdf    1.484 MB  PDF

This item appears in the following collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 3.0 Spain (CC BY-NC-ND 3.0 ES).