ChatGPT-Medical guidance
Description
The purpose of this study was to assess ChatGPT's ability, and its potential risks, in providing medical guidance for patients with metabolic dysfunction-associated fatty liver disease (MAFLD). The study was conducted in February 2023. We simulated questions raised by patients with MAFLD and tested 110 questions covering six aspects: concept, diagnosis, progression, prevention, treatment, and comorbidity. Each question was submitted to ChatGPT three times, and the appropriateness and consistency of the responses were evaluated. The data set presents the questions and their corresponding answers, together with the judgments of consistency and appropriateness and the reasons for those judgments.
Files
Steps to reproduce
We simulated questions raised by patients with MAFLD and tested 110 questions covering six aspects: concept, diagnosis, progression, prevention, treatment, and comorbidity. Each question was submitted to ChatGPT three times. The responses were evaluated in two respects:
1. Appropriateness: a question's answers were judged inappropriate if any one of them was incorrect, i.e. contained a demonstrably false statement, an inaccurate diagnosis, or inappropriate suggestions.
2. Consistency: agreement among the three answers was judged at a rough level; if they were inconsistent, the answers were considered unreliable.
Since our main focus was whether ChatGPT can provide appropriate medical guidance to patients, we paid little attention to the accuracy of, or controversy around, the underlying medical mechanisms, as these would not significantly affect a patient's understanding. Sentences that might affect patients' medical-related behavior were given more attention.
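The two evaluation rules above can be sketched as a small tabulation script. The record fields, example questions, and judgment labels below are illustrative assumptions, not the authors' actual data schema; only the decision logic (inappropriate if any of the three answers is incorrect, unreliable if the three answers disagree) comes from the description.

```python
# Minimal sketch of the evaluation protocol described above.
# Field names, categories, and example questions are hypothetical.
from dataclasses import dataclass

@dataclass
class QuestionRecord:
    question: str
    category: str                # one of: concept, diagnosis, progression,
                                 # prevention, treatment, comorbidity
    answers_correct: list[bool]  # reviewer judgment for each of the 3 answers
    answers_consistent: bool     # rough-level agreement among the 3 answers

def appropriateness(rec: QuestionRecord) -> str:
    # Inappropriate if ANY of the three answers is incorrect
    # (false statement, inaccurate diagnosis, or bad suggestion).
    return "appropriate" if all(rec.answers_correct) else "inappropriate"

def reliability(rec: QuestionRecord) -> str:
    # Inconsistent answers across the three submissions => unreliable.
    return "reliable" if rec.answers_consistent else "unreliable"

records = [
    QuestionRecord("What is MAFLD?", "concept", [True, True, True], True),
    QuestionRecord("Do I need a liver biopsy?", "diagnosis",
                   [True, False, True], False),
]

for rec in records:
    print(rec.category, appropriateness(rec), reliability(rec))
```

A real analysis would then aggregate these labels per category to report appropriateness and consistency rates across the six aspects.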