Integrating artificial intelligence in renal cell carcinoma: Evaluating ChatGPT’s performance in educating patients and trainees
Journal Article


Authors: Mershon, J. P.; Posid, T.; Salari, K.; Matulewicz, R. S.; Singer, E. A.; Dason, S.
Article Title: Integrating artificial intelligence in renal cell carcinoma: Evaluating ChatGPT’s performance in educating patients and trainees
Abstract:
Background: OpenAI’s ChatGPT is a large language model-based artificial intelligence (AI) chatbot that can answer unique, user-generated questions without direct training on specific content. Large language models have significant potential in urologic education. We reviewed the primary data surrounding the use of large language models in urology and report the findings of our primary study assessing the performance of ChatGPT in renal cell carcinoma (RCC) education.
Methods: For our primary study, we used three professional society guidelines addressing RCC to generate fifteen content questions, which were input into ChatGPT 3.5. ChatGPT’s responses, along with pre- and post-content assessment questions regarding ChatGPT, were then presented to evaluators: four urologic oncologists and four non-clinical staff members. Medline was reviewed for additional studies pertaining to the use of ChatGPT in urologic education.
Results: All assessors rated ChatGPT highly on the accuracy and usefulness of the information provided, with overall mean scores of 3.64 [±0.62 standard deviation (SD)] and 3.58 (±0.75) out of 5, respectively. Clinicians and non-clinicians did not differ in their scoring of responses (P=0.37). Completing the content assessment improved confidence in the accuracy of ChatGPT’s information (P=0.01) and increased agreement that it should be used for medical education (P=0.007); attitudes towards its use for patient education did not change (P=0.30). We also review the current state of the literature regarding ChatGPT use for patient and trainee education and discuss future steps towards optimization.
Conclusions: ChatGPT has significant potential utility in medical education if it can continue to provide accurate and useful information. We found it to be a useful adjunct to expert human guidance for medical trainee education and, to a lesser extent, for patient education. Further work is needed to validate ChatGPT before widespread adoption. © AME Publishing Company.
Keywords: cancer patient; patient education; practice guideline; renal cell carcinoma (RCC); medical education; artificial intelligence (AI); clinical evaluation; health education; medical student; oncologist; clinician; human; article; renal neoplasm; large language model; ChatGPT; artificial intelligence chatbot
Journal Title: Translational Cancer Research
Volume: 13
Issue: 11
ISSN: 2218-676X
Publisher: Pioneer Bioscience Publishing Company  
Date Published: 2024-11-30
Start Page: 6246
End Page: 6254
Language: English
DOI: 10.21037/tcr-23-2234
PROVIDER: scopus
PMCID: PMC11651803
PUBMED: 39697745
Notes: The MSK Cancer Center Support Grant (P30 CA008748) is acknowledged in the PDF. Source: Scopus