The Dangers of Talking to Artificial Strangers
ChatGPT may be the trendy new AI to talk to – but how willingly should clinicians be welcoming it into their practices?
Jed Boye | 4 min read | Discussion
“Can an AI language model replace your expertise and experience? Of course not! But ChatGPT can certainly complement your knowledge and provide valuable insights into diagnosis, treatment, and management of various eye conditions. Think of ChatGPT as your sidekick - like Batman's Robin or Harry Potter's Hermione. It can help you access the latest information and research, answer patient questions, and even tell you a bad joke to lighten the mood during a stressful day at work. So why not give it a try? Who knows, it might even become your new favorite coworker!”
- ChatGPT on the use of ChatGPT by ophthalmologists
ChatGPT has taken the world by storm. The AI chatbot has been the fastest adopted application of all time, taking only two months from its initial launch to reach over 100 million monthly active users – for comparison, it took Google almost a year and Facebook four years to achieve the same milestone. Given the tool's capabilities, it's not hard to see why; anybody with an account is granted access to the world's most powerful virtual assistant, co-author, and increasingly capable interlocutor. However, before it becomes completely ingrained into our lives – like Google and Facebook – it may be worth looking closer – past the hype – to consider if and how this technology should be embraced, especially within the medical field.
Perhaps the first thing to give one pause is ChatGPT's occasional and subtle fabrication of "facts," which, due to their often plausible nature and their usual concealment within a bed of correct information, are difficult for the average user to identify (1). For example, when asked to give the risk factors of myopia, ChatGPT told me, "Myopia is slightly more common in boys than in girls" – an answer at odds with the current literature (2, 3). This problem is particularly worrying when applied to medicine, but why does it happen? ChatGPT is designed to sound correct, not to actually be correct. The model has been trained on a large dataset of real human interactions, which it analyses to find linguistic patterns that it can then replicate in its output. It is essentially approximating human language and has "no source of truth" – something that its developers, OpenAI, readily admit (4).
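To make the point concrete, here is a deliberately crude sketch – a simple bigram model, vastly simpler than ChatGPT's architecture, with an invented toy corpus – showing the underlying principle: a language model picks its next word by how often it followed the previous one in training text, by frequency of usage rather than factual accuracy.

```python
import random
from collections import Counter, defaultdict

# Toy training text (invented for illustration). Note that one of the
# "facts" in it is dubious - the model neither knows nor cares.
corpus = (
    "myopia is common in children . myopia is common in adults . "
    "myopia is rare in infants ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# The model continues "is" with "common" - the statistically most
# plausible word, with no check on whether the resulting claim is true.
print(next_word("is"))
```

Real models operate over billions of parameters and far longer contexts, but the same property holds: plausibility is learned from text, and truth is only captured to the extent that the training text happened to be truthful.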
However, even though its creators admit the current fallibilities of what they have created, ChatGPT itself can be less ready to do the same, in some cases worryingly so (5). What might ring even more alarm bells is the political and discriminatory bias that ChatGPT has been shown to have. In one notable case, when asked to develop a function to check if someone would be a good scientist, the output given was disheartening (6).
So, should you use ChatGPT? That’s not for me to say. It can be a powerful tool with the right prompts and in the right hands. But it also has the potential to do more harm than good. Its quick adoption is a sign that it is likely here to stay, but perhaps we should all think before we let ChatGPT speak.
Do you use ChatGPT? Do you plan to incorporate it into your medical practice? If so, how? What are your thoughts on ChatGPT? Please let us know in the comments below, or by emailing us: [email protected].
- B Guo et al., “How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection,” arXiv:2301.07597 (2023).
- D Czepita et al., “Role of gender in the occurrence of refractive errors,” Ann Acad Med Stetin, 53, 5 (2007). PMID: 18557370.
- C Enthoven et al., “Gender predisposition to myopia shifts to girls in the young generation,” Invest Ophthalmol Vis Sci, 62, 2331 (2021).
- OpenAI, “Introducing ChatGPT” (2023). Available at: http://bit.ly/3UIDBrZ.
- MovingToTheSun, Twitter (2023). Available at: https://bit.ly/3KEDQzO
- spiantado, Twitter (2023). Available at: https://bit.ly/3ogQnC4