Abstract
Introduction: Digital health literacy has emerged in the context of technology and electronic sources. Artificial intelligence (AI) language-generation tools, such as ChatGPT (Chat Generative Pre-Trained Transformer), have shown promising potential in various fields. ChatGPT, developed by OpenAI, uses a large language model (LLM) to generate human-like text within seconds. This chatbot can respond to a wide range of questions; however, its answers may contain errors. This article reviews the pros and cons of using AI as a source for health literacy and patient education.
Methods: To identify studies relevant to the objectives of this review, we searched PubMed, Embase, and Google Scholar for studies published from 2020 onwards. Only English-language studies were included. The quality of the studies was not assessed, and no restriction on study design was applied.
Results: A health chatbot can provide users with information about disease risk factors, healthy lifestyle habits, and various other health topics in an easy, user-friendly manner, potentially reducing unnecessary hospital visits. However, LLMs are prone to a phenomenon called "hallucination," or stochastic parroting, in which the model generates convincing, linguistically fluent answers that are in fact entirely incorrect, supplying the user with misinformation or disinformation. Chatbots draw on extensive text data from various internet sources; they are therefore limited to the datasets they are trained on and unable to verify the reliability, depth, or accuracy of a source. These tools may answer basic questions correctly, but when it comes to disease diagnosis or treatment, greater caution is warranted.
Moreover, because LLMs are built on word associations, they could in theory identify or create multiple patterns or associations between unrelated data and thereby persuasively link many diseases to a given presentation. Ignoring the unique characteristics of each individual may yield recommendations that are poorly personalized to the specific context. Thus, chatbots may serve as a screening tool, but not as a diagnostic one; any diagnosis should be confirmed by a medical visit.
Health care providers may use this tool to generate health tips, but further advancements are needed to ensure the reliability of the information. It is vital that a field specialist review and revise the text generated by the chatbot to ensure its originality and validity. Moreover, relying on AI for medical advice raises ethical concerns. For instance, misinterpretations or inaccuracies might lead to false-negative or false-positive findings, so the responsibility and accountability of AI tools remain in question.
Conclusion: Chatbots remain a double-edged sword, with both advantages and disadvantages. AI holds considerable potential, but it should still be regarded as an adjunct tool in the field of people's health, not a fully independent modality. Both the community and health care providers should be aware of the risks of using this technology without consulting a health specialist.