
ChatGPT: Temptations of Progress

Rushabh H. Doshi S. Bajaj H. Krumholz

Abstract

ChatGPT is an artificial intelligence (AI) chatbot that processes and generates natural language text, offering human-like responses to a wide range of questions and prompts. Five days after its release, ChatGPT garnered one million users, and the program has been called world-changing, a tipping point for AI, and the beginning of a new technological revolution (Metz 2022). From helping physicians form differential diagnoses to answering patient questions, ChatGPT may have transformative implications across medicine. Nevertheless, the full scope of its promise and pitfalls remains unknown.

Given the attention experts are giving ChatGPT, we asked it (December 15 version) how it would impact medical research. We then asked how ChatGPT would impact medicine more broadly. The responses appear in Figures 1 and 2, respectively. What was particularly striking about the program's responses was their irrevocably progressive attitude. ChatGPT emphatically notes its own promise: analyzing big data, automating menial tasks, improving the accuracy and democratization of research, and ensuring faster clinical implementation of basic science. It gives no consideration to potential pitfalls.

In this age of rapid technological advances, innovation can be mistaken for progress if novel tools are not deployed with care. In a piece published by the National Academy of Engineering, Jasanoff (2020) discusses three key temptations of technocracy, or the dangers of relying on technology and science to solve sociopolitical problems. The first is the prevailing attitude that technology drives society while law and ethics hinder progress: innovation is seen as inherently good and virtuous, while potential adverse consequences are dismissed. The second is the temptation to do something simply because it can be done: creating the next paradigm-shifting technology becomes the sole objective, instead of rooting out bias or ensuring that innovation meets the needs of broader communities. The final temptation is to portray technological failures and societal harm as unintended consequences, or as products of misuse, thereby absolving designers of their products' harms.

AI tools can undeniably help improve medical research and practice, and to a certain extent they already have. But as ChatGPT's response underscores, the deployment of these tools must be accompanied by caution, reflection, and responsibility. Physicians and other scientists have already expressed concerns over some of ChatGPT's blind spots. The program offers almost instantaneous responses to complex questions, but its unequivocal confidence could be dangerous (Lin 2022). These responses can be more dangerous than the existing biases of search engines like Google, because users are not as readily given the opportunity to evaluate sources: while users of search engines can weigh multiple links against one another, ChatGPT often provides a single answer to a complex question, with no alternatives. Given that 89% of people in the United States google their symptoms before seeing a physician (Eligibility Team 2019), many patients may start consulting "Dr. ChatGPT" but be unable to distinguish useful medical information from potentially dangerous inaccuracies. Additionally, ChatGPT's accuracy is known to deteriorate on more complex topics, and its knowledge can be outdated, as the program is restricted to what it learned before 2021 (Lin 2022). For example, ChatGPT generated a convincing explanation of "how crushed porcelain added to breast milk can support the infant digestive system" (Birhane and Raji 2022). Medicine is a field with many rare disorders and complex pathophysiology, and the use of ChatGPT for patient education on these disorders could pose health risks.

Additionally, like many other AI tools, ChatGPT can demonstrate prejudice and bias in its answers, despite guardrails against inappropriate requests and responses. For instance, when one user asked it to write code determining whether someone would be a good scientist based on race and gender, ChatGPT defined a scientist's worth by their being white and male (Lin 2022). Similarly, when the same user asked whether a child's life should be saved based on race and gender, ChatGPT offered a function stating that all lives should be saved, except that of an African American male child (Lin 2022). These biases are concerning but not necessarily unexpected, given that AI tools can perpetuate the prejudices of the data on which they are trained. Historically, these biases have arisen because of small sample sizes and limited


Authors (3)

Rushabh H. Doshi
S. Bajaj
H. Krumholz

Citation Format

Doshi, R.H., Bajaj, S., Krumholz, H. (2023). ChatGPT: Temptations of Progress. https://doi.org/10.1080/15265161.2023.2180110

Journal Information

Publication Year: 2023
Language: en
Total Citations: 52
Source Database: Semantic Scholar
DOI: 10.1080/15265161.2023.2180110
Access: Open Access ✓