Even though the idea of AI has been around for a long time, its use did not become widespread until 2023, when everyone started using ChatGPT, Snapchat’s “My AI,” and tools like Google Assistant. The problem is, AI might not be as helpful as we think it is.
Using AI in healthcare sounds appealing; however, it can cause more problems than it solves, and many individuals are uncomfortable with the idea of having a robot handle their medical issues. AI might hold a lot of knowledge and learn things faster than humans, but it lacks qualities that humans have, such as trust, empathy, and honesty. Near the end of January, Robert F. Kennedy Jr. referenced the idea of AI working in healthcare. He explained how a health clinic “developed an AI nurse that you cannot distinguish from a human being” that diagnoses people just “as good as any doctor” (PBS News). Yet even though AI is helpful in many situations, working directly in the medical field may not be the place for it.
The problem with using artificial intelligence in healthcare is that doctors who have spent their lives studying and earning a degree in medicine will have spent all that time and money for nothing if we allow AI to take their jobs. In addition, AI differs from humans because it has no emotions or feelings. Doctors know that “patients are vulnerable beings whose life is just as precious as their own,” which leads them to “establish an empathetic connection with their patients” (National Library of Medicine). AI has no sense of empathy and, lacking feelings of its own, cannot tell when a patient is scared, stressed, or uncomfortable.
Artificial intelligence also creates problems in surgeries and appointments. Even when an AI-run procedure looks fine from the patient’s point of view, complications can still arise.
The doctors we have now are trained and have studied all the procedures a patient might need and the possible options they could have. AI, by contrast, has no such training in surgery, testing, or taking samples. Because AI has no medical experience, it can “misdiagnose and worsen healthcare disparities”; if AI gives the wrong diagnosis, the wrong medication can make a patient’s health far worse (ScienceDirect). In addition, because it is a machine and can only function as one, AI has no self-awareness. During a procedure, an issue could arise in the patient’s body that the AI is unable to sense. If the AI does not realize something else is going wrong, it can further damage skin, bones, blood vessels, nerves, and other tissue. According to the National Library of Medicine, an AI’s calculations and materials could be “misapplied to surrounding tissues, resulting in accidental burn injuries.”
Beyond its inability to sense other problems during an operation, AI also struggles to give patients a diagnosis. Eugene Kruglik explains that “one of the primary AI challenges is the lack of data,” meaning AI doesn’t have enough access to information and “needs a vast number of images and videos for training” (Vention). If AI has insufficient evidence to give a patient a diagnosis, whether for a common cold or a serious disease, how could we know if it ever tells us the right one? The main challenge is finding answers for rare diseases such as cancer, for which AI only has the same information we do, so “achieving accurate diagnoses remains a challenge…for less common diseases with limited data” (Brookings). The less we know about certain diseases and topics, the harder it is for AI to succeed at “diagnosing rare diseases,” because it is “hindered by the scarcity of data,” which leads to misdiagnosing the patient (Brookings).
Another concern with AI taking doctors’ positions is that not all patients would be comfortable with the idea. The main reason is that they don’t trust AI to reach a diagnosis as well as a human can, and patients respond better to empathy, which AI doesn’t have. Pew Research reports that “six-in-ten people in the U.S. say they would feel uncomfortable if their own health care provider relied on artificial intelligence” to do the tasks their doctors went to school for. AI would also end up being ineffective in hospitals “due to accountability/liability concerns” as well as “issues with the patient’s trust and acceptance” (Brookings).
What we need to recognize is that, while AI is helpful in many areas, there are places where it doesn’t belong until it has evolved and knows as much as we do. We should treat AI the way we treat medical students: they are still learning how to be doctors, and so is it.