‘AI is coming and there’s nothing stopping it’
Artificial Intelligence can be a useful tool in the healthcare field, as well as a way to reinvent and reinvigorate how healthcare professionals practice, according to research and the experts Ocean State Stories interviewed. The rapidly evolving field of AI has brought many new advances, such as robotic arms that assist with surgeries and systems that help doctors and nurses chart patient care.
When thinking of AI in the healthcare field, it helps to realize that AI has been in use for years. With the recent emergence and popularity of ChatGPT and similar systems, it is common to think of AI only as chatbots, but the healthcare field has already employed AI in many capacities, such as in CT scanners and MRI machines.
In August 2023, Carta Healthcare, which makes products that streamline administrative tasks for providers, conducted its first-ever poll regarding AI, surveying 1,027 U.S. adults. It found that three out of four patients don't trust AI in a healthcare setting, yet four out of five did not know whether their provider was using AI. About 40% of respondents admitted that their knowledge of AI was limited, and respondents were divided almost evenly when asked whether they would be comfortable with AI in a healthcare setting.
Perhaps transparency is key, though some experts assert that in some cases transparency could do more harm than good. Education regarding AI is critical; however, some patients may not wish to know. This could be for several reasons: not having faith in AI, not having faith in their provider to use AI appropriately, or simply being uncomfortable with AI because they do not understand it.
Dr. Gaurav Choudhary, a co-principal investigator for a clinical research study involving digital stethoscopes to detect and stratify pulmonary hypertension, believes that AI holds great promise for the future of healthcare. "AI is coming, and there's nothing stopping it," says Choudhary, Director of Cardiovascular Research at The Warren Alpert Medical School of Brown University and the Lifespan Cardiovascular Institute, and Associate Chief of Staff (Research) at the Providence VA Medical Center.
In his research, Choudhary is studying the use of AI-enabled digital stethoscopes to assist with early identification of pulmonary hypertension, which can be hard to diagnose because there is no easy measure to confirm the condition short of a clinical examination. The AI uses algorithms to discover and examine cardiac and pulmonary patterns, aiding early detection and treatment. Similar algorithms can also help healthcare professionals take and record notes on their patients: because an AI system can listen throughout a visit, it can summarize patient interactions clearly. The analytics it collects can help professionals discover meaningful patterns in risk and support their clinical conclusions.
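To make the idea of pattern-finding algorithms concrete, here is a minimal, hypothetical sketch of how software might classify heart-sound recordings. The filenames, labels, feature choice, and model are illustrative assumptions for this article, not details of Choudhary's study.

```python
# Illustrative sketch only; this is NOT Dr. Choudhary's actual system.
# Assumes a set of clinician-labeled heart-sound recordings (hypothetical filenames).
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def sound_features(path):
    """Summarize a recording as averaged MFCCs, a common audio feature."""
    audio, sr = librosa.load(path, sr=4000)  # heart sounds live at low frequencies
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # one fixed-length feature vector per recording

# Hypothetical training data: 1 = pulmonary hypertension suspected, 0 = normal.
paths = ["patient_001.wav", "patient_002.wav", "patient_003.wav"]
labels = [1, 0, 1]

X = np.array([sound_features(p) for p in paths])
model = RandomForestClassifier(n_estimators=100).fit(X, labels)

# A new recording gets a risk score that could flag a patient for follow-up.
risk = model.predict_proba([sound_features("new_patient.wav")])[0][1]
print(f"Estimated risk score: {risk:.2f}")
```

In practice, a real system of this kind would be trained on thousands of recordings and validated clinically; the point of the sketch is only that "finding cardiac patterns" means turning sound into numbers and learning which number patterns track with disease.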
Choudhary himself does not write code but encourages collaboration between healthcare providers and AI developers so each can learn about AI. He notes that AI only "learns" what we program it to learn. He believes part of the future of healthcare lies with personalized AI, meaning AI that knows an individual well enough to know which diseases they are at risk for and which medications they will be able to take effectively. This would lead to general wellness suggestions tailored to the individual. Some technologies, such as smartwatches, already do a version of this, but all diagnoses and treatments currently available are based on research, trials and observation of patterns. Personalized AI could individualize care in ways that clinical trials cannot.
There is still the question of responsibility. If an AI system produces an incorrect diagnosis, who is responsible? The doctor working with the AI system, or the developer? What safeguards are in place to protect patients?
Currently, there is essentially no case law regarding liability with medical AI. Should something go wrong and an AI system fail, the developer and/or the medical professional could be liable under a variety of tort principles. Holding a developer liable could be a slippery slope, however, considering how many people it takes to create and program an AI system.
In an interview, former Congressman Jim Langevin, Distinguished Chair of the Institute for Cybersecurity and Emerging Technologies at Rhode Island College, discussed some of these concerns.
Langevin said that AI learns through programming: a person must feed the AI system the code and data it learns from. Some AI systems are built for specific tasks, such as robotic arms for surgery, and as such are only "taught" what is appropriate for the job.
In terms of how AI works, Langevin says that AI uses large language models, which analyze data quickly and comprehensively, "like humans, only faster." He distinguishes two forms: "artificial intelligence," which looks backward and focuses on what has already been uploaded to its system, and "generative artificial intelligence," which is creative, forward-looking and predictive.
Data integrity, even from a non-medical perspective, is important as well. While an AI system may be able to distinguish a security issue from a non-issue, its algorithms can still be corrupted. AI systems can also "hallucinate," according to Langevin, creating their own data, which can contribute to the spread of false information. Why and how AI systems do this is not yet well understood. Corrupted or compromised systems also mean that healthcare entities and their third-party vendors will be particularly vulnerable to data breaches and ransomware attacks, which could prove to be a matter of national security.
AI systems can adapt and have the potential to improve cybersecurity. They can be adapted to counter malware and can even write "self-healing" code when they discover a problem within their own code. However, AI is "only as good as its algorithm," says Langevin, which also means there is a possibility of eliminating biases within a system.
Despite the potential negatives, AI systems could change the future of medical research. AI can pick up on data trends and patterns, organize data and assist in its interpretation, and connect dots that may not previously have been linked. This is true not only for patient care but also for medication research and development.
Improving patient safety is one area in which AI could help. According to the World Health Organization, about one in every 10 patients around the world is harmed, and more than 3 million deaths occur annually, due to unsafe care. About half of this harm is preventable and related to medication errors; the rest is related to unsafe surgical procedures, healthcare-associated infections, diagnostic errors, falls, pressure ulcers, patient misidentification and unsafe blood transfusions.
Furthermore, AI chatbots can now serve as preliminary diagnostic tools. A patient can access a chatbot, either associated with their physician or independent of them, and research their symptoms. By having the patient answer a series of questions, the chatbot can suggest which illnesses they are likely to have and whether they need immediate medical attention. It can also offer home-remedy suggestions.
For example, Buoy Health, an AI-based symptom checker developed by a team from Harvard Medical School, uses algorithms to help diagnose and triage illnesses. A patient interacts online with a chatbot, answering questions about their health history and current illness. The chatbot "listens" to the patient, asks follow-up questions and, using its algorithms, guides the patient to the appropriate form of care based on its assessment.
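To illustrate the question-and-answer flow such tools follow, here is a toy, rule-based triage sketch. It is a simplified illustration of how a symptom checker narrows options by branching on answers; it is not Buoy Health's actual logic, and the questions and recommendations are invented for this example.

```python
# Toy triage chatbot: each node is either a yes/no question with two
# branches, or a final recommendation string. NOT a real medical tool.
TRIAGE_TREE = {
    "start": ("Do you have chest pain?", "chest", "fever"),
    "chest": ("Is the pain severe or spreading to your arm or jaw?",
              "Seek emergency care immediately.",
              "Contact your doctor today."),
    "fever": ("Have you had a fever for more than three days?",
              "Schedule an appointment with your provider.",
              "Rest and fluids; monitor symptoms at home."),
}

def triage():
    node = "start"
    while node in TRIAGE_TREE:
        question, yes_branch, no_branch = TRIAGE_TREE[node]
        answer = input(question + " (yes/no) ").strip().lower()
        node = yes_branch if answer.startswith("y") else no_branch
    print("Recommendation:", node)  # node is now a recommendation string

if __name__ == "__main__":
    triage()
```

Real symptom checkers replace this hand-written tree with statistical models trained on clinical data, but the patient-facing experience is the same: answer questions, get routed to an appropriate level of care.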
The United States is not the only country where the potential impact of AI is being studied. Others, including China, Russia, Iran and North Korea, seen by many as adversaries, are also working to develop their own AI systems. This is another two-sided coin, as AI systems could be used with both positive and negative intentions when it comes to international relations.
State, federal and international laws will need to be developed to protect not only national security but also individual privacy, as "force of law is always best," according to Langevin. Laws will be needed to govern the appropriate use of AI and to define what counts as unethical practice. Doctor-patient confidentiality laws will need to be rewritten to account for AI.
Tomas Gregorio, the Chief Innovation Officer at Care New England, says: “There are quite a few concerns with the usage of AI in healthcare including ensuring compliance with HIPAA regulations, safeguarding patient data security, and addressing ethical considerations surrounding AI decision-making in patient care.”
Gregorio goes on to say that there are other drawbacks in recent developments that “may include potential errors in algorithms leading to misdiagnosis, concerns about patient privacy and data security, and the need for ongoing training and education for staff to effectively utilize AI technology.”
Gregorio adds that “there are several barriers to using AI in healthcare. Some of the key barriers include data quality and interoperability issues, privacy and security concerns, lack of a regulatory framework, limited transparency and interpretability of AI algorithms, resistance to change and trust issues, and cost and resource constraints. These barriers need to be addressed to ensure the successful integration of AI in healthcare and maximize its benefits for patients and healthcare providers.”
Speaking for Care New England, Gregorio acknowledges that the health system is moving slowly with AI technologies, leaning in when CNE professionals see an opportunity to benefit the system. Care New England as a whole is just beginning its digital transformation, and most of its energy is being dedicated to managing organizational change, according to Gregorio. This has caused some hesitancy to fully embrace AI technologies, he says.
While it is unclear where AI technologies will take healthcare in the future, it seems certain that lives will change because of them. Politicians may change their campaign strategies to market themselves as resistant to AI advances instead of focusing strictly on health insurance, current elected officials may lock horns over regulatory legislation, and people may have the power of a doctor literally in their hands through a smartphone application. Perhaps the lifestyle of the cartoon family The Jetsons is not unattainable, and flying cars and tubular travel will also be in our immediate future.
Mel (Rising Dawn) Cordeiro is a Certified Health Education Specialist. Health education and health-related research are her passions; she enjoys teaching others and learning about health topics. She also has clinical skills in medication administration and experience as a nurse aide/home health aide and a pharmacy technician. She is enrolled in Rhode Island College's nursing program. Editor-in-chief of the Anchor Newspaper at Rhode Island College, Cordeiro is a writer, poet and Reiki practitioner, and is Native American.
(For more stories visit oceanstatestories.org.)