09.08.2024 Ethics

Bioethics and AI: what role do clinicians and patients play?

Researchers identify problems with the use of programs such as ChatGPT in the clinician-patient relationship and suggest how to make better use of AI tools

A group of researchers set out to articulate bioethical principles for the use of large language models | Image: Shutterstock

The Three Laws of Robotics created by Russian-American writer Isaac Asimov (1920–1992), which underpin the ideas portrayed in “I, Robot” (1950) and other works by the science fiction author, have been invoked frequently in recent years with the rise of artificial intelligence (AI). Even before robots were a reality, ethical parameters were a concern for those envisioning a future in which machines would assist humans in previously unfathomable tasks.

In the modern context, where even medical recommendations and treatments are assisted by AI, ethics is an even more pressing concern.

In a review article published in NEJM AI, a journal associated with The New England Journal of Medicine, a group of researchers sets out to unite the ethical principles for the use of large language models (LLMs), already discussed in other fields, with bioethics.

Large language models analyze vast quantities of text data and identify a multitude of patterns in how human beings connect words and symbols. From these patterns, the models learn to produce new texts.

The authors’ objective is to develop a framework for the proper use of these models within the clinician-patient relationship, since LLMs already underpin applications such as ChatGPT, and their use in tools aimed at clinicians and patients continues to grow.

“The broad-ranging applications of LLMs will continue to increase,” write the study’s authors.

“Applying and upholding bioethical principles in each of these interaction scenarios can help to enhance trust in the patient–clinician partnership.”

Although traditional medical ethics has focused on the clinicians’ responsibilities, the authors propose a shift towards a more balanced distribution of responsibilities, “emphasizing that patients themselves should be held accountable for fulfilling their role in the medical relationship.”

For example, patients should disclose and discuss their use of LLMs with the clinician.

According to the authors, all three entities involved in the relationship—patient, clinician, and the systems that govern LLMs—must collectively uphold the four principles of bioethics: beneficence, nonmaleficence, respect for autonomy, and justice.

Beneficence and nonmaleficence

With regard to the first principle, beneficence, the two concerns raised by the researchers are accountability and reliability. The responsible use of AI depends on the intention to build beneficial AI, they argue.

Additionally, the right tools must be used to expand the range of clinical tasks the machine can take on, in order to promote efficiency.

Finally, the target should be uses that benefit patients.

Related to the second principle, nonmaleficence, are the so-called “hallucinations,” in which models reach incorrect conclusions or even mislead the patient. These errors can lead patients to deny their need for medical care or to disseminate false information.

“Responsible design of LLM-powered applications for patient use will need to include a clear indication that responses are AI- or LLM-based, with explanations of the caveats and boundaries of use,” they explain.

Autonomy and justice

In order to comply with the principle of autonomy, it is necessary to ensure data privacy and informed patient consent regarding the collection of information.

To do this, it is crucial to apply data anonymization techniques to the information processed by the algorithms.
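As a purely illustrative sketch, not drawn from the article, anonymization of this kind might begin with something as simple as masking obvious identifiers in free text before it reaches an LLM-based tool; real de-identification pipelines cover far more identifier classes than this:

```python
import re

# Hypothetical, minimal redaction pass: mask a few obvious identifiers in a
# clinical note before it is sent to any LLM-backed service. Production
# de-identification covers many more identifier classes than this sketch.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def anonymize(text: str) -> str:
    """Replace every matched identifier with a placeholder tag."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

print(anonymize("Seen 03/12/2024; contact jane.doe@example.com or 555-123-4567."))
# -> Seen [DATE]; contact [EMAIL] or [PHONE].
```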

Patients should also be encouraged to take part in shared decision-making models, in which humans evaluate the results produced by the machine in order to retrain it.

Finally, on the principle of justice, the concerns center on transparency and bias. The first step is recognizing that the datasets and processes used to train the models are true black boxes, and that developers are often reluctant to share them.

According to the article, we must ensure that the models are developed with a universal perspective and equivalent performance across different user groups, including those historically neglected in studies and even in medical care.

The researchers conclude that the rapid advance of LLMs in medicine offers promising perspectives for improved healthcare, although implementation is complicated by ethical challenges.

Although they propose shared responsibility among clinicians, patients, and those involved in developing the tools, the authors recognize that the framework “does not completely compensate for the lack of established approaches to verify the value propositions and risks” for these applications.

Following Asimov’s example, we must anticipate the dangers in order to avoid them.

* This article may be republished online under the CC-BY-NC-ND Creative Commons license.
The text must not be edited and the author(s) and source (Science Arena) must be credited.
