Whether used as a chatbot, for text generation, for translation, or for automated paperwork, ChatGPT has been creating quite a stir since the US start-up OpenAI made the text-based dialogue system accessible to the public in November 2022.
Data scientist Teresa Kubacka pointed out as early as mid-December, however, that ChatGPT sometimes fabricated references and that such “data hallucinations” are dangerous because they can have a considerable effect on internet discourse. Since then, discussions of the ethical consequences and challenges of ChatGPT have continued.
Thilo Hagendorff, PhD, postdoctoral researcher in machine learning at the University of Tübingen, Germany, made it clear at a Science Media Center press briefing that ChatGPT brings an array of ethical challenges in its wake. Nonetheless, Hagendorff considers the “heavy focus on the negative aspects” to be “difficult.” In his experience, reports about ChatGPT are often negative.
“But the positive aspects should also be considered. In many cases, these language models enable us to make better decisions throughout life. They are also definitely creative, they offer us creative solutions, they enrich us with infinite knowledge,” said Hagendorff.
ChatGPT’s Applications
The possible range of applications for ChatGPT is vast. In medicine, for example, ChatGPT has answered questions for patients, made diagnoses, and created treatment regimens, saving time and resources. Used as a chatbot, ChatGPT was able to cater to patient needs and provide personalized responses. Round-the-clock availability could also benefit patients. However, the protection of sensitive patient data must be guaranteed.
And since ChatGPT is based on a machine learning model that can make mistakes, the accuracy and reliability of the information it provides must be guaranteed. Regulation is another problem. While programs such as the symptom checker Ada Health are specialized for the healthcare sector and have been approved as medical devices, ChatGPT is subject to no such regulation. Guarantees must nonetheless be in place to ensure that ChatGPT meets the relevant requirements and complies with laws and regulations.
Beyond the medical sector, other ethical problems must be solved. Hagendorff specified the following points:
- Discrimination, toxic language, and stereotypes: These occur with ChatGPT because the language model’s training data (ie, language as it is actually used) also reproduce discrimination, toxic language, and stereotypes.
- Information risks: ChatGPT may be used, for example, to find private, sensitive, or dangerous information. The language model could be asked about the best ways to commit certain crimes.
- True or false information: There is no guarantee that such models generate only correct information. They can also deliver nonsensical information, since they only ever calculate the probability of the next word.
- Misuse risks: ChatGPT could be used for disinformation campaigns. It can also generate code usable for cyberattacks of varying degrees of danger.
- Propensity for humanization: People tend to humanize such language models. This anthropomorphization can lead to an elevated level of trust, which can then be exploited to gain access to private information.
- Social risks: There could be job losses or job changes in every industry that works with texts.
- Ecologic risks: Unless the power used to train and operate such models is generated in an environmentally friendly manner, the carbon-dioxide footprint could be quite large.
Co-authoring Scientific Articles?
The news that some scientists have listed ChatGPT as an author of scientific articles elicited an immediate reaction in the research community and among editors of specialist journals. However, ChatGPT is not the first artificial intelligence application to be listed as a co-author of a scientific paper. Last year, an article written by, among others, the GPT-3 chatbot appeared on a preprint server, said Daniela Ovadia, PhD, research ethicist at the University of Pavia, Italy, in an article published on Univadis Italy.
In her opinion, the commotion about a largely expected technological development may stem from how the author of a scientific paper is defined. The general rule is that every author of a scientific paper is responsible for every part of it and must check the other authors’ work.
After Nature announced that AI would not be accepted as an author, other specialist journals, such as JAMA, followed suit.
Still, this does not mean that the tool may not be used. Its use must, however, be disclosed in the methods section of the study.
“The scholarly publishing community has quickly reported concerns about potential misuse of these language models in scientific publication,” wrote the authors of a JAMA editorial.
As Annette Flanagin, RN, executive managing editor of JAMA, and her team of editors reported, various experts have experimented with ChatGPT by asking it questions such as whether childhood vaccinations cause autism. “Their results showed that ChatGPT’s text responses to questions, while mostly well written, are formulaic; not up to date; false or fabricated; without accurate or complete references; and, worse, with concocted, nonexistent evidence for claims or statements it makes.”
Experts believe that ChatGPT does not constitute a reliable source of information, at least in the medical field. To become one, it must be carefully monitored and reviewed by humans, Ovadia wrote. But there are other ethical questions that the scientific community needs to consider, especially since the tool will gradually improve over time.
An improved ChatGPT could, for example, bridge the linguistic gap between English-speaking and non-English-speaking scientists and simplify the publication of research articles written in other languages, Ovadia wrote. She takes a nuanced view of using AI to write scientific articles and makes the following recommendations:
- Sections written with AI should be highlighted as such, and the methodology used in their creation should be explained in the article itself (for the sake of transparency, including the name and version of the software used).
- Papers written exclusively with AI, especially if they concern systematic literature reviews, should not be submitted. This is partly because the technologies are still not fully developed and tend to perpetuate the statistical and selection biases contained in their users’ instructions. One exception is studies that aim specifically to evaluate the reliability of such technologies (an objective that must also be explicitly stated in the paper itself).
- Creating images with AI and using them in scientific papers is discouraged. This would violate the ethical standards of scientific publishing, unless the images are themselves the subject of the investigation.
Future Development
Stefan Heinemann, PhD, professor of business ethics at the FOM University of Applied Sciences in Essen, Germany, sees the development of ChatGPT as fundamental because it is evolving into an artificial person. “We should consider how far such development should go,” Heinemann emphasized at the ChatGPT in Healthcare event.
“This does not mean that we should immediately dismiss new technologies as dystopian and worry that the subject is now lost because people no longer write proseminar papers themselves. Instead, we should find a balance,” said Heinemann. We have now reached a turning point. The use of technical systems to simplify and reduce office work, or the use of nursing robots, is undoubtedly sensible.
But these technologies will not only eventually externalize bureaucracy, “but we will also begin to externalize thought itself.” There is a subtle difference, Heinemann emphasized. The limit will be reached “where we arrive at a place in which we start to externalize that which defines us. That is the potential to feel and the potential to think.”
The priority now is to handle these new technologies appropriately: “Not by banning something like it but by integrating it cleverly,” said Heinemann. This does not mean that it cannot be virtualized “or that we cannot have avatars. These are discussions that we still understand as video-game ethics, but these discussions must now be held again,” said Heinemann.
He advocated talking more about responsibility and compatibility and not forgetting one’s social skills. “It is not just dependent on technology and precision. It also depends on achieving social precision and solving problems with each other, and we should not delegate this. We should only delegate the things that burden us, that are unnecessary, but not give up thinking for the sake of convenience.” Technology must always “remain a servant of the cause,” concluded Heinemann.
From www.medscape.com