
So-called generative AI—algorithms that can be used to create content using machine learning—has been much in the news of late. In particular, the tool ChatGPT, created by OpenAI, has attracted considerable attention.

In Australia, this attention has no doubt been fueled by the comments of Microsoft co-founder Bill Gates, who recently visited our country and is keen to promote the benefits of the technology.

Most of the debate has been about ChatGPT and similar AI tools—such as DALL·E 2, used for creating supposedly realistic images and art, and Whisper, which offers automatic speech recognition and translation—and their potential to facilitate student cheating or to replace workers.

They are also seen as having the potential to free up people's time to enable them to undertake other tasks.

In one article, an entrepreneur, who founded a company specializing in tech start-ups, said, "ChatGPT feels like the introduction of the PC—a tool that allows us to work smarter and enhances the ability of humans to do what they do best, which is create, dream and innovate."

However, thus far, little has been said about the potential use and misuse of these tools in medicine and healthcare, including for diagnoses (and self-diagnoses) and prescriptions.

If history is any guide, these are the areas in which these technologies are likely to find early widespread application. They are also areas where there's much potential for exploitation and misuse, especially given the ready accessibility of the tools online.

In a recent news article, it was reported that Bill Gates saw "obvious benefits [of ChatGPT] in the medical field, and across other industries where a lot of information needed to be understood."

According to the article, "AI could help a doctor write prescriptions and explain them to patients, for example, or also assist in both writing and understanding legal documents."

As a health sociologist interested in new and emerging health technologies, I was struck by these comments. They are typical of the promissory discourse surrounding new health technologies. They are also deeply worrying.

On the face of it, there's much that is appealing about the near-instantaneous production of information using what is in effect a sophisticated chatbot.

Chatbots have been widely used for some time and, although sometimes useful, their limitations are well understood. As a form of AI, they rely on information harvested from many online sources—some of questionable reliability—that also carries biases, including those based on differences of gender, class, race/ethnicity, and age.

Much of the information available online is "personalized," algorithm-driven advertising designed to engage users. These personalized messages are crafted to be "emotionally resonant."

In my recently published book, I explored the mechanisms by which emotions are exploited online; for example, through deceptive designs or "dark patterns," which trick users and make them feel and act in certain ways, generally with the aim of encouraging them to stay online and to purchase advertised goods and services.

Emotions are exploited as never before through affective computing, a field founded by Rosalind Picard that is oriented toward making machines more "human-like" and "conversational." Picard co-founded Affectiva, an MIT Media Lab spinoff that claims to be on "a mission to humanize technology," an approach with potentially vast applications in advertising. This is where generative AI, like ChatGPT, is of great concern.

OpenAI has produced ChatGPT with the claim that this language model "interacts in a conversational way," and that "the dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."

This claim, which suggests that AI can think, feel and respond like a human, is significant and has evidently attracted much interest, with reports of about one million users within a week of the tool's release.

The prospect of a thinking, feeling, sentient AI is far-fetched, but well-entrenched in science-fiction and the popular imagination.

What greatly concerns me is people using ChatGPT, and similar tools, for routine medical procedures, including self-diagnoses.

OpenAI is supported by some powerful individuals, including Gates (Microsoft is reported to already be an investor), Elon Musk (current owner of Twitter), Peter Thiel (co-founder of PayPal), and other big tech entrepreneurs.

They are hardly disinterested players when they talk up the benefits of generative AI. They, and other billionaire entrepreneurs, will no doubt be looking at the huge profits that can be made from generative AI in the fields of health and medicine, and other areas.

Self-diagnosis is already part and parcel of people's engagement with digital media. Many people go online soon after the onset of illness to learn more about their conditions and to search for information and treatments, as colleagues and I have shown in our research.

These online searches are underpinned by high hopes and the use of heuristic shortcuts to simplify complex decisions regarding the credibility of information. Generative AI will no doubt gain widespread use among those looking for quick answers regarding the treatment and management of often-complex conditions.

There needs to be much more debate about the dangers posed by innovations such as generative AI, which promise much, but also carry huge risks.

Provided by Monash University