OpenAI Launches ChatGPT Health for Medical Consultations

Medical Professionals Endorse Launch of ChatGPT Health

Experts endorse ChatGPT Health for consultations, despite AI risks.

Medical experts have welcomed the release of ChatGPT Health for health consultations, despite the risk of AI hallucinations, TechCrunch reports.

Sina Bari, a practicing surgeon and head of AI at iMerit, shared an instance where his patient consulted ChatGPT:

“Recently, he came to me after I recommended a medication and showed a printed dialogue with the chatbot. It stated that the drug had a 45% chance of causing pulmonary embolism.”

Dr. Bari checked the sources and found that the statistics were taken from an article about the drug’s effects on a niche subgroup of people with tuberculosis. These data were not applicable to his clinical case.

Despite the inaccuracy, Dr. Bari welcomed the launch of ChatGPT Health. In his view, the service lets users discuss health issues in a more private setting.

“I think it’s great. It’s something that’s already happening, so formalizing the process to protect patient information and introducing some safety measures will make the process more efficient for patients,” said Dr. Bari.

Users can receive more personalized recommendations from ChatGPT Health by uploading medical records and syncing the app with Apple Health and MyFitnessPal. This level of access to personal information has raised concerns in the community.

“Suddenly, medical data is being transferred from organizations that comply with HIPAA to providers that do not. It will be interesting to see how regulators react,” noted MIND co-founder Itai Schwartz.

Over 230 million people discuss their health issues with ChatGPT weekly. Many have stopped “Googling” symptoms, choosing the chatbot as their source of information.

“This is one of the largest applications of ChatGPT. So it makes sense that they would want to create a more private, secure, and optimized version of the chatbot for healthcare questions,” Schwartz noted.

The Hallucination Problem

The main issue with chatbots remains “hallucinations,” which is particularly critical in healthcare. A Vectara study showed that OpenAI’s GPT-5 “hallucinates” more frequently than competitors from Google and Anthropic.

However, Nigam Shah, a professor of medicine at Stanford University, considers these concerns secondary. In his view, the real problem is the difficulty of getting access to a doctor at all, not the risk of receiving incorrect advice from ChatGPT.

“Currently, if you approach any healthcare system and want to see a primary care physician, you’ll have to wait three to six months. If you have a choice: wait six months to see a real specialist or immediately talk to someone who can do something for you, which would you choose?” he noted.

Administrative tasks can take up about half of a doctor’s time, significantly reducing the number of appointments. Automating these processes would allow specialists to devote more attention to patients.

Dr. Shah leads a team at Stanford developing ChatEHR, software that helps doctors navigate patients' electronic medical records more efficiently.

“By making it more user-friendly, doctors will be able to spend less time searching for the necessary information,” said Dr. Sneha Jain, one of the first testers of ChatEHR.

In January, Anthropic announced the release of Claude for Healthcare, a set of tools for healthcare providers and patients.
