Kenya, January 8, 2026 - OpenAI’s launch of ChatGPT Health feels less like a surprise and more like an inevitability. When over 230 million people are already asking ChatGPT health-related questions every week, carving out a dedicated space for those conversations almost sounds responsible. Almost.
On paper, ChatGPT Health is framed as a privacy-conscious upgrade: a separate tab, isolated chat history, and promises that your medical conversations won’t leak into your everyday prompts. Ask a health question in the regular ChatGPT, and you’ll be nudged toward the Health tab “for extra protection.” That sounds reassuring until you look a little closer.
The most eyebrow-raising feature is the option to connect your medical records. Through a partnership with b.well, ChatGPT Health can pull data from millions of healthcare providers, alongside information from wellness apps like Apple Health, MyFitnessPal, Peloton, and even Instacart. The idea is seductive: an AI that understands your lab results, sleep patterns, diet, and fitness routines well enough to give tailored insights and help you prepare for doctor visits.
But this raises a fundamental question: just because something can be personalized, does that mean it should be? OpenAI insists it spent two years developing the product, gathering feedback from hundreds of physicians worldwide. Yet the product isn’t fully live, had reported issues at launch, and notably won’t be available in regions with stricter digital privacy laws like the EU, Switzerland, and the UK. That omission alone should give users pause. If privacy protections are strong enough, why exclude the places that enforce the toughest standards?
The company is clearly aware of public anxiety. It emphasizes encryption, isolation from main ChatGPT chats, and the fact that health conversations won’t be used to train foundation models by default. Still, there’s no end-to-end encryption, and OpenAI can hand over data under court order or emergency circumstances. More importantly, HIPAA doesn’t apply, because ChatGPT Health is a consumer product and not a clinical one.
That detail matters. A lot.
Despite disclaimers stating the tool isn’t meant for diagnosis or treatment, it’s unrealistic to believe people won’t use it that way. In underserved and rural communities, ChatGPT already handles hundreds of thousands of healthcare-related messages each week, often outside normal clinic hours. When access to doctors is limited, AI doesn’t just become a supplement. It becomes a substitute.
And that’s where the real risk lies.
Large language models don’t “understand” health in the way clinicians do. They predict likely responses based on patterns, not medical certainty. While OpenAI says it redirects people to professionals during distress, the influence of an always-available, confident-sounding AI shouldn’t be underestimated, especially in mental and physical health decisions.
OpenAI’s broader push into healthcare makes its ambitions clear. From benchmarks like HealthBench to partnerships and policy proposals suggesting broader access to global medical data, the company is positioning itself as a major player in how people seek and process health information.
Whether that future is empowering or alarming depends on how much trust you’re willing to place in a probabilistic text system and how comfortable you are handing it some of the most sensitive data you have.
ChatGPT Health may well help people understand their bodies better. But it also blurs the line between information, advice, and authority. Until those lines are clearer and privacy protections stronger, this is one health “upgrade” that deserves skepticism, not blind adoption.
The opinions expressed in this article are those of the author and do not necessarily reflect the views of Dawan Africa.
