
A new warning sign in South Korea’s digital health boom
South Korea has spent years building a reputation as one of the world’s most wired societies — a place where same-day delivery, ultrafast internet and mobile-first services are part of everyday life. It is also a country where people are accustomed to using apps for banking, transportation, shopping and increasingly for managing their health. Now, new survey data suggest that many South Koreans are beginning to see generative artificial intelligence not merely as a convenient search tool, but as something that could stand in for a doctor.
That shift in perception is the most striking takeaway from a new survey released April 15 by the Korea Press Foundation, a public institution that studies media use and information trends. According to the survey, 58.3% of adults in their 20s through 60s said generative AI could replace in-person consultation and treatment with a physician or a practitioner of Korean medicine to some extent. That total combines the 53.9% of all respondents who said AI could serve as a substitute to a certain degree and the 4.4% who said it could do so to a significant degree.
On its face, that number may sound like a vote of confidence in new technology. But a closer reading points to something more unsettled and more consequential: not just enthusiasm for AI, but a fraying information environment in which many people no longer feel confident about what health advice to trust.
The same survey found that 85.8% of respondents said they had encountered health or medical information they believed was inaccurate or exaggerated. Another 76.8% said they had experienced confusion because of contradictory information. Taken together, those numbers describe a public that has more information than ever before but less certainty about which claims are reliable. In that context, AI may be filling a void created not only by technological innovation, but by a broader crisis of trust.
That matters far beyond South Korea. In the United States, patients also toggle between Google searches, TikTok videos, Reddit threads, wellness influencers, telehealth apps and now conversational AI chatbots before they ever sit down with a clinician. Americans have already lived through major waves of health misinformation, from false COVID-19 cures to viral anti-vaccine claims to algorithm-driven fad diets marketed as medical wisdom. What is emerging in South Korea should feel familiar to U.S. readers: when institutions are hard to navigate and information is abundant but inconsistent, people often gravitate toward tools that feel immediate, personalized and nonjudgmental — even when those tools may be wrong.
Why patients ask AI before they ask a doctor
There are practical reasons people may turn to AI first. In South Korea, as in many countries, patients often face time pressures, crowded clinics and the challenge of translating vague symptoms into the language of medicine. A chatbot can seem easier. It is available 24 hours a day, responds instantly and does not make users feel rushed for asking follow-up questions. It can rephrase complicated terms, summarize long passages and organize symptoms into what appears to be a coherent explanation.
That can be especially appealing during the stages before or between doctor visits: when someone is deciding whether a symptom is serious, trying to understand which specialist to see, or making sense of test results they do not fully understand. In those moments, AI can function less as a literal replacement for a physician and more as a digital first stop — something like an endlessly available triage desk, translator and explainer rolled into one.
It is not hard to understand the attraction. Many people, in Korea and elsewhere, find the medical system intimidating. Patients may worry they will sound foolish, misunderstand terminology or forget to ask the right questions. Some symptoms — especially those related to mental health, sexual health, digestive issues or gynecological concerns — can be embarrassing to describe face to face. AI, by contrast, can feel private and frictionless. It does not appear irritated. It does not interrupt. It does not make users feel judged.
That emotional ease is part of the story. Technology adoption is often described in terms of convenience, but in health care, emotional dynamics matter just as much. If a patient feels unheard in a clinic, an AI tool that responds in polished, conversational language may come across as more attentive than a rushed doctor. That does not mean the AI is more accurate. It means the experience feels more satisfying.
And that is where the risk begins. In medicine, a clear and reassuring explanation is not the same thing as a sound clinical judgment. The fact that a chatbot can generate smooth, organized answers does not mean it can evaluate danger signs, account for hidden variables or assume responsibility when a situation turns urgent.
What the 58.3% figure does — and does not — mean
The 58.3% figure is significant, but it should not be overstated. Saying that AI could replace doctors "to some extent" is not the same as saying most people want to abandon medical care or believe doctors are unnecessary. Survey responses about technological possibility often capture mood and expectation more than concrete behavior. A person might tell pollsters that AI could substitute for consultation in some situations and still go to a hospital the moment chest pain, severe bleeding or a child’s high fever enters the picture.
Even so, the result reveals two major shifts. First, the center of gravity for health information appears to be moving. People used to start with search engines, online forums or media articles. Increasingly, they can begin with a chatbot that synthesizes those sources into an answer tailored to their question. That changes not only how information is found, but how it is interpreted. Instead of sorting through links, users receive what feels like a direct response to their own circumstances.
Second, what happens before the clinic visit is becoming more important. By the time patients see a physician, they may already have formed a narrative about their symptoms, narrowed their suspicions to a specific condition or become attached to a preferred treatment option. In other words, AI is not just giving information; it may be shaping the starting point of the doctor-patient conversation.
Researchers involved in the survey warned that the spread of generative AI is changing the way people use health information at a fundamental level. It increases accessibility and convenience, they noted, while simultaneously raising concerns about accuracy, trustworthiness, protection of sensitive personal information and the risks that come with treating AI as a stand-in for professional care.
That framing is important because it avoids a simplistic either-or argument. The issue is not whether AI should be banned from health information. It is already here, and for many users it is genuinely useful. The more urgent question is where its safe boundaries lie — and whether the public can reliably tell when those boundaries have been crossed.
Too much information, too little confidence
The most revealing numbers in the survey may be the ones about misinformation and confusion. When 85.8% of respondents say they have encountered inaccurate or exaggerated health information, and 76.8% say they have been confused by contradictory claims, the problem is no longer a few bad actors posting dubious cures online. It is structural. The public is navigating an information marketplace saturated with advice, warnings, testimonials, product marketing and pseudo-expertise.
This is not unique to South Korea. American readers have seen the same dynamics play out across Instagram wellness culture, YouTube “doctor reacts” videos, podcasts that blur self-help with medical claims and viral posts that convert personal anecdotes into universal prescriptions. One day coffee is portrayed as life-extending, the next as a health risk. One influencer touts a supplement as essential; another calls it useless. Even well-meaning news coverage can sometimes reduce nuanced medical findings into clickable and contradictory headlines.
In that environment, AI may feel like a solution because it seems to cut through the clutter. Instead of forcing users to compare ten websites or argue with strangers in a forum, it delivers a neat synthesis in plain language. But that neatness can be deceptive. Chatbots do not merely retrieve verified facts; they generate plausible-sounding responses based on patterns in data. If the underlying information is mixed, incomplete or misleading, the answer can still sound confident and coherent.
That is a particularly dangerous feature in health communication. In many areas of life, being slightly wrong may be inconvenient. In medicine, it can be harmful. A reassuring but mistaken answer can delay care. An alarmist but unfounded one can trigger panic, unnecessary tests or inappropriate self-treatment. Because medical knowledge depends on context — age, medications, pregnancy status, family history, symptom progression, chronic conditions and more — general explanations can easily mislead when applied to an individual case.
There is also a psychological trap. When people are anxious about their health, they are not simply looking for facts; they are looking for certainty. AI is very good at producing the feeling of certainty. It can take a messy concern and restate it in orderly prose, which may leave users feeling understood. But the sensation of clarity is not the same thing as clinical safety.
What gets lost when medicine becomes a text exchange
Medicine is not only an information service. It is also a process of judgment, examination and responsibility. That distinction can sound abstract until one considers how many diagnoses depend on details that never appear in a typed question.
A patient might describe abdominal pain as mild indigestion when a physician, seeing posture, facial expression, skin tone and tenderness on examination, recognizes signs of appendicitis. A cough might sound routine in text but, in a clinic, be accompanied by labored breathing or low oxygen levels. Dizziness could mean stress, dehydration, medication interactions or something far more serious. Even a careful patient cannot always know which details are important enough to mention, and even a sophisticated AI cannot physically examine a body, order immediate testing on its own authority or assume legal and ethical accountability for a bad call.
This is one reason the doctor-patient relationship cannot be reduced to question and answer. A good clinician does more than recite medical information. A clinician listens for what is omitted, asks follow-up questions shaped by experience, evaluates risk and makes decisions under uncertainty. Just as important, the clinician is answerable for those decisions.
Generative AI has no equivalent form of accountability. It can produce a recommendation, but it does not bear the consequences if a user misreads the advice or if the advice itself is flawed. It cannot monitor a patient for deterioration. It cannot intervene when a condition worsens. And while companies that build AI systems often include disclaimers urging users to seek professional care, disclaimers do not solve the practical problem that many people may rely on these tools precisely when they are trying to decide whether professional care is necessary.
There is a cultural dimension here as well. In South Korea, the survey references both physicians and practitioners of Korean medicine, a dual system that may be unfamiliar to some American readers. Korean medicine, which includes practices such as acupuncture and herbal treatments, operates within a licensed institutional framework in South Korea. Mentioning both types of practitioners underscores that the trust question extends across the health system, not just to one category of clinician. The survey is measuring a broad shift in how people think about expertise itself.
The trust gap behind the technology
If the rise of AI in health care were driven only by novelty, the story would be less troubling. Technologies come and go. What makes this moment more serious is the possibility that AI is gaining ground because many people feel underserved by existing channels of communication.
The survey data suggest that distrust in the information environment is widespread. But distrust rarely develops in a vacuum. It often grows when people feel they cannot get clear answers, when official guidance changes without explanation, when headlines exaggerate findings, or when medical communication feels too technical and too rushed. In other words, a public that leans on AI may not simply be rejecting doctors. It may be responding to a system in which trustworthy information feels difficult to access in human terms.
That point resonates beyond Korea. In the United States, one reason patients increasingly turn to online sources is that the health system can feel forbiddingly complex. Insurance barriers, short appointments, specialist referrals, opaque billing and fragmented records all make it harder for patients to feel anchored. South Korea’s system is different in important ways, including broader national insurance coverage and different patterns of care access, but the emotional logic is comparable. When people feel pressed for time and uncertain whom to trust, the source that feels most available often wins.
That does not mean all responsibility lies with institutions. The architecture of digital platforms also rewards attention, not accuracy. Extreme or emotionally charged claims travel faster. Personal testimony often feels more persuasive than statistical evidence. AI enters this ecosystem with a powerful advantage: it packages complexity into answers that seem tailored, neutral and immediate. That makes it easier to use, but also easier to overtrust.
For journalists, public health officials and medical professionals, the challenge is not simply to warn people away from AI. Blanket rejection would likely fail, not least because the technology does offer real benefits in education and navigation. The harder but more realistic task is to help the public distinguish between low-risk uses and high-risk ones — between asking AI to define a term or organize questions for a doctor, and asking it to determine whether a symptom can be safely ignored.
What a healthier information ecosystem might look like
South Korea’s survey arrives at a moment when many countries are still improvising rules and norms for AI in health settings. The debate often swings between hype and panic: either AI will revolutionize medicine, or it will flood the system with dangerous misinformation. The reality is likely more mundane and more difficult. AI will probably become embedded in health care as an assistive tool, while also creating new risks around misunderstanding, privacy and overreliance.
The most constructive response begins with recognizing that convenience alone cannot be the metric. In medicine, speed and fluency are useful only if they support safe decision-making. That means better public guidance on when AI can be used responsibly, stronger standards for how platforms present medical uncertainty and clearer communication from health institutions themselves.
It also means rebuilding trust the old-fashioned way: through communication that is timely, transparent and humane. If patients increasingly prefer AI because it feels more patient than the people in the system, that is not just a technology problem. It is a warning about how medical institutions, media outlets and public agencies are connecting — or failing to connect — with the people they serve.
There is an opinion embedded in that conclusion, and it should be labeled as such: AI is not the core crisis here. The deeper crisis is that too many people appear to be searching for a trustworthy guide and are finding one in software before they find one in a human relationship. Technology can expose that weakness, accelerate it and allow others to profit from it. But it did not create the entire gap on its own.
The Korea Press Foundation survey does not prove that South Koreans are ready to hand over their health to chatbots. What it does show is that a substantial share of the public can imagine doing so, at least in part, while overwhelming majorities report exposure to inaccurate, exaggerated and contradictory health information. That combination should concern anyone who cares about public health.
For American readers, the lesson is clear. The question is not whether AI belongs in the health information landscape; it already does. The question is whether societies can build enough trust, clarity and accountability around medical communication so that convenience does not become a substitute for care. If they cannot, then the phrase “AI instead of a doctor” will not merely describe a technological trend. It will describe a failure of the information systems that patients rely on when their health is on the line.