AI vs. Doctors: Can Artificial Intelligence Actually Replace Your Physician?
The question of whether artificial intelligence can outperform human doctors is no longer just a topic for science fiction. It is now a real, urgent conversation happening at the highest levels of medicine, technology, and business. According to a report by CNBC, leading executives from the healthcare and biotech sectors gathered at a major global event to share their perspectives on where AI fits in the future of medicine. Their views ranged from enthusiastic advocacy to careful caution, painting a nuanced picture of a technology that holds tremendous promise but also very real risks.
AI Is Already Matching Some Doctors, Says Top CEO
Alex Zhavoronkov, founder and CEO of AI drug discovery company Insilico Medicine, made a bold statement at CNBC's CONVERGE LIVE event in Singapore. He said that people should be using AI far more than they currently do for health-related questions. What makes his claim stand out is the confidence behind it. Zhavoronkov stated that many of the AI models available to consumers today have reached a level of capability that is close to, and sometimes better than, certain doctors. This is not a casual observation. It is a claim made by someone at the forefront of AI-powered medicine.
Basic Health Questions Are Where AI Shines
Zhavoronkov was specific about the kinds of questions where AI can genuinely help. He pointed to everyday health inquiries such as dietary guidance and whether someone should start a diet. His argument is straightforward: if a person can get a reliable, accurate answer to a basic health question from an AI, they save valuable time and reduce the burden on overloaded healthcare systems. In his words, some very basic questions could be answered by an AI physician, which would allow people to save their time with a real doctor for more complex issues that truly require human expertise.
ChatGPT and Amazon Enter the Health Space
The excitement around AI in healthcare is not just talk. Major technology companies have already taken concrete steps in this direction. In January 2026, OpenAI launched ChatGPT Health, a feature that allows users to securely connect their medical records and wellness applications to the AI chatbot. OpenAI was clear that this tool was not intended for diagnosis or treatment, but rather to help users better understand their own health data. In the same month, Amazon rolled out its HealthAI tool for members of its primary care chain, One Medical. This tool is designed to provide personalized advice based on a user's medical records, lab results, and current medications. The entry of these tech giants into healthcare signals that consumer-facing AI health tools are becoming mainstream, not experimental.
Biocon CEO Urges Caution and a Learning Curve
Not everyone at CONVERGE LIVE shared the same level of enthusiasm. Shreehas Tambe, CEO and managing director of biotechnology company Biocon, offered a more measured perspective. He described himself as cautiously optimistic but highlighted a critical concern: the learning curve involved in putting advanced technology into inexperienced hands. Tambe warned that handing an evolved technology platform to someone still getting comfortable with it could produce more erroneous results. Rather than solving problems, poorly used AI health tools could create new ones. His point is not that AI is bad; it is that readiness matters. The technology is only as good as the person using it.
More Challenges Than Benefits If Misused
Tambe went further in his warning at the event. He noted that if the wrong person uses an AI health platform without adequate understanding, the outcome could produce more challenges than benefits. This is a sobering thought, especially in a world where millions of people may soon have access to powerful AI health tools on their smartphones. The gap between what AI can theoretically do and what an average user can practically get out of it remains a serious concern that both companies and regulators will need to address. As some experts have previously discussed, the promise of AI in medicine is real but must be approached with transparency and accountability.
AI Is Cutting Drug Discovery Time by More Than Half
One of the most impressive data points shared at CONVERGE LIVE came directly from Zhavoronkov. He revealed that AI tools are now reducing the time it takes for drugs to reach the developmental candidate stage to just 18 months. Traditionally, this process took more than four years. That is a reduction of more than half, which is extraordinary by any standard. The developmental candidate stage is the point in drug discovery just before human clinical trials begin. If AI can consistently compress this timeline, the downstream effects for patients could be profound: faster access to new treatments, lower costs, and greater innovation in tackling diseases that have long resisted conventional approaches.
Eli Lilly's $2.75 Billion Bet on AI-Developed Drugs
The pharmaceutical industry is putting serious money behind AI-driven drug development. In March 2026, Eli Lilly, one of the world's largest pharmaceutical companies, signed a deal worth $2.75 billion with Insilico Medicine. The goal of this partnership is to bring drugs developed using AI to the global market. This is not a small pilot program or an experimental collaboration. It is a multi-billion-dollar commitment that reflects genuine confidence in what AI can deliver in the drug discovery pipeline. When a company the size of Eli Lilly commits at this scale, it sends a clear signal to the entire industry that AI is no longer a side project in medicine; it is becoming central to it. The ongoing debate around AI competing with doctors is only intensifying as these investments grow larger.
The Human in the Loop Must Remain
Despite the excitement around what AI can do autonomously, Tambe emphasized a principle that many experts in the field agree on: the human in the loop must remain. When it comes to validating AI models used in drug discovery, Tambe stressed that these models need to be reviewed by people who understand the science deeply. His view is that experts must be able to push boundaries and specify the solutions they want generative AI models to develop. AI, in this framing, is a powerful tool guided by human knowledge, not a replacement for it. The best outcomes will come from a partnership between machine intelligence and human judgment, not from replacing one with the other.
AI Already Present in Your Doctor's Office
It is worth noting that AI in healthcare is not a future concept. It is already here, quietly integrated into the systems many patients interact with every day. AI is currently being used to take clinical notes during doctor's appointments, help schedule patient visits, and analyze medical images for signs of disease. These applications are largely invisible to patients, but they are already reducing the administrative burden on healthcare providers and improving accuracy in diagnosis support. The question is no longer whether AI belongs in healthcare. The question is how far it should go and what safeguards need to be in place when it does.
The Bigger Picture: Optimism Balanced With Responsibility
Stepping back, the conversation at CONVERGE LIVE reflects a broader tension in the world of AI and healthcare. On one side, there is real and well-documented progress. AI is matching doctors on certain diagnostic tasks, cutting drug development timelines dramatically, and opening up access to health information for people who previously had none. On the other side, there are legitimate concerns about misuse, misinformation, and the risks of over-reliance on technology that is still evolving. The right path forward is neither to blindly embrace AI nor to fear it. It is to build the systems, education, and guardrails that allow the benefits to reach people while minimizing the harm.
What This Means for Everyday People
For the average person, the takeaway from this debate is practical. AI health tools like ChatGPT Health and Amazon's HealthAI can be genuinely useful for understanding your own health data, asking basic questions, and preparing for doctor's appointments. However, they are not substitutes for professional medical advice when the stakes are high. A doctor brings context, clinical judgment, empathy, and accountability that no AI model currently replicates in full. The smart approach is to use AI as a complement to medical care, not a replacement for it. Think of it as having a knowledgeable friend available at any hour who can help you understand your lab results, but who would also tell you to go see a real doctor when something serious comes up.
Final Thoughts: A Defining Moment for Medicine
The discussion sparked by Zhavoronkov and Tambe at CNBC's CONVERGE LIVE event is part of a much larger, ongoing conversation that will shape the future of healthcare for decades to come. AI has demonstrated it can assist, accelerate, and sometimes even outperform human capabilities in specific medical tasks. But medicine is not just a technical exercise. It involves trust, ethics, communication, and compassion. The most powerful version of future healthcare is likely one where AI handles the data, the repetition, and the analysis, while human doctors continue to lead with the qualities that machines cannot yet replicate. That balance, difficult as it is to achieve, is what the medical community is now working toward.
Source & AI Information: External links in this article are provided for informational reference to authoritative sources. This content was drafted with the assistance of Artificial Intelligence tools to ensure comprehensive coverage, and subsequently reviewed by a human editor prior to publication.