Reading time: 8 minutes
Dr. Maanya Rajasree Katta
Introduction
When a doctor says “you have cancer,” a stack of choices follows, each with trade-offs. Start with surgery or chemotherapy? Add immunotherapy now or later? Combine treatments or use them in sequence? For decades, oncologists have balanced guidelines, experience, and diagnostic results to steer these calls. Artificial intelligence (AI) is adding another layer of analysis to this work, with the aim of making decisions more complete, faster to assemble, and more consistent.
Why are these decisions hard?
No two cancers are exactly alike. Two people with the same cancer type can respond differently to the same drug, and the reasons run deep: one person’s tumor might have a mutation that makes it resistant, while the other’s doesn’t. Their immune systems might patrol for cancer cells with different levels of vigilance. Even the way their bodies break down the drug, governed by genetic differences in liver enzymes, can mean the difference between effective treatment and harmful side effects. Today’s workups generate large volumes of data across genomics, imaging, pathology, and years of clinical notes and labs. Even excellent clinicians cannot track patterns across millions of cases. Machine learning systems can. By finding relationships in large datasets, AI supports more personalized, data-driven care, often called precision medicine.
What does AI already do?
AI has moved from research labs into real clinical workflows, though the rollout is far from universal. Some academic cancer centers now use AI tools routinely for imaging analysis and molecular profiling, while many community practices are just beginning to explore these systems. The FDA has cleared dozens of AI-based tools for oncology use, mostly focused on detection and measurement rather than treatment decisions. Here’s what’s happening on the ground:
1. Think of AI as a research assistant that has read millions of records, remembers them precisely, and can match a new case to similar ones to estimate likely outcomes. It does not replace the oncologist. It informs the oncologist.
2. Some models link tumor molecular profiles to drug sensitivity. LORIS, or the Logistic Regression-Based Immunotherapy-Response Score, developed at the National Institutes of Health (NIH), uses six variables from genetic testing and clinical data to estimate the likelihood that a patient will respond to immune checkpoint inhibitors. It is publicly available for clinicians. (A toy sketch of this style of scoring appears after this list.)
3. Deep learning models trained on data from patients with advanced non-small cell lung cancer can predict response to immune checkpoint inhibitors. This matters because these drugs can be transformative for some patients, yet carry meaningful toxicity and cost for those who may not benefit.
4. Algorithms support pathologists and radiologists by classifying cancer types, stages, and, increasingly, inferring molecular features that guide therapy. Cleared tools focus on detection and measurement to improve consistency rather than make independent decisions [1].
5. Beyond choosing a regimen, models can flag patients at higher risk for severe toxicities or complications, enabling earlier supportive care and closer monitoring. These decisions become even more complex for patients from underrepresented populations, including women, transgender individuals, and those with lower socioeconomic status, who have historically faced barriers to equitable care. AI’s potential to deliver personalized, data-driven recommendations could help reduce these disparities, but only if the systems are trained on diverse datasets that actually represent the populations they serve.
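To make the scoring idea in item 2 concrete: a logistic-regression-style score combines a handful of inputs into one probability. The sketch below shows that form only; the feature names, weights, and numbers are invented for illustration and are not the published LORIS model or its coefficients.

```python
import math

# Hypothetical coefficients, invented for illustration only (NOT the LORIS model).
COEFFICIENTS = {
    "tumor_mutational_burden": 0.8,
    "albumin": 0.5,
    "neutrophil_lymphocyte_ratio": -0.6,
    "prior_systemic_therapy": -0.4,
    "age": -0.1,
    "cancer_type_weight": 0.3,
}
INTERCEPT = -1.0  # placeholder value

def response_probability(features: dict) -> float:
    """Combine six standardized inputs into an estimated probability of response."""
    linear_score = INTERCEPT + sum(
        COEFFICIENTS[name] * value for name, value in features.items()
    )
    return 1.0 / (1.0 + math.exp(-linear_score))  # logistic link

# Illustrative patient with standardized feature values (also invented).
patient = {
    "tumor_mutational_burden": 1.2,
    "albumin": -0.5,
    "neutrophil_lymphocyte_ratio": 0.9,
    "prior_systemic_therapy": 1.0,
    "age": 0.3,
    "cancer_type_weight": 0.0,
}
print(f"Estimated probability of response: {response_probability(patient):.2f}")
```

The appeal of this simple form is interpretability: a clinician can see which inputs pushed an estimate up or down rather than receiving an unexplained number.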
What has changed recently?
1. AI that thinks in sequences, not snapshots:
The shift isn’t just faster algorithms, but also smarter workflows. In 2025, a Nature Cancer paper described an AI agent that doesn’t just analyze data; it decides which data to analyze. Radiology? Pathology slides? Published guidelines? The agent chose and used the right tools in 87.5% of required tool calls across 20 test cases, reaching correct clinical conclusions 91% of the time [2].
This matters because oncology isn’t a single decision; it’s a sequence. Read the scan, interpret the biopsy, check the patient’s genetic profile, match it against treatment databases, and flag contraindications. AI systems are learning to walk through that sequence, not just answer isolated questions.
2. From the lab to the clinic:
Moffitt Cancer Center showed this in practice: AI-assisted treatment selection that keeps physicians central, providing probability estimates and rationales the oncologist can actually discuss with patients [3]. Trial matching tools are following the same pattern of broader access paired with expert oversight [4].
The breakthrough isn’t just what AI can do. It’s that AI is learning to do what oncologists do, thinking in steps, question by question, and building toward an answer.
Why does this matter clinically?
After diagnosis, the tumor is profiled by genomic sequencing. Imaging, labs, and clinical notes are collated. An AI system ingests these inputs and compares them with outcomes from thousands of similar cases, returning probabilities for the most relevant options. The oncologist reviews the outputs with the patient, aligns the choice with goals and logistics, and discusses expected benefits and side effects. In the near term, the value is completeness, speed, and consistency. In the medium term, expect tool-using agents that read scans, slides, and knowledge sources together to draft options for tumor boards, with clear guardrails, site-level validation, and responsibility that remains with clinicians.
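One crude way to picture the “compares them with outcomes from thousands of similar cases” step is a nearest-neighbor lookup: find the most similar past patients and summarize how each treatment option fared among them. The tiny cohort and features below are invented; real systems use far richer data, calibrated models, and formal validation.

```python
from collections import defaultdict
import numpy as np

# Hypothetical historical cohort: (feature_vector, treatment, responded?).
history = [
    (np.array([0.8, 1.2, 0.1]), "chemo_plus_immunotherapy", True),
    (np.array([0.7, 1.1, 0.2]), "chemo_plus_immunotherapy", False),
    (np.array([0.9, 1.3, 0.0]), "chemo_plus_immunotherapy", True),
    (np.array([0.1, 0.2, 0.9]), "chemotherapy_alone", False),
    (np.array([0.2, 0.1, 1.0]), "immunotherapy_alone", True),
]

def option_probabilities(patient: np.ndarray, k: int = 3) -> dict:
    """Summarize response rates per treatment among the k most similar past cases."""
    ranked = sorted(history, key=lambda case: np.linalg.norm(case[0] - patient))
    totals, responses = defaultdict(int), defaultdict(int)
    for _, treatment, responded in ranked[:k]:
        totals[treatment] += 1
        responses[treatment] += int(responded)
    return {t: responses[t] / totals[t] for t in totals}

new_patient = np.array([0.75, 1.15, 0.15])
print(option_probabilities(new_patient))  # e.g. {'chemo_plus_immunotherapy': 0.67}
```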
Making AI trustworthy
For AI to work safely in oncology, four categories of safeguards must be in place: earning clinical trust through transparency, ensuring performance across diverse populations, meeting regulatory standards, and building secure technical infrastructure.
1. Trust is earned, not assumed –
Prospective studies in adaptive and precision workflows show clinicians accept AI suggestions when they add value and ignore them when they do not [5]. Transparency, case-specific confidence, and simple audit trails support trust and adoption.
2. What works in Boston may fail in Baltimore –
Performance can drift between sites and populations. Reviews emphasize the need for local testing, continuous monitoring, and dataset curation to reduce bias and improve generalizability before broad rollout. An algorithm trained predominantly on one demographic may miss patterns in another.
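One concrete form of that local testing is simply to recompute a model’s performance separately for each site or demographic group on held-out local data and see where it degrades. A minimal sketch, with synthetic data standing in for real records:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def audit_by_group(y_true, y_score, groups, min_n=50):
    """Report AUROC per group, flagging groups with too few cases to judge."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        if mask.sum() < min_n or len(np.unique(y_true[mask])) < 2:
            report[g] = "insufficient data"
        else:
            report[g] = round(roc_auc_score(y_true[mask], y_score[mask]), 3)
    return report

# Synthetic example: the model discriminates well at "site_A" but not at "site_B".
n = 400
groups = np.array(["site_A"] * 200 + ["site_B"] * 200)
y_true = rng.integers(0, 2, size=n)
y_score = np.where(
    groups == "site_A",
    y_true * 0.6 + rng.normal(0.2, 0.2, n),  # scores that track outcomes
    rng.normal(0.5, 0.2, n),                 # near-random scores
)
print(audit_by_group(y_true, y_score, groups))
```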
3. The FDA is watching, and that’s a good thing –
Many decision tools are Software as a Medical Device (SaMD). The FDA’s approach favors predefined change protocols, post-market monitoring, and transparency about intended use, data, and known limits [6]. Locked models are simpler to clear. Adaptive systems must document how updates will be controlled and verified over time.
4. Integration isn’t straightforward –
Integrating reports and images into one workflow is hard and site-specific. Hospitals will need secure pipelines, access controls, and privacy safeguards that work within electronic health records. The technical infrastructure matters as much as the algorithm.
Limits and reality checks
The progress is real, but so are the gaps between promise and proof. Here’s what AI in oncology cannot yet do, and what we need to watch carefully:
1. The evidence gap: Few prospective outcome studies tie AI use to better survival or quality of life. Several are ongoing, and that bar is essential. Currently, we know that AI can analyze data and suggest options. We don’t yet know if patients consistently live longer or better because of it.
2. The accountability gap: Clinicians remain responsible for final decisions. This isn’t a limitation; it’s a feature. AI is a tool, not a teammate with a medical license.
3. The equity gap: If training data underrepresent certain races, ethnicities, ages, or socioeconomic groups, predictions may be less accurate for those patients, introducing bias. Diverse datasets and routine auditing are required.
4. The scope gap: Many cleared products focus on standardizing detection and measurement rather than making treatment decisions, which is appropriate for now.
Where is this heading?
1. Smarter assistants, not autonomous decision-makers
Expect steady growth in decision support that drafts more complete plans. Tool-using agents can chain steps: segment a tumor, compute size change, read a molecular report, pull guideline citations, and assemble candidate regimens with contraindication checks. Early studies show higher completeness and fewer omissions than baseline models, but external validation is still needed, and that validation must come from multiple directions. Hospital tumor boards need to test whether AI recommendations align with their patient populations. Regulatory bodies like the FDA will evaluate safety and efficacy claims. Academic researchers must conduct independent performance assessments across diverse sites. Industry developers bear responsibility for transparent reporting of training data and model limitations.
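To picture the chained workflow described above (measure the tumor, read the molecular report, draft regimens, check contraindications), here is a deliberately simplified sketch. Every function is a hypothetical stub with invented inputs; a real agent would call validated imaging, genomics, and guideline services, and its output would only ever be a draft for clinician review.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CaseSummary:
    """Draft output for a tumor board; nothing here is a final decision."""
    tumor_change_pct: Optional[float] = None
    actionable_variants: list = field(default_factory=list)
    candidate_regimens: list = field(default_factory=list)
    warnings: list = field(default_factory=list)

def measure_tumor_change(prior_mm: float, current_mm: float) -> float:
    """Step 1: percent change in lesion size between two scans."""
    return 100.0 * (current_mm - prior_mm) / prior_mm

def read_molecular_report(report: dict) -> list:
    """Step 2: keep only variants the (hypothetical) report marks as actionable."""
    return [v for v in report.get("variants", []) if report.get("actionable", {}).get(v)]

def draft_regimens(variants: list, lookup: dict) -> list:
    """Step 3: map actionable variants to candidate regimens via a toy lookup table."""
    return sorted({regimen for v in variants for regimen in lookup.get(v, [])})

def check_contraindications(regimens: list, allergies: set) -> list:
    """Step 4: flag obvious conflicts; real checks are far more extensive."""
    return [f"{r}: documented allergy" for r in regimens if r in allergies]

# Invented inputs for illustration.
molecular_report = {"variants": ["EGFR_L858R", "TP53_R273H"],
                    "actionable": {"EGFR_L858R": True, "TP53_R273H": False}}
regimen_lookup = {"EGFR_L858R": ["osimertinib"]}

summary = CaseSummary()
summary.tumor_change_pct = measure_tumor_change(prior_mm=32.0, current_mm=24.0)
summary.actionable_variants = read_molecular_report(molecular_report)
summary.candidate_regimens = draft_regimens(summary.actionable_variants, regimen_lookup)
summary.warnings = check_contraindications(summary.candidate_regimens, {"osimertinib"})
print(summary)  # a drafted summary, reviewed and signed off by clinicians
```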
2. Patients as informed participants
If AI contributed to a recommendation, ask which features drove the model, how confident the estimate is, and how the output aligns with your priorities. Good tools should enable clear conversations, not obscure them. At the same time, your medical data powers these systems. Strong data privacy protections are non-negotiable: de-identification before training, strict access controls, transparent consent about how your information will be used, and the right to opt out. AI can only be as trustworthy as the privacy framework surrounding it.
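As one small illustration of the de-identification step mentioned above, the sketch below strips direct identifiers from a record before it could be used for model training. The field names are invented, and real de-identification follows formal standards (for example, HIPAA Safe Harbor) and covers far more than this.

```python
# Direct identifiers to drop before a record enters a training set (illustrative list).
DIRECT_IDENTIFIERS = {"name", "mrn", "date_of_birth", "address", "phone"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

raw = {"name": "Jane Doe", "mrn": "0012345", "age": 62,
       "stage": "III", "tmb": 14.2, "response": "partial"}
print(deidentify(raw))  # only clinical fields remain for modeling
```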
What does this all mean?
The bottom line: AI will not replace oncologists. It will help teams be more complete and more consistent, especially in data-dense clinics. The likely near-term role for AI in oncology is decision support: tools that draft options with citations, with a tumor board or treating oncologist reviewing and signing off. The bar is prospective validation, clear regulation, and workflows that keep humans in charge.
Header Image Source: Vecteezy.com (under Creative Commons license)
Edited by Lubna Najm
References
1. Matthews GA, McGenity C, Bansal D, Treanor D, et al. Public evidence on AI products for digital pathology. npj Digital Medicine. 2024;7:300. https://www.nature.com/articles/s41746-024-01294-3
2. Ferber D, El Nahhas OSM, Wölflein G, Wiest IC, Clusmann J, Leßmann M-E, et al. Development and validation of an autonomous artificial intelligence agent for clinical decision-making in oncology. Nature Cancer. 2025;6:1337–1349. https://www.nature.com/articles/s43018-025-00991-6
3. Niraula D, Cuneo KC, Dinov ID, Gonzalez BD, Jamaluddin JB, Jin JJ, et al. Intricacies of human–AI interaction in dynamic decision-making for precision oncology. Nature Communications. 2025;16:1138. https://doi.org/10.1038/s41467-024-55259-x
4. Gueguen L, Olgiati L, Brutti-Mairesse C, Sans A, Le Texier V, Verlingue L, et al. A prospective pragmatic evaluation of automatic trial matching tools in a molecular tumor board. npj Precision Oncology. 2025;9:28. https://www.nature.com/articles/s41698-025-00806-y
5. Parikh RB, et al. Effect of human-AI teams on oncology prescreening: Final analysis of a randomized trial. Journal of Clinical Oncology. 2025;43(16_suppl):1508. https://ascopubs.org/doi/10.1200/JCO.2025.43.16_suppl.1508
6. U.S. Food and Drug Administration. Artificial Intelligence in Software as a Medical Device. Content current as of 2025-03-25. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-software-medical-device; and FDA. Proposed Regulatory Framework for Modifications to AI/ML-Based SaMD: Discussion Paper and Request for Feedback. 2019. https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf
