
AI Maturity

A Son Built an AI Workflow to Manage His Mother's Cancer. It Worked.

7 min read · Published April 5, 2026 · Updated April 5, 2026

By CogLab Editorial Team · Reviewed by Knyckolas Sutherland

Pratik Desai published a long post on Saturday describing the AI workflow he built to help manage his mother's Stage 4 cancer treatment. He is a software engineer. She is one of tens of thousands of patients trying to navigate a care team of oncologists, nurses, insurance coordinators, and specialists while the disease advances faster than any one person can process the information.

The system he built is simple. He uploads every medical record, scan report, lab result, and conversation note to a private NotebookLM instance. He asks Claude specific questions about drug interactions, treatment side effects, and whether the next recommended step contradicts anything in the earlier record. When a new specialist makes a recommendation, he has the AI cross-check it against the three months of prior notes in under ten minutes.
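Desai's post describes the workflow but not its code. As a minimal sketch of what the cross-check step might look like, the snippet below assembles prior notes into a chronological prompt for a chat model. The `CareNote` schema, field names, and prompt wording are all hypothetical; the actual system uses NotebookLM and Claude directly, not a custom script.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record type; the post does not specify a schema.
@dataclass
class CareNote:
    noted_on: date
    source: str   # e.g. "oncologist", "lab", "insurance"
    text: str

def build_crosscheck_prompt(notes: list[CareNote], new_recommendation: str) -> str:
    """Lay out prior notes oldest-first and ask the model to flag
    contradictions with a new specialist recommendation."""
    timeline = "\n".join(
        f"[{n.noted_on.isoformat()}] ({n.source}) {n.text}"
        for n in sorted(notes, key=lambda n: n.noted_on)
    )
    return (
        "You are helping a family coordinate cancer care. Below is the prior "
        "record, oldest first, followed by a new recommendation.\n\n"
        f"PRIOR RECORD:\n{timeline}\n\n"
        f"NEW RECOMMENDATION:\n{new_recommendation}\n\n"
        "List any contradictions with the prior record (drug interactions, "
        "conflicting instructions, repeated tests), citing the dated note for "
        "each. Do not diagnose or prescribe."
    )
```

The resulting string would be sent to whatever chat model the family trusts; the value is in forcing the full dated record and the new recommendation into one context.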

There is nothing clinically novel in any of this. The value is in the coordination. Hospitals do not share records cleanly. Specialists often do not know what the other specialists have said. Patients and their families end up holding the thread, and most people do not have the time or training to do that well. Desai's workflow makes the coordination step faster and more reliable.

He is careful in the post about what the AI does and does not do. It does not diagnose. It does not prescribe. It does not replace a doctor. What it does is catch contradictions, surface questions worth asking at the next appointment, and keep the full history accessible to anyone in the care circle who needs a quick answer. That is a real improvement in quality of care, and it was built by one person on a subscription.

Why aren't we talking about this as the central AI story? Because it is not about a new model. It is about a human using available tools to solve a painful personal problem. Those stories are the ones that actually change how AI gets adopted. Policy papers and benchmark scores shape industry opinion. Personal stories like this shape whether your uncle tries Claude next month.

For operators, the Desai workflow has a practical lesson baked in. The most valuable AI workflows are the ones built by the person who owns the problem, not the ones built by a consultant who visited for two weeks. Any AI project where the owner of the pain is far from the builder of the tool is going to miss the real opportunity. The person navigating the problem every day is the one who knows where the friction really lives.

This applies directly inside organizations. The team lead whose unit spends hours a week reconciling two reporting systems is the one who knows what an AI-assisted workflow should actually do in that context. The senior analyst who corrects the same kind of mistake in every junior's draft is the one who can articulate what "good" looks like. When AI tooling lands on those people's desks without their involvement in the design, it almost always misses the obvious win.

There is a harder lesson underneath too. Some of the most important AI workflows right now are not being built by product teams or enterprise buyers. They are being built by individual users who happened to have the technical skill and the personal stake to figure it out. That is how every major wave of software adoption has worked. The early adopters build for themselves. Product teams follow once the patterns are visible.

Desai's post includes a line that is worth copying. He writes that the hardest part of the system was not the AI. It was building the trust to use it for something as serious as his mother's care. That trust came from repeated tests where he compared what the AI said against what the doctors said and confirmed they matched. Trust, not capability, was the bottleneck.

If you lead a team working on AI for any regulated domain, that is the story you should carry in your head. The models are already capable enough for most of the workflows your users want help with. The reason they do not yet trust the output is that they have never had a chance to verify it on their own data. Build that verification step into every rollout, because without it the best model in the world is a novelty. With it, even a good enough model becomes a tool your team cannot imagine living without.
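What would that verification step look like in practice? Below is a minimal sketch (entirely my construction, not from Desai's post) of a rollout gate: log side-by-side checks of AI answers against a trusted source, and only call the tool ready once agreement is demonstrated. The exact-string comparison is a crude stand-in; in reality a human judges whether the answers match, as Desai did against his mother's doctors.

```python
from dataclasses import dataclass, field

@dataclass
class VerificationLog:
    """Track checks of AI answers against a trusted source before rollout."""
    checks: list[tuple[str, bool]] = field(default_factory=list)

    def record(self, question: str, ai_answer: str, trusted_answer: str) -> bool:
        # Crude proxy: in practice a human judges whether the answers agree.
        agreed = ai_answer.strip().lower() == trusted_answer.strip().lower()
        self.checks.append((question, agreed))
        return agreed

    def agreement_rate(self) -> float:
        if not self.checks:
            return 0.0
        return sum(agreed for _, agreed in self.checks) / len(self.checks)

    def ready_for_rollout(self, threshold: float = 0.95, min_checks: int = 20) -> bool:
        # Arbitrary illustrative thresholds; tune to the domain's risk level.
        return len(self.checks) >= min_checks and self.agreement_rate() >= threshold
```

The specific thresholds matter less than the ritual: users who have watched the tool agree with a source they already trust, on their own data, are the ones who keep using it.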

Frequently Asked Questions

Is this legally and clinically safe for others to replicate?

It depends on where you live and how you use it. Desai's system does not replace a doctor. It is a coordination layer for the patient and family. The legal questions sit around privacy and data handling, not around medical practice. Keep records private, do not present AI output to clinicians as medical advice, and verify anything material with a provider.

Will AI meaningfully change how patients navigate complex care?

Probably yes, for patients who have access to the tools and the skill to set them up. The benefits cluster in coordination work rather than diagnosis. Over time, tools built explicitly for patient families will make this easier without requiring engineering skill.

What is the takeaway for a team lead at a company?

The most valuable AI workflows are built by the person who owns the problem. Put more tools in the hands of the people doing the work, not just the central AI team. Trust gets built through verification, so design every rollout with a way for users to double-check the output against a source they already trust.
