Most engineering leaders think about AI wrong.
When they see a new model, they ask: "How fast can we ship this?" But the interesting question is: "Where will this break?"
Troy figured this out early. He's the CTO of PicnicHealth, and they've built something remarkable: an 8-billion-parameter model that beats much larger frontier models at medical tasks. Why? Because Troy understood exactly where large, general models fall short, and he engineered around those constraints.
His company built what might be the world's best LLM for medical records, working with 7 of the 10 largest pharma companies and collecting 350 million clinician annotations over a decade. But Troy's most valuable insight isn't about AI's capabilities. It's about the immovable constraints that determine whether your AI implementation succeeds or becomes expensive theater.
The Filing Cabinet Problem
Troy grew up around medicine. Both his parents were doctors. As a kid, he worked in their offices and was "horrified by walls of filing cabinets."
When the government spent $40 billion digitizing medical records, Troy thought: finally. Software will fix this mess.
It didn't. Most EMR systems made doctors less efficient, not more. This taught Troy something important: you can't just layer technology onto broken processes. The process has to change too.
This insight shaped everything that came after.
🎧 Subscribe and Listen Now →
When Leaders Need to Code Again
Here's what's interesting about leadership during technological shifts: engineering leaders may need to get more technical, not less.
Troy started PicnicHealth in 2014, writing code all day. As the company grew, he did what every engineering leader does: stepped back from implementation to focus on team building. "The highest leverage way for me to work is less building everything directly and more building out the team."
But when LLMs emerged, Troy had to reverse course. "The ability to understand where opportunity is requires more direct hands-on experience," he told me.
Why? Because understanding real constraints requires hands-on experience. Where does fine-tuning actually help? Which domains are narrow enough for reliable automation? You can't evaluate these opportunities from team status reports, because the technology is changing too fast.
Troy recognized that during periods of rapid technological change, engineering leaders need deeper technical fluency to make good decisions. He had to balance staying close enough to the technology to spot constraints while still enabling his teams to do their best work.
This isn't micromanaging. It's strategic intelligence gathering about what's actually possible.
The Data Moat
PicnicHealth's advantage isn't the size of their models. It's their data.
They have 350 million annotations from real doctors using their system over a decade. Every time a doctor corrects the AI, the model gets better. "That kind of medical record data is not in the public training corpus," Troy explains.
This creates something interesting: a feedback loop that gets stronger over time. The more doctors use the system, the better it gets. The better it gets, the more doctors want to use it.
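To make the mechanics concrete, here's a minimal sketch of what a correction-capture loop can look like. Everything in it is hypothetical (the Extraction type, the JSONL store, the field names); it illustrates the flywheel pattern, not PicnicHealth's actual pipeline.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Extraction:
    """One structured fact the model pulled from a medical record (hypothetical schema)."""
    record_id: str
    field: str                           # e.g. "diagnosis_date"
    model_value: str                     # what the model proposed
    clinician_value: str | None = None   # set when a doctor corrects it

def capture_correction(extraction: Extraction, corrected_value: str, dataset_path: str) -> None:
    """Every clinician correction becomes a labeled training example.

    The correction is appended to a JSONL file that the next fine-tuning run
    reads, so the dataset grows as a side effect of normal review work rather
    than a separate labeling effort.
    """
    extraction.clinician_value = corrected_value
    with open(dataset_path, "a") as f:
        f.write(json.dumps(asdict(extraction)) + "\n")

# Usage: a reviewer fixes one field, and the flywheel gains one more example.
ex = Extraction(record_id="rec-123", field="diagnosis_date", model_value="2021-03-04")
capture_correction(ex, corrected_value="2021-03-14", dataset_path="corrections.jsonl")
```

The key property is that the dataset grows as a byproduct of work the doctors are already doing, which is why the loop compounds instead of stalling.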
Most AI companies focus on building more powerful models. PicnicHealth focused on building better data collection systems.
The Application Layer Surprise
In 2022, everyone thought AI value would flow primarily to model creators—OpenAI, Anthropic, Google. The reasoning seemed sound: models are the hardest part to build, so they should capture the most value.
This turned out to be incomplete.
"I'm very glad that we live in a world where a lot of value is delivered and captured by the application layer," Troy says. Here's why: foundation models are commoditizing, but domain expertise isn't.
A general-purpose model might have broad knowledge, but it doesn't know the specific workflows of clinical trials, or how doctors actually review patient records, or which edge cases matter most in your domain.
This is where constraints become advantages. By focusing on medical records exclusively, PicnicHealth could optimize for things that matter in healthcare but nowhere else.
The Narrow Domain Strategy
Most AI implementations fail because they try to solve everything at once. PicnicHealth builds AI agents that operate within their integrated clinical trial system. This sounds limiting, but it's actually powerful.
When you control the entire workflow—from data ingestion to final output—you can build in validation loops, human oversight, and error correction at every step. You can define clear success metrics and create tight feedback cycles.
General-purpose AI tools can't do this. They have to work for everyone, which means they're optimized for no one.
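As a rough illustration of what owning the whole workflow buys you, here's a sketch of a narrow pipeline with a validation gate and a human-review fallback. The function names, domain rules, and confidence threshold are invented for illustration, not taken from PicnicHealth's system.

```python
# A minimal sketch of a narrow, fully-owned workflow: ingest -> extract -> validate -> review.
# All names, rules, and thresholds are hypothetical.

def extract_fields(raw_record: str) -> dict:
    # Stand-in for the model call; pretend it returns a structured extraction.
    return {"diagnosis": "hypertension", "diagnosis_date": "2021-03-14", "confidence": 0.82}

def validate(extraction: dict) -> list[str]:
    # Domain rules you can only write when the domain is narrow.
    issues = []
    if not extraction.get("diagnosis_date", "").startswith("20"):
        issues.append("implausible date")
    return issues

def human_review(extraction: dict, issues: list[str]) -> dict:
    # In a real system this queues the record for a clinician; here we just flag it.
    return {**extraction, "needs_review": True, "issues": issues}

def run_pipeline(raw_record: str) -> dict:
    extraction = extract_fields(raw_record)
    issues = validate(extraction)
    if issues or extraction["confidence"] < 0.9:       # explicit, measurable acceptance bar
        extraction = human_review(extraction, issues)  # oversight built into the workflow
    return extraction

print(run_pipeline("raw clinical note text"))
```

Because every step is yours, the acceptance bar and the review queue are explicit in code, which is exactly what a general-purpose tool can't give you.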
Bottlenecks Don't Disappear
Here's the thing about technological progress: it doesn't eliminate bottlenecks, it just moves them.
AI accelerates drug discovery, but regulatory approval still takes 7-10 years. "Even if there's way more potential assets," Troy observes, "you're still 10 years away from people actually being able to use that."
This pattern repeats everywhere. Technical capabilities advance at an amazing pace, but distribution into real industries and workflows takes much longer. It requires changing human behavior, not just building better software.
The leadership lesson: don't assume AI will solve your bottlenecks. Assume it will create new ones. Your job is figuring out where.
What This Means for You
If you're building with AI, Troy's approach offers a different path:
First, understand your constraints before you optimize for capabilities. Most processes have hidden bottlenecks that no amount of AI will fix. Find those first.
Second, build data flywheels, not just models. Look for workflows where user corrections create proprietary datasets. This is how you build moats in a world of commoditized models.
Third, go narrow before you go wide. Start with controlled environments where you can measure success precisely and iterate quickly (see the sketch below). Reliable automation in a narrow domain beats unreliable automation everywhere.
Fourth, during technological shifts, technical leaders need to stay technical. You can't evaluate AI opportunities from conference rooms. You need to understand the constraints firsthand.
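On the third point, here's a sketch of what "measure success precisely" can look like for a narrow extraction task: field-level accuracy against a small, hand-checked gold set. The field names and data shapes are placeholders for whatever your domain requires.

```python
# Field-level exact-match accuracy against a hand-checked gold set,
# so each iteration has a concrete score to beat. Field names are hypothetical.

def field_accuracy(predictions: list[dict], gold: list[dict], fields: list[str]) -> dict:
    scores = {}
    for field in fields:
        correct = sum(p.get(field) == g.get(field) for p, g in zip(predictions, gold))
        scores[field] = correct / len(gold)
    return scores

gold = [{"diagnosis": "hypertension", "diagnosis_date": "2021-03-14"}]
preds = [{"diagnosis": "hypertension", "diagnosis_date": "2021-03-04"}]
print(field_accuracy(preds, gold, ["diagnosis", "diagnosis_date"]))
# {'diagnosis': 1.0, 'diagnosis_date': 0.0}
```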
The question for your next AI decision: are you solving a real constraint, or just adding sophisticated automation to a broken process?
The difference determines whether you build a moat or just an expensive feature.
A Note About Maestro AI
Troy described a challenge most engineering leaders face: as you grow from writing code to leading teams, you lose visibility into what's actually happening. With work scattered across Slack, Jira, GitHub, and more, it becomes impossible to see where time goes or what's blocking progress.
Maestro AI solves this with daily insights that show where your team's energy actually goes, so you can spot problems before they compound.
If you're tired of guessing what's really happening with your team, visit getmaestro.ai or schedule a chat with us here: https://cal.com/team/maestro-ai/chat-with-maestro