Make It Agentic, STAT!
Why We Should Consider Skills Over Buzzwords
I spent the last week building AI agents on no-code platforms to understand what companies actually mean when they ask for “agent-building experience.” Then I went down a bit of a thought rabbit hole wondering where that kind of question comes from.
I have a suspicion that most of them don’t know what they’re asking for. To be clear, no shade to them: I think a lot of managers and people in decision-making spaces are under pressure to “make it AI” and don’t really know where to start. I saw a post the other day making the same observation, so I know I’m not the only person noticing the confusion about how to go about this. But these are decisions with consequences.
The Variability Problem
There are many ways to build an agent, from no-code platforms to some-code platforms, to direct API integration or custom model training, and there are different levels of complexity. An agent just means an LLM connects to an external service (like your email or calendar) and does something without you prompting every action. It could be as simple as an LLM drafting email replies or as complicated as a team of agents working together and learning from one another to solve complex problems.
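That simplest case can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: `call_llm` represents whatever model API you use, and `draft_reply` stands in for a real email integration.

```python
# Minimal agent loop sketch: the LLM decides on an action, and the
# agent executes it against an external service without the user
# prompting each step. All names are hypothetical, not a real API.

def call_llm(prompt: str) -> str:
    """Stub for a real model call; returns a canned decision."""
    if "unread email" in prompt:
        return "ACTION: draft_reply | Thanks, I'll get back to you soon."
    return "ACTION: none"

def draft_reply(body: str) -> str:
    """Stub 'tool' standing in for an email-service integration."""
    return f"Draft saved: {body}"

def run_agent(observation: str) -> str:
    """One turn of the loop: observe -> ask the LLM -> act."""
    decision = call_llm(f"You manage my inbox. Observation: {observation}")
    if decision.startswith("ACTION: draft_reply"):
        _, body = decision.split("|", 1)
        return draft_reply(body.strip())
    return "No action taken."

print(run_agent("1 unread email from a client asking about pricing."))
```

The whole spectrum of “agent” complexity is variations on this loop: more tools, more turns, more agents talking to each other.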
To illustrate how vague this ask is, let’s apply it to a different industry. Let’s say a public housing agency was looking for someone with experience in public housing programs and tenant safety compliance, so that they understand who needs housing and how to ensure that housing is safe and meets safety codes. They also wanted them to have experience as a certified home inspector, to ensure they know how to identify whether buildings meet safety standards and can identify risks. Then, they ALSO want someone with “home building experience”. This could mean anything: architect, contractor, framer, roofer, electrician, plumber, DIY renovator, someone who did an IKEA bookshelf hack, someone who built a treehouse with their kid.
There are two points here.
One person should not do all of this.
What skills are we actually seeking when we ask for “home building experience”?
“Agent-building experience” could mean someone who chatted with a no-code platform for an afternoon, or someone who architected multi-agent systems with compiled, auditable code. These are fundamentally different skill sets with different risk profiles and failure modes.
I Made An Assumption
When I was asked about my agent-building experience in the interview, I didn’t have an answer. I assumed they needed an AI developer, which I am not. So I figured what they actually wanted was familiarity with agents: how they work, what the risks are, and why those things matter in the role I had actually applied for.
But after the interview, I questioned my assumption. Do I know they need a developer or are no-code platforms sufficient?
I used Lindy.ai to explore my assumptions. I built a simple email agent for my tutoring business, then tried to add two more workflows just to see what would happen. I immediately knew this platform would not be appropriate for healthcare because the workflow was not visible, which means it was not auditable.
As I added workflows 3 and 4, functionality quickly hit a context window limit, not for my conversation but for the agent’s workflow itself. If workflows live in conversational memory rather than compiled code, adding complexity doesn’t expand functionality; it breaks existing functions. Iterating doesn’t improve the outputs, it leads to workflow degradation.
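Here is a toy model of that failure, assuming (I can only infer this from the outside, not from Lindy’s actual internals) that workflow instructions live in a bounded context buffer rather than in code:

```python
# Toy illustration, NOT any platform's real architecture: if workflow
# instructions share one bounded "context" buffer, adding a new
# workflow can silently evict an older one instead of extending
# functionality.

CONTEXT_LIMIT = 3  # hypothetical cap on how many instructions fit

def add_workflow(context: list[str], instruction: str) -> list[str]:
    """Append a workflow instruction; evict the oldest if over limit."""
    context = context + [instruction]
    return context[-CONTEXT_LIMIT:]  # oldest instructions fall out

context: list[str] = []
for wf in ["1: triage email", "2: draft replies",
           "3: schedule calls", "4: log to CRM"]:
    context = add_workflow(context, wf)

# Workflow 1 has silently vanished: iterating degraded behavior.
print(context)
```

With compiled code, each workflow would be its own function; adding a fourth one cannot evict the first. That difference is exactly why the invisible, conversational architecture felt unauditable to me.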
What Do Companies Actually Need?
The phrase “agent-building experience” is opaque.
I can now say “I have agent-building experience.” But that tells you nothing about whether I understand architectural risks, can prevent cross-contamination, or know when a platform is fundamentally unsuitable for healthcare.
This hiring challenge isn’t unique to the company that interviewed me. In this article they discuss why it is difficult to hire people with the right AI talent. It boils down to several things: demand for AI skills exceeds supply, employers are looking for unicorn candidates (one person with ALL of the skills), and employers are not familiar enough with AI to evaluate candidates for the right talent. Their proposed solutions are to prioritize skills-based hiring, identify what is most needed for the vacant position, and strengthen internal training resources.
In health tech, these skill distinctions are critical. Do you need someone who has built an agent, or do you need someone who understands:
The full spectrum of what “agent” can mean
How different architectures create different risk profiles
Which failure modes matter for your specific use case
When you need deeper architectural knowledge vs. building experience
How to evaluate and hire the right technical talent

If you are looking for someone with agent-building experience, consider identifying the skills you are actually seeking instead. If you can’t articulate what kind of agent, built how, with what architecture, tested for which failure modes, you might be screening for the wrong thing entirely.
The person who ensures regulatory compliance isn’t the person who’s built the most agents; it’s the person who knows the right questions to ask about how those agents work.
This is where clinical and patient safety expertise combined with AI literacy becomes invaluable. Someone with that combination will know what to evaluate, how to evaluate it, what risks matter, and when “agent-building experience” is hiding a fundamental mismatch. Let a developer build the agent. That is a standalone skill.
Practical Guidance Based On What I’ve Learned
If a company needs an agent to integrate into their high-risk business, I’m going to extrapolate from my experience and say that no-code platforms are probably not it. I reserve the right to revise this later, but for now, it seems really risky. These use cases likely require actual AI engineers and developers.
If a small- to medium-sized, low-risk business wants an AI agent to help handle incoming business inquiries, then no-code agents are likely just fine, but keep in mind that they require human-in-the-loop supervision. Given the failures I witnessed in just my brief engagement with this type of agent, I would not trust it to autonomously answer emails. This raises the question of whether these tools are really eliminating work or just shifting it to oversight and cleanup.
