Your Product Was Built for Humans. AI Agents Can't Use It.

Most software is designed around the assumption that somewhere, on the other end of a screen, there is a human being who is confused and needs help. This is a reasonable assumption. It was, until very recently, correct.

Enter the onboarding wizard. The tooltip. The Welcome to [Product]! email with five embedded videos and a link to a 47-step setup guide that someone in a marketing team described, in a Confluence doc, as "delightful". Enter the interactive product tour that hijacks your cursor and guides it — slowly, condescendingly — toward a button you could have found yourself in four seconds. Enter the floating chat widget (staffed, at 2am, exclusively by a bot named Max), who asks how your day is going before telling you he can't help with that.

This is the architecture of modern SaaS. It is, in its way, a masterpiece of human-centred design.

The problem is that AI agents are not human.

They do not need a cursor tour. They do not benefit from a welcome email. They have never once read a tooltip, and they never will. What they need is a token, a working CLI, and documentation that a language model can actually parse. That is the entire list. You could print it on a Post-it note and stick it to a developer's monitor, which is where it belongs.

What Agent-Ready Actually Means

This is where most of the conversation about AI and software stalls — at vague gestures toward "API access" and "automation-friendly". So let's be concrete. An agent-ready product satisfies five criteria. Most products today satisfy two, maybe three.

1. Token-based authentication with no browser dependency

The agent needs to authenticate without a human in the loop. That means an API key, a service account token, or OAuth with a machine flow — not a login page, not SSO that redirects to an identity provider, not an email verification step. If your product's only path to authentication requires a browser session and a human clicking "allow", your product is invisible to autonomous agents.
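What that looks like in practice is almost embarrassingly simple. A minimal sketch, assuming a hypothetical api.example.com endpoint and a standard Bearer-token scheme — the token travels in a header, so there is no login page, no redirect, and no human in the loop:

```python
import urllib.request

API_BASE = "https://api.example.com"  # hypothetical endpoint for illustration


def authed_request(path: str, token: str) -> urllib.request.Request:
    """Build a request an agent can send entirely on its own: the credential
    is a header, not a browser session, so no human ever clicks anything."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",  # ask for machine-readable output
        },
    )


req = authed_request("/v1/projects", token="sk-agent-token-placeholder")
print(req.get_header("Authorization"))
```

That is the whole authentication story an agent needs. Everything else — SSO redirects, verification emails, consent screens — is friction it cannot climb over.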

2. A CLI or REST API with genuine feature parity

Not a read-only reporting API bolted on as an afterthought. Full feature parity: the things an agent needs to do in your product — create, update, query, trigger — need to be available programmatically. If the only way to perform an action is through your UI, that action does not exist for an agent.
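One way to make "feature parity" concrete is to audit it: list every action a human can take in the UI, list every operation the API exposes, and diff them. The action names below are illustrative, not from any real product:

```python
# Parity audit: anything a human can do in the UI but an agent cannot do
# through the API is, from the agent's point of view, a feature that
# does not exist. (Both sets are hypothetical examples.)
ui_actions = {
    "create_project",
    "update_project",
    "query_reports",
    "trigger_export",
    "invite_member",
}
api_operations = {"create_project", "query_reports"}

# The gap is the set of actions that are invisible to an agent.
gaps = sorted(ui_actions - api_operations)
print(f"Agent-invisible actions: {gaps}")
```

If that gap set is non-empty, an agent operating your product will eventually hit a wall a human would never notice.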

Spiral is a useful example here. Its CLI exposes the full writing workflow: authenticate with an API key, pass a prompt and a style, receive a structured draft. An AI agent handed those credentials can be productive in minutes — no human required to click through a single setup screen. The agent reads the docs, the agent works. That's it.

3. Machine-readable documentation

This one is underrated. An OpenAPI spec (or equivalent) lets an agent understand your entire API surface without a human translating it. Readable, structured docs — not a Notion graveyard of half-finished pages last edited in 2022, not a PDF someone exported from Confluence — mean the agent can reason about what your product can do and how to use it. Documentation that was written for humans to skim is not the same as documentation a language model can parse.
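This is why an OpenAPI spec matters so much: an agent can enumerate your entire API surface mechanically, with no human translating prose docs. A sketch, using a tiny made-up spec fragment:

```python
import json

# A fragment of a hypothetical OpenAPI 3.0 spec (paths and summaries invented).
spec = json.loads("""
{
  "openapi": "3.0.0",
  "paths": {
    "/v1/drafts": {
      "get": {"summary": "List drafts"},
      "post": {"summary": "Create a draft"}
    },
    "/v1/drafts/{id}": {
      "delete": {"summary": "Delete a draft"}
    }
  }
}
""")

# From the spec alone, an agent can enumerate every operation the product
# supports -- no skimming, no guessing, no Notion graveyard.
surface = [
    (method.upper(), path, op["summary"])
    for path, ops in spec["paths"].items()
    for method, op in ops.items()
]
for method, path, summary in surface:
    print(f"{method:6} {path:18} {summary}")
```

Ten lines of spec does what fifty pages of human-oriented documentation cannot: it tells the agent, unambiguously, what exists.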

4. Structured outputs

Agents need to read your product's responses, not just trigger actions. If your API returns HTML, unstructured text, or bespoke error formats that vary by endpoint, the agent is flying blind. JSON responses with consistent schemas and machine-parseable error codes (not "something went wrong — contact support") are the baseline. Bonus points for typed responses and clear pagination patterns.
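Here is what a consistent error envelope buys you. The schema below is an illustrative assumption, not a standard — the point is that every endpoint returns the same shape, so the agent can branch on a code instead of guessing from prose:

```python
import json
from dataclasses import dataclass


@dataclass
class ApiError:
    """One error shape for every endpoint (hypothetical envelope schema)."""
    code: str
    message: str
    retryable: bool


def parse_error(body: str) -> ApiError:
    # Because the envelope is uniform, this one parser covers the whole API.
    raw = json.loads(body)["error"]
    return ApiError(code=raw["code"], message=raw["message"], retryable=raw["retryable"])


err = parse_error(
    '{"error": {"code": "rate_limited", "message": "Too many requests", "retryable": true}}'
)
if err.retryable:
    # The agent knows, without a human, that backing off is the right move.
    print(f"backing off and retrying ({err.code})")
```

A `retryable` flag and a stable `code` field turn an error from a dead end into a decision the agent can make on its own.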

5. No browser-only gates

CAPTCHAs. Email verification flows. Click here to confirm you're not a robot. These are not minor friction points for an agent — they are full stops. Any flow that requires a human to click approve at any stage breaks the autonomy loop. The average enterprise SaaS platform requires a human somewhere: permissions approvals, integration wizards, SSO configurations that require IT to open a ticket that surfaces seventeen days later in a Jira board nobody checks.

An AI agent dropped into that environment does not become productive. It becomes confused, then stuck, then — depending on how it's architected — it starts hallucinating API endpoints that don't exist, which is the software equivalent of a new employee making up answers in their first week rather than admitting they have no idea what's happening.

A Quick Scoring Rubric

Run your current stack (or a product you're evaluating) against these five criteria. One point each.

Score (out of 5)    Classification
5                   Agent-native — deployable autonomously, minimal human oversight
3–4                 Agent-capable — workable with some scripting or human handoff points
1–2                 Agent-hostile — significant rework required before useful to an agent
0                   Invisible — the agent cannot interact with this product at all
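The rubric is simple enough to run as code. A sketch — the criterion names are mine, paraphrasing the five above:

```python
# One point per criterion (names are paraphrases of the five criteria).
CRITERIA = [
    "token_auth_no_browser",
    "cli_or_api_feature_parity",
    "machine_readable_docs",
    "structured_outputs",
    "no_browser_only_gates",
]


def classify(met: set[str]) -> str:
    """Map a set of satisfied criteria to the rubric's classification."""
    score = sum(1 for c in CRITERIA if c in met)
    if score == 5:
        return "agent-native"
    if score >= 3:
        return "agent-capable"
    if score >= 1:
        return "agent-hostile"
    return "invisible"


# A typical enterprise tool: auth and structured outputs, nothing else.
print(classify({"token_auth_no_browser", "structured_outputs"}))
```

Run your stack through it honestly and see where the scores land.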

Most tools your business currently uses will score 2–3. That's not a condemnation; it's just the reality of a decade of human-centred product design. The question is what you do with that information.

Who's Getting This Right (and Who Isn't)

The products scoring 5/5 today are mostly developer-native by origin — Stripe, GitHub, Linear, Resend. They were built API-first because their early users were engineers who demanded it. That API-first discipline turns out to be exactly what agent-readiness requires. It is a coincidence that is not going to feel like a coincidence for much longer.

The products scoring 1–2 are mostly the category leaders in enterprise software — the platforms that grew up during an era of UI-first design and bolted on APIs when enterprise customers demanded integrations. Their APIs exist, but they're partial. Their auth is complex. Their docs are written for humans. They work fine when a person is operating them. They stall when an agent tries.

This is not a criticism of those products' quality. It is an observation about what happens when the species of operator changes.

The Moat You Didn't Know You Needed

We see this constantly in our advisory work (I am being candid here at some professional cost: we have recommended tools to clients that turned out to be essentially inaccessible to the agents they were meant to power, which is the consulting equivalent of selling someone a car and forgetting to mention it has no steering wheel).

The businesses now running meaningful AI automation — and there are more of them in Australia than the discourse suggests — are not asking is this product good? They are asking can my agent use this without a human babysitting it? Products that cannot answer yes are being quietly bypassed in favour of ones that can.

Agent-native product design is the next competitive moat. Not the warmth of your onboarding sequence. Not your NPS score. Not the friendly illustration of a person at a laptop that someone spent three weeks getting approved in Figma. Your ability to be picked up and deployed by an AI agent in minutes, with no human required.

The products that understand this are building for two users simultaneously: the human who signs the contract, and the agent who actually does the work.

The products that don't understand this are still lovingly crafting their welcome email sequence.

Good luck to them.