How I work

Engagements are short, fixed-scope, and built to end cleanly. Here's how that runs in practice.

01

What does a 90-day engagement look like?

Three acts. Each act has me doing one kind of work and your team doing another.

By the end, your team is running the agent and I'm gone.

Act 01

Anchor and Ground

Weeks 1 to 3

What I'm doing

Scoping the workflow in writing, establishing the baseline (what the workflow costs you today), and building the eval harness against thirty to fifty real historical examples. By the end of week three, the harness runs and produces a scorecard.

What your team is doing

Surfacing the historical examples, validating the workflow spec, and learning to run the harness. The eval harness ships in week three and stays with you. Your team can score the agent before there is an agent.

Act 02

Engineer

Weeks 4 to 9

What I'm doing

Building the agent, iterating against the eval, keeping the architecture as small as the eval will tolerate. Traces are visible for every run. Weekly progress is real, not performative.

What your team is doing

Reviewing weekly progress, flagging edge cases as they surface, and beginning to read traces. The engineering counterpart on your team is in the loop on architectural decisions, not handed a finished system at the end.

Act 03

Instrument and Transfer

Weeks 10 to 13

What I'm doing

Shipping the dashboard, tuning alert rules against real production samples, writing the runbook, and walking your team through the operating protocols.

What your team is doing

Taking ownership. By week thirteen, your engineering counterpart can read traces, extend the eval set with new examples, and respond to alerts without me. If they can't, the engagement isn't done.

02

What do you need from my team?

Three things — a workflow owner, an engineering counterpart, and access to historical data. If any are missing, the engagement isn't ready.

01

A workflow owner.

The person who owns the business outcome. Available for the workflow scoping in week one and check-ins through the rest of the engagement. Roughly one hour per week.

02

An engineering counterpart.

The person who'll own the agent operationally after handoff. They don't need to be an AI engineer. A competent engineer who can read code and read traces is enough. Their time commitment grows over the engagement: two hours a week early, four to six hours a week in the final phase as transfer happens. By week thirteen, they're running the agent.

03

Access to the historical data.

Thirty to fifty real examples of the workflow with correct outputs, drawn from your records. This is the single biggest unblocker. If pulling that data requires a multi-week IT process, the engagement starts late. Worth confirming this is straightforward before week one.

03

How is this priced?

No rate card. The engagement is priced as a fraction of what the agent saves you, typically $15k–$50k for 3×–10× year-one ROI.

The math goes in this order. First, we figure out what the workflow costs you today. Labor hours, error rates, throughput ceiling, whatever applies. This is part of the discovery call. If you don't have a rough number, the engagement isn't ready yet.

Second, we estimate what the agent reduces it to. Some of that estimate comes from past engagements; some of it is calibrated against the eval harness in week three. By week three you'll know whether the projected reduction is real.

Third, the engagement is priced so you capture three to ten times your investment in year one, and continue capturing the full savings every year after. For most mid-market workflows that lands the engagement somewhere between $15,000 and $50,000.

Below roughly $100,000 of annual workflow cost, the math doesn't work for either of us. The engineering effort is roughly the same for a small problem as for a medium one, and at small scale you can't capture enough savings to justify the spend. If you're not sure whether your workflow clears that bar, we'll figure it out on the discovery call.

You're not paying for my time. You're paying a fraction of the ongoing savings your team captures forever. If the math doesn't pencil out to at least 3x ROI in year one, I won't take the engagement.

04

What happens if the engagement runs over?

If the overrun is on my side, I absorb the cost. If the scope itself changed, we re-quote together. Fixed scope only works when both sides hold to it — that protects you from a price that drifts upward, and it lets me plan the work honestly. Both pieces matter equally; that's what makes the commitment real.

Scenario A

If the overrun is my error,
I eat the cost.

The fixed scope and timeline are the commitment. If I miscalibrated the engineering effort, that's on me, not on you. The engagement still ships within the agreed budget. This is the protection that makes "fixed scope" actually mean something.

Scenario B

If the overrun is because scope changed,
we re-quote.

A new workflow surfaced. The data turned out to be different from what we scoped. The success criteria moved. These are scope changes, not engineering errors, and they require a new conversation. The original engagement either pauses while we re-scope or completes against the original scope and a new engagement starts.

05

Is there ongoing support after the engagement ends?

No. The engagement is structured so your team can operate the agent without me. That's the point.

If the transfer phase ends and your team can't run the agent independently, the engagement isn't done. That's why transfer is a deliverable, not a courtesy.

If something breaks six months in, you have two options. First, your team diagnoses it using the runbook and the observability dashboard you've been operating since handoff. Most issues are resolvable that way. Second, if the issue is genuinely beyond what the runbook covers (a model deprecation, a major workflow shift, an integration that broke) you can hire me back as a separate engagement. Same model, fixed scope, priced to ROI. But the engagement is structured so that's rare, not routine.

If you want a managed service, this isn't the right shape. If you want to own the system after I leave, it is.

06

What if I'm not sure this is the right fit?

Uncertain is fine. The 45-minute discovery call is built for exactly that — no pitch deck, no commitment.

If the workflow isn't fully identified, or the data picture isn't clear, or the team capacity question is uncertain, those are exactly the things the discovery call is for. A 45-minute call with me is the cheapest way for both of us to figure out whether this is the right shape for you. No pitch deck, no follow-up sequence, no commitment. Either we both think it's a fit, or one of us doesn't, and we move on.

Book a discovery call

End cleanly. Or don't start.

If owning the agent after handoff is the goal, the discovery call confirms the workflow fits the 90-day shape — and the math fits the ROI floor.

Forty-five minutes, and you'll know whether to move forward.

Free · No pitch deck · Go or no-go on the call
MavenSolutions

One workflow. One agent. 90 days. Then your team owns it.

© 2026 MavenEcommerce Inc. dba MavenSolutions

Andrew Korolov · principal AI engineer