Available for projects · Remote, US-based

Full-cycle product delivery.
AI where it earns its keep.

I build SaaS, micro-SaaS, and AI integrations end-to-end — from discovery to production and beyond. One owner across the whole lifecycle, and a straight answer on whether your problem actually needs AI.

What I do

Services

  • Full-Cycle Delivery

    Discovery, architecture, build, deploy, and operate. One owner, no handoffs between vendors.

  • AI Integration, Honestly

    LLMs, agents, and RAG where they move a real metric. When a classic approach is simpler, cheaper, and more reliable, I’ll say so and build it that way.

  • SaaS & Micro-SaaS

    New products from zero: an MVP in weeks, with auth, billing, multi-tenancy, and analytics built in. Production-ready from day one.

  • Automation & Internal Tools

    Agents and pipelines that remove manual work: document processing, support triage, ETL, ops workflows.

How I think

Case Notes

Representative problems and how I thought them through.

When the algorithm beat the AI

The problem
A high-volume classification pipeline was routing thousands of inbound records per minute. An LLM felt like the default answer; the team had already drafted a prompt and a budget.
What was considered
An LLM classifier, a small fine-tuned transformer, and a deterministic rules-plus-lookup approach fed by the actual input distribution from a week of production traffic.
The decision
Rules plus lookup. The real input space was narrower than it looked, and an LLM at that volume would have added latency, cost, and a new failure mode to operate without moving the accuracy needle.
Outcome
Millisecond-range latency, near-zero variable cost, and a classifier the on-call engineer could reason about at 3 a.m. Reach for AI when the input space is genuinely unbounded, not by default.
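A rules-plus-lookup classifier like the one described above can be sketched in a few lines. This is a minimal illustration, not the production system: the categories, phrasings, and keywords here are hypothetical stand-ins, and a real version would be generated from the observed input distribution.

```python
# Sketch of a rules-plus-lookup classifier (all categories hypothetical).
# An exact-match table covers the bulk of known phrasings; a short ordered
# keyword list catches the rest; anything unmatched goes to a review queue.

EXACT = {
    "reset my password": "account",
    "invoice overdue": "billing",
}

RULES = [  # (keyword, label), checked in order
    ("refund", "billing"),
    ("login", "account"),
    ("crash", "bug"),
]

def classify(text: str) -> str:
    key = text.strip().lower()
    if key in EXACT:                 # O(1) lookup for known phrasings
        return EXACT[key]
    for keyword, label in RULES:     # deterministic, auditable rules
        if keyword in key:
            return label
    return "needs_review"            # explicit fallback, never a guess
```

The point of the shape is the last line: unknown inputs route to a queue instead of a confident wrong answer, and every other path is something an on-call engineer can trace in seconds.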

When AI was the only path

The problem
Extracting structured fields from messy, high-variance documents and free-text tickets. A rule-based parser had accreted for years and was now costing more to maintain than it was worth.
What was considered
Continuing the rules path, a classical NLP pipeline with hand-crafted features, and an LLM with a tight schema, validators, and evals.
The decision
LLM with engineered guardrails: typed output schema, deterministic post-validation, a small eval set the team could extend, and a clean fallback when confidence was low.
Outcome
The maintenance treadmill stopped. The interesting work shifted from patching regexes to improving evals — a much better place to spend an engineer’s day.
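The guardrail pattern above can be sketched as a typed schema plus deterministic validation around the model call. Everything here is illustrative: the field names are hypothetical, and `call_llm` is a stand-in for whatever provider client you use, not a real API.

```python
# Sketch of LLM extraction with engineered guardrails (names hypothetical).
# The model is asked for schema-shaped JSON; a deterministic validator
# decides whether to accept it, and failures fall back to a human queue.

import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class Extraction:
    invoice_id: str
    amount_cents: int
    currency: str

ALLOWED_CURRENCIES = {"USD", "EUR", "GBP"}

def validate(raw: str) -> Optional[Extraction]:
    """Deterministic post-validation: parse, type-check, range-check."""
    try:
        data = json.loads(raw)
        out = Extraction(
            invoice_id=str(data["invoice_id"]),
            amount_cents=int(data["amount_cents"]),
            currency=str(data["currency"]).upper(),
        )
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return None
    if out.amount_cents < 0 or out.currency not in ALLOWED_CURRENCIES:
        return None
    return out

def extract(document: str, call_llm) -> Optional[Extraction]:
    """Returns a validated Extraction, or None to route to review."""
    raw = call_llm(document)   # call_llm: any provider, any prompt
    return validate(raw)       # the model never writes to the DB directly
```

The division of labor is the point: the model handles the high-variance parsing, and everything the business depends on — types, ranges, the fallback — stays deterministic and testable.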

Building a SaaS from zero

The problem
A founder with a clear wedge, no product yet, and a deadline tied to a design partner. The shape of the product was still moving; the shape of the foundation couldn’t.
What was considered
A fast no-code MVP, a heavy enterprise stack, and a narrow production-grade foundation limited to the decisions that are expensive to change later.
The decision
Get the permanent decisions right on day one — auth model, tenancy boundary, billing primitive, observability — and defer everything else behind clean seams. Ship the product in weeks on top of that.
Outcome
The design partner went live on schedule. A year later the product had pivoted twice and the foundation hadn’t needed a rewrite. What gets skipped early is usually what you pay for later.
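One of those "expensive to change later" decisions — the tenancy boundary — can be made concrete with a small sketch. The names here are hypothetical, and the in-memory store stands in for a real database client; the idea is the shape, not the stack.

```python
# Sketch of a tenancy boundary enforced at the data-access layer
# (hypothetical names; dict stands in for a real database client).
# The tenant filter lives in one place, so cross-tenant reads are
# impossible to write by accident at a call site.

from dataclasses import dataclass

@dataclass(frozen=True)
class TenantContext:
    tenant_id: str  # resolved once, at the auth layer

class ProjectRepo:
    def __init__(self, db: dict):
        self._db = db

    def list_projects(self, ctx: TenantContext) -> list:
        # Every query requires a TenantContext; there is no unscoped path.
        return [p for p in self._db["projects"]
                if p["tenant_id"] == ctx.tenant_id]
```

Product features come and go above a seam like this; the boundary itself is the part that has to be right on day one.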

Shipping a micro-SaaS solo

The problem
One operator. Narrow audience. Must be cheap to run, boring to operate, and survive weeks of inattention without waking anyone up.
What was considered
A fashionable distributed stack, a managed platform with a long bill, and a deliberately unfashionable monolith on a single region with a managed database.
The decision
The boring monolith. Fewer moving parts means fewer 2 a.m. surprises, and the dollars saved on infrastructure funded the dollars spent on reliability.
Outcome
The architecture stayed out of the way. For micro-SaaS, the right architecture is the one that lets you ignore it for weeks.

How I work

Approach

  • Ship first, iterate with signal.

    Useful software in production beats perfect software in a branch. Real users generate better requirements than meeting rooms.

  • AI is a tool, not a strategy.

    I reach for it when the input space is genuinely open-ended. When a deterministic approach is simpler, cheaper, and more reliable, I’ll tell you and build it that way.

  • Own the full stack of delivery.

    From design decisions to production incidents. The person writing the code is the person carrying the pager.

  • Async-first. Written. Transparent.

    Short written updates, visible progress, decisions documented where you can find them six months later.

Get in touch

Tell me what you’re building.

A sharp email beats a form. Send what you have — problem, constraints, timeline — and I’ll reply with specifics.