Selected work

Cereba · Principal AI/UX Designer · 2024–2025

Same product, two architectures. Built it on Bubble. Rebuilt it Claude-native in under a month.

A self-building agent for service businesses. Qualification, booking, cancellation recovery, reactivation. V1 shipped on Bubble. V2 rebuilt Claude-native when the tools stopped being the limiter, with the GHL onboarding fixed end-to-end.

Role: Principal AI/UX Designer · only designer-developer
Timeline: Oct 2024 – Jun 2025
Product: Self-building agent · revenue ops for service businesses
Platforms: Responsive web · desktop + mobile
Cereba V2 signup page. Headline 'Get Started Free' over a clean form. The C.O.R.E. Intent system surfaces what it already knows about the user's business context and asks only for the smallest input needed to start. Trust signals: 'Free to try', 'Connect GHL in one step', 'Live in 20 minutes'.
V2 signup. C.O.R.E. Intent surfaces what the system already knows; users provide the smallest input needed to start.
The short version
Problem: V1 on Bubble was stable, but the platform's limits had caught up with our AI ambition. Multi-LLM support was structurally difficult, scalability was capped, and onboarding metrics flagged users getting stuck at the GHL connect step.
Decision: Joint call to rebuild Claude-native rather than retrofit. My case: UX, stability, scalability, and the speed-of-improvement we'd unlock by leaving the platform that was capping it.
Result: Full rewrite shipped in under a month. New architecture, migrated data, rebuilt front-end, GHL connection unblocked, paywall moved past the product preview, marketing site refreshed. Live, running, customer-facing.
Honest line: V1 ran in production long enough to teach us what to fix. V2 is judgment (when to rewrite) and velocity (under a month), not a from-zero invention.

V1: The Bubble build

Designer-developer of one

I was the only designer at Cereba and also built the front-end. Designer-developer scope from day one, on a team that paired internal product and design (me + stakeholders) with external engineering, QA, and product partners.

What I owned

  • End-to-end product and UX design: onboarding, campaign creation, analytics, dashboard
  • Front-end build in Bubble.io. Every screen, every flow, every state
  • Stripe integration, database wiring, and the connections between Bubble and the third-party services the product depended on
  • PM-adjacent work: stakeholder discovery, prioritization, sprint cadence, and async reviews with the external engineering team
  • The iteration loop after launch: onboarding metrics, user interviews, and the streamlining work that came out of both
Cereba V1 dashboard built in Bubble. Shows conversation count (1,715), conversions (342), and a 19.9% conversion rate. AI test chat panel on the left, auto-reconnect report table on the right. Simple card layout with a free-tier usage meter.
V1 dashboard. Real users, real conversion data, real revenue ops running on Bubble.

V1 launched. Real users, real conversion data, real customers running real revenue ops on top of it. AI was already in the product, but the architecture wasn't built around modern LLMs. It was built around what Bubble could integrate with at the time.

The call to rebuild

When the platform is the limit, not the design

Claude's capability crossed a threshold that V1's architecture couldn't take advantage of. Multi-LLM support was structurally difficult on Bubble. Scalability was capped. The platform itself had become the constraint on how fast the product could improve.

Retrofitting AI onto V1 would have inherited V1's limitations. Rebuilding meant a clean architecture aimed at where the field was going.

The call was joint. The founding team and me. My case had four legs: UX (V1's onboarding metrics had been telling us what needed to change), stability (Bubble was strained at our scale), scalability (we needed headroom we weren't going to get on the existing stack), and speed of improvement (the cost of the rewrite was less than the cost of carrying V1's limits forward another year).

This wasn't "AI is hot, we need it." It was: the tools available now can do what V1 couldn't, and the architecture rewrite buys us forward velocity we cannot buy any other way.

V2: The Claude-native rebuild

Under a month, end-to-end

Full rewrite. New architecture, new repo, all databases reconnected, all production data migrated. Front-end rebuilt from scratch. Visuals refreshed. Capability expanded. Claude could carry more weight than V1's stack, and the design stretched to use it. Marketing site rebuilt alongside.

Cereba V2 dashboard. Richer metrics with percentage deltas (conversations +20%, conversions +66.67%, conversion rate +38.89%). Campaign template launcher with five campaign types: Schedule an Appointment, Reactivate Old Leads, Get a Review, Get a Referral, Rescue Cancellation. Publish queue and What's New changelog panel.
V2 dashboard. Campaign templates, richer metrics with deltas, publish queue, and changelog. Built in under a month.
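
A note on those deltas: the conversion-rate change isn't an independent number; it falls out of the other two. A minimal consistency check in TypeScript, using the values from the caption above:

    // If conversation volume grows 20% and conversions grow 66.67%,
    // the conversion-rate delta is determined by the ratio of the two.
    const conversationDelta = 0.2;   // +20%
    const conversionDelta = 2 / 3;   // +66.67%
    const rateDelta = (1 + conversionDelta) / (1 + conversationDelta) - 1;
    console.log((rateDelta * 100).toFixed(2) + "%"); // "38.89%"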

Time, wall-clock: under one month.

How a one-designer rebuild ships in a month

This is where I developed the AI-as-thought-partner methodology that now runs across every project I work on, including Ascend. A Claude project and a GPT project as design partners: inspiration, research synthesis, technical feasibility checks, and pattern audits. Tied directly to traditional UX practice: real user feedback from V1 drove every decision; the AI just helped me reach those decisions faster.

The AI didn't replace judgment; it accelerated the work around the judgment. I made the calls. Claude helped me get to the calls in hours instead of days.

What emerged here: C.O.R.E. Intent

V1's onboarding asked users to define their business from a blank prompt. V2's surfaces what the system already knows (about the user, the account, the integration state) and asks for the smallest input needed to act. AI as product behavior, not chat wrapper.

The blank-prompt problem is the same one most B2B AI products still ship with: here's a text box, tell us what you want. C.O.R.E. Intent inverts it: here's what we recognize, confirm or correct it, then we move.
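
A minimal sketch of that inversion, with hypothetical types and field names (the production internals aren't shown here). Recognized context becomes a confirmation; only genuinely missing fields become questions:

    type Recognized<T> = { kind: "recognized"; value: T };
    type Missing = { kind: "missing" };
    type Field<T> = Recognized<T> | Missing;

    // Hypothetical context shape: what the system may already know
    // about the user, the account, and the integration state.
    interface BusinessContext {
      companyName: Field<string>;
      website: Field<string>;
      ghlConnected: Field<boolean>;
    }

    // Split the context into "confirm or correct this" and "the smallest ask".
    function planIntake(ctx: BusinessContext) {
      const confirm: string[] = [];
      const ask: string[] = [];
      for (const [name, field] of Object.entries(ctx) as [string, Field<unknown>][]) {
        (field.kind === "recognized" ? confirm : ask).push(name);
      }
      return { confirm, ask };
    }

    // Example: the system already recognized the company from the account;
    // the only remaining ask is the GHL connection.
    planIntake({
      companyName: { kind: "recognized", value: "Acme Plumbing" },
      website: { kind: "recognized", value: "acmeplumbing.com" },
      ghlConnected: { kind: "missing" },
    }); // → { confirm: ["companyName", "website"], ask: ["ghlConnected"] }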

What V1 surfaced, V2 fixed

The GHL connect step was the conversion floor

V1's onboarding metrics were specific. Users got stuck at the GoHighLevel (GHL) connect step. That single step was the conversion floor, and onboarding around it was too long, too in-the-weeds, and forced upfront before users had seen what the product could do.

Cereba V1 onboarding flow. An 8-step progress bar at the top, currently on an early step. The page asks users to define their business from scratch: Full Name, Company Name, and a 'Do you have an active HighLevel account?' toggle, plus a website URL field. Long form with a gradient background.
V1 onboarding. Eight steps, the GHL connection buried mid-flow, users defining their business from a blank prompt.

The V2 onboarding rewrite

  • Shorter and handheld. Reduced step count, replaced free-form configuration with guided defaults the user could override
  • Moved out of forced flow. Onboarding became a quick-setup guide inside the dashboard, not a wall users had to clear before reaching the product
  • Product preview before paywall. Users see what they're getting before they're asked to pay for it
  • GHL connection rewritten as a primary unblock target. The integration that used to be the conversion floor became one of the smoothest steps in the flow (the resulting step order is sketched after the screenshots below)
Cereba V2 onboarding step 1. Clean single card: Full Name, Company Name, Company Website URL, and Phone. One CTA: 'Continue to Company Overview'. Subtext: 'We'll research your company and create an overview.'
V2, step 1. Four fields, one action.
Cereba V2 onboarding step 2. The system has researched the company and pre-filled Identity, Location, and Products sections. User confirms or edits. CTA: 'Looks good, let's go!' with subtext 'Your AI is ready to customize.'
V2, step 2. AI researches the company, user confirms. Done.
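
To make the structural changes concrete, here is a minimal sketch of the V2 step order in TypeScript. The step names, fields, and gating flags are invented for illustration; the real flow isn't reproduced here. It encodes the two moves described above: a short required core gates the dashboard, and everything else, including the paywall, sits behind the product preview in the quick-setup guide.

    interface Step {
      id: string;
      blocking: boolean;                  // must clear before reaching the dashboard?
      defaults?: Record<string, string>;  // guided defaults the user can override
    }

    // Hypothetical ordering, inferred from the flow described above.
    const v2Flow: Step[] = [
      { id: "basics", blocking: true },            // four fields, one action
      { id: "company-overview", blocking: true },  // AI-researched, confirm or edit
      { id: "connect-ghl", blocking: false,        // quick-setup guide, not a wall
        defaults: { mode: "one-step-oauth" } },
      { id: "product-preview", blocking: false },
      { id: "paywall", blocking: false },          // moved past the preview
    ];

    // First unfinished blocking step; everything else lives inside the
    // dashboard's quick-setup guide instead of gating entry.
    function nextBlockingStep(done: Set<string>): Step | undefined {
      return v2Flow.find((s) => s.blocking && !done.has(s.id));
    }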

The principle: V1 ran long enough in production to teach us what the product needed to win. V2 was where we built the win. That's the right ratio. V1 to surface the question, V2 to ship the answer.

Reflection

What worked, what I'd do differently, when this approach is wrong

What worked

  • The judgment call. Rebuilding rather than retrofitting was right. It bought a year of forward velocity instead of a year of patching limits we already understood.
  • The methodology developed during the rebuild. AI as thought partner, paired with traditional UX practice. The same approach I now use everywhere. It started here.
  • One designer through both eras. The same person who saw V1's onboarding metrics led V2's rewrite. No translation loss between what the data said and what shipped.

What I'd do differently

  • Surfaced the rebuild case earlier. By the time we made the call, V1 had been carrying more weight than it should have. The signal was visible a few months before we acted on it.
  • Instrumentation from day one of V2. V1 taught us what to look for. V2 should have started with the dashboards V1 ended with, not built them in afterward.

When this approach is wrong

Rebuild when the platform is the limit, not when the design is. If V1's problems had been UX problems, retrofitting would have been the right call. Rewrites are expensive and most of them don't need to happen. The trigger here was that the platform couldn't support what we wanted to build next, not that the screens were wrong.