Migrating Off the Golden Cage: Lessons from Vibe Coding an AI Journaling App

#migration #edge-functions #ui-ux #context-management #debugging

Migrating off a no-code platform mid-build, designing a summarization feature as a side-channel architecture, and the debugging tax that comes with every AI-generated line of code.

Today the project hit its stride. I completed a full platform migration away from Lovable, shipped the app's first end-to-end AI summarization feature, and learned a handful of lessons about the real cost of "free" tooling and the craft of steering AI-assisted development.

The Strategic Pivot: Why I Left Lovable

Lovable's free tier became the bottleneck. The platform is impressive for prototyping, but when iteration speed matters, a restrictive usage cap is a form of architectural debt. I migrated the entire stack to Cursor, GitHub, and Vercel — trading a managed environment for full control over my deployment pipeline and development velocity.

The hidden cost of migration was OAuth. Reconfiguring authentication to work with a new Vercel production domain was straightforward in theory, but it exposed a deeper problem: because OAuth only functioned on the production domain, I lost the ability to do end-to-end local testing. That meant pushing untested code to production to verify auth flows — a risk I accepted consciously rather than sinking time into a local workaround for a solo project.

Mental model: Free-tier platforms are prototyping accelerants, not production foundations. Budget your migration early, before your velocity depends on something you don't control.

Designing the Summarization Feature Around Constraints

The summarization feature needed to call Gemini, but I didn't want the overhead of deploying and maintaining a second Edge Function. Instead, I added a mode: summarize flag to the existing function, letting it share the authentication pipeline and keeping the architecture flat. One function, two modes, zero redundant infra.
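The one-function, two-mode design can be sketched roughly like this. This is a hypothetical illustration, not the app's actual code: the names (JournalRequest, buildPrompt) and the prompt wording are mine, and the real Edge Function would run its shared auth check before dispatching.

```typescript
// Hypothetical sketch of the one-function, two-mode design: a single
// Edge Function body branches on a `mode` flag so summarization reuses
// the chat endpoint's auth pipeline. All names are illustrative.
type Mode = "chat" | "summarize";

export interface JournalRequest {
  mode?: Mode;                  // absent → default "chat" behavior
  messages: { text: string }[]; // shape the frontend sends
}

// Pure dispatch logic, kept separate from the HTTP/auth layer so the
// branch is easy to test. The real handler would authenticate first,
// then call this with the parsed request body.
export function buildPrompt(req: JournalRequest): string {
  const joined = req.messages.map((m) => m.text).join("\n");
  return req.mode === "summarize"
    ? `Summarize the following journal entries:\n\n${joined}`
    : joined; // chat mode passes the conversation through unchanged
}
```

Because both modes share one entry point, there is exactly one place where auth, rate limiting, and logging live.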

The more interesting design decision was around safety thresholds. The chat pipeline already runs a strict safety stack, so by the time journal content reaches the summarizer, it has already been validated. I relaxed the summarization prompt's safety settings to BLOCK_ONLY_HIGH specifically to prevent false positive blocks on difficult emotional topics — the exact kind of content a journaling app needs to handle gracefully. This is a layered safety model: strict at ingestion, relaxed at analysis.
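Concretely, the relaxed stage looks something like the snippet below. The category and threshold strings follow the public Gemini REST API; the surrounding request shape is a minimal sketch, not the app's exact payload.

```typescript
// Layered safety model, analysis stage: every harm category is set to
// BLOCK_ONLY_HIGH because journal content was already screened by the
// stricter chat pipeline at ingestion. Strings match the Gemini REST
// API; the request body here is illustrative.
export const SUMMARIZER_SAFETY = [
  "HARM_CATEGORY_HARASSMENT",
  "HARM_CATEGORY_HATE_SPEECH",
  "HARM_CATEGORY_SEXUALLY_EXPLICIT",
  "HARM_CATEGORY_DANGEROUS_CONTENT",
].map((category) => ({ category, threshold: "BLOCK_ONLY_HIGH" }));

// Attached alongside the prompt in a generateContent request:
export const requestBody = {
  contents: [{ parts: [{ text: "…journal text…" }] }],
  safetySettings: SUMMARIZER_SAFETY,
};
```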

Mental model: Safety isn't a single dial. It's a pipeline. Match the strictness to each stage's role, or you'll block the very use cases your product exists to serve.

The One-Tap Mood Flow: Constraining AI Output by Design

I removed manual user tagging entirely. Asking someone to categorize their thoughts at bedtime is friction that kills retention. Tags will come later via AI-powered search — the system should do the work, not the user.

For mood confirmation, I designed a one-tap flow that restricts the AI to a fixed set of eight emojis. This is a deliberate constraint. An unconstrained model will suggest obscure or contextually inappropriate emoji, creating a moment of confusion right when you want effortless closure. A fixed set keeps the UI clean and the interaction predictable.
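The constraint is easy to enforce in code: snap whatever the model suggests onto the fixed palette, with a neutral fallback. A minimal sketch, assuming a hypothetical palette and fallback choice (the app's real eight emojis may differ):

```typescript
// Hypothetical sketch: constrain the model's mood suggestion to a
// fixed eight-emoji palette so the confirmation UI stays predictable.
export const MOOD_PALETTE = ["😊", "😌", "😐", "😕", "😢", "😠", "😰", "🥱"] as const;
export type Mood = (typeof MOOD_PALETTE)[number];

// Anything outside the palette falls back to neutral instead of
// leaking an unexpected emoji into the one-tap flow.
export function constrainMood(suggestion: string): Mood {
  const trimmed = suggestion.trim();
  return (MOOD_PALETTE as readonly string[]).includes(trimmed)
    ? (trimmed as Mood)
    : "😐";
}
```

The model can still reason freely; only its final answer is clamped to the space the UI can render.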

Mental model: The best AI features feel like a shortcut, not a quiz. Constrain the output space so the user's only job is to confirm, not evaluate.

Prompt Engineering as a Development Discipline

A few practices made vibe coding significantly more reliable this week. The one that surprised me most: Cursor's direct file editing was consistently faster than Agent mode for executing known architectural changes. Agent mode is powerful for exploration, but when you already know what needs to happen, the overhead of autonomous reasoning just slows you down.

The Debugging Tax: Where AI-Assisted Development Still Breaks Down

Three bugs consumed disproportionate debugging time this week, and each one is a pattern worth naming:

The field name mismatch. The AI-generated Edge Function expected msg.content, but the frontend sent msg.text. The result was that the AI received empty data and returned generic fallback summaries — a failure mode that looked like a bad prompt, not a data pipeline bug. This is the classic "silent wrong" problem in AI systems: the model doesn't crash, it just degrades gracefully enough to hide the real issue.
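The cheap defense is to normalize the message shape at the function boundary and fail loudly on empty input, so the pipeline bug surfaces as an error instead of a plausible generic summary. A sketch, with illustrative field names matching the bug described above:

```typescript
// Defensive sketch against the "silent wrong" failure: accept either
// field name at the boundary, and throw on empty input rather than
// letting the model degrade gracefully over empty data.
interface WireMessage {
  content?: string; // what the Edge Function expected
  text?: string;    // what the frontend actually sent
}

export function extractText(msg: WireMessage): string {
  const value = msg.content ?? msg.text ?? "";
  if (value.trim() === "") {
    // Surface the data-pipeline bug immediately instead of shipping
    // empty context to the model and debugging the prompt.
    throw new Error("message has no usable text field");
  }
  return value;
}
```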

The self-closed div. A <div> was incorrectly self-closed in JSX, which completely broke the HistoryScreen component. This is a one-character bug that an AI introduced and a human had to find. Syntax-level errors in generated code remain a tax on AI-assisted development.

The shell mismatch. AI-generated documentation and commands default to Bash syntax. The && operator had to be manually corrected to ; for PowerShell on every execution. It's a small friction, but it compounds across a full development session.
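The translation is lossy, which is why it bites: in Bash the two operators mean different things, and Windows PowerShell 5.x only offers the weaker one (PowerShell 7+ did add && as a pipeline chain operator). A minimal Bash demonstration of the difference:

```shell
# In Bash, `&&` short-circuits: the right side runs only if the left succeeds.
false && echo "after &&"   # prints nothing
# `;` is an unconditional separator: the right side always runs.
false ; echo "after ;"     # prints "after ;"
# So rewriting `a && b` as `a; b` for PowerShell 5.x silently drops
# the on-success guarantee — fine for a dev loop, risky in a deploy script.
```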

Mental model: AI-assisted development doesn't eliminate debugging — it shifts the bug profile. Expect fewer logic errors and more integration mismatches, naming inconsistencies, and environment assumptions.

What's Next

The foundation is now solid: a clean deployment pipeline, a working AI summarization feature, and a UI philosophy built around minimal friction. Next, I'm building the AI-powered search layer that will replace manual tagging entirely, closing the loop on the product's core thesis — that a journaling app should ask nothing of you except honesty.