How the Clarity.Do Web App Was Built in 5 Hours
90% of the Clarity.Do web client was generated in a single AI conversation — 5 hours, one session, no context lost. Here's how Claude Opus, Svelte 5, and an existing native codebase made it possible.
Five hours. That’s how long it took to go from zero to a fully functional web client for Clarity.Do — real-time sync, offline support, every major feature from the native apps, running in any browser. Not a prototype. Not a demo. A production web app.
The remaining 10%? Interface polish, animation tweaks, responsive edge cases — the kind of work that makes something feel right, not just function correctly. But the core? The routes, the components, the services, the state management, the WebSocket sync — all of it, generated and working in a single sitting.
This is the story of how that happened.
The Starting Point: Native Apps Already Existed
Before a single line of web code was written, Clarity.Do already had fully built native apps for iOS and macOS — developed over months into a mature SwiftUI codebase with real-time WebSocket sync, hierarchical task management, an Eisenhower priority matrix, offline persistence, optimistic mutations, and a complete service layer.
That existing codebase became the blueprint. Every feature, every API contract, every WebSocket message type, every edge case that had already been solved in Swift — all of it was context that could be handed to an AI model and translated into a web equivalent.
This is a detail that matters more than it might seem. Building an app from scratch — even with AI — means discovering requirements as you go. Decisions about data models, sync protocols, error handling, and state management all need to be made. But when those decisions already exist in a working codebase, the problem shifts from design to translation. And translation is exactly what large language models are good at.
Why Claude Opus Made This Possible
With Claude Opus 4.5, something changed. The model became remarkably accurate at generating production-quality Svelte 5 code — not the old Svelte 4 patterns with export let and reactive $: statements, but proper Svelte 5 runes: $state, $derived, $effect, $props. The kind of code you’d actually want to ship.
This matters because Svelte 5 was a significant departure from previous versions. Many AI models still generate outdated Svelte patterns. Opus doesn’t. It understands the runes system, the new reactivity model, the way SvelteKit 2 handles routing and layouts. Hand it a Swift service class and a WebSocket protocol, and it produces a TypeScript equivalent that works on the first pass.
The workflow looked like this: describe the feature, point at the Swift implementation for context, and let the model generate the Svelte 5 component, the TypeScript service, and the reactive store. Review, test, move on. Repeat dozens of times across five hours.
One Conversation, No Context Lost
This is the part that would have been impossible even a few months ago. Claude Code recently upgraded to a 1 million token context window — and that changed everything.
The entire 90% of the web app was generated in a single, unbroken conversation. One session. No compacting, no summarizing, no re-explaining the codebase halfway through. When the WebSocket sync service was being built in hour three, the model still had full context of the API service from hour one. When the Priority view’s store needed wiring up, the model already knew the task model, the sync protocol, the store patterns established in every previous component.
Previously, long coding sessions with AI would hit a wall. The context window would fill up, earlier messages would get compressed or dropped, and suddenly the model would forget conventions it had followed perfectly an hour ago. Time would be spent re-explaining architecture, re-establishing patterns, correcting regressions. The generation was fast but the context management was slow.
With a million tokens, that friction disappeared entirely. The conversation grew as each phase built on the last — Phase 1’s networking layer, Phase 2’s auth flow, Phase 3’s view skeletons, Phase 4’s task management — and the model held the full picture from first commit to last. No drift, no contradictions, no forgotten patterns. The result was a codebase that reads like one person wrote it in one sitting, because in a sense, that’s exactly what happened.
What came out the other side wasn’t boilerplate. It was a codebase with:
- 30+ Svelte components, most under 150 lines each
- 12 service modules handling API calls, WebSocket communication, sync orchestration, and offline mutation queuing
- 14 reactive stores using Svelte 5’s $state and $derived for fine-grained updates
- 8 fully functional routes — Plan, Focus, Upcoming, Priority, Search, History, Inbox, and Settings
- IndexedDB caching for instant hydration and offline support
- A service worker for PWA-level shell caching
- A complete WebSocket sync layer with exponential backoff, heartbeat monitoring, and multi-tab coordination
All of it typed, all of it consistent, all of it following the same patterns throughout the codebase.
The Right Framework for the Job
Choosing Svelte 5 and SvelteKit wasn’t just a preference — it was a strategic decision. Svelte’s explicit reactivity model turned out to be ideal for AI-assisted development.
Where React requires you to think in hooks, memoization, and dependency arrays — patterns that even experienced developers get wrong — Svelte 5’s runes are direct. $state declares reactive state. $derived computes from it. $effect runs side effects. There’s no ambiguity about when something re-renders or why. For an AI model generating code, that explicitness means fewer subtle bugs and more correct-on-first-attempt output.
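To make the explicitness concrete, here is a minimal Svelte 5 component sketch using all four runes. This is illustrative only, not Clarity.Do source code:

```svelte
<script lang="ts">
  // $props: typed component inputs (replaces Svelte 4's `export let`)
  let { initial = 0 } = $props();

  // $state: reactive state — assignments trigger updates, nothing else needed
  let count = $state(initial);

  // $derived: computed value that tracks `count` automatically
  let doubled = $derived(count * 2);

  // $effect: side effect that re-runs whenever `count` changes
  $effect(() => {
    console.log(`count is now ${count}`);
  });
</script>

<button onclick={() => count++}>
  Clicked {count} times ({doubled} doubled)
</button>
```

There is no dependency array to forget and no memoization to tune; the compiler works out exactly which DOM nodes depend on `count`.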
SvelteKit’s file-based routing eliminated boilerplate decisions. Need a new page? Create a file. Need a layout wrapper? Create a layout file. Need authentication guards? Add logic to the layout. The conventions are so clear that an AI model can follow them without drifting into custom abstractions.
And Tailwind CSS — already integrated via Vite — meant the AI could generate styled components directly. No separate stylesheets, no CSS module naming debates, no context-switching between logic and presentation.
Astonishingly Fast
There’s a reason the Clarity.Do public site describes the web app as “extremely fast” and emphasizes that “the UI feels instant.” That’s not marketing language — it’s architecture.
Svelte compiles components to minimal imperative JavaScript at build time. There’s no virtual DOM. No diffing algorithm running on every state change. When a value updates, only the specific DOM nodes that depend on it get touched. The result is performance that feels native, even in a browser.
The numbers tell the story. A typical React app ships a runtime of 40–50KB just for the framework. Svelte ships zero runtime — the framework disappears at build time, leaving only the code your app actually needs. Combined with SvelteKit’s automatic code splitting and Vite’s optimized production builds, the Clarity.Do web app loads fast and stays fast.
But performance isn’t just about bundle size. It’s about what happens after the page loads. Clarity.Do hydrates from IndexedDB on startup — your tasks appear instantly from the local cache while the WebSocket connection establishes in the background. The sync layer uses transaction IDs for incremental updates, so after the first load, reconnections only fetch what changed. Combined with Svelte 5’s fine-grained reactivity, this means the interface responds to interactions in single-digit milliseconds.
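The transaction-ID scheme can be sketched as a small cache that applies deltas instead of refetching. All names here are illustrative, not the actual Clarity.Do protocol:

```typescript
// Sketch of transaction-ID-based incremental sync. The client remembers the
// last transaction it saw; on reconnect it asks only for what changed since.
interface Task {
  id: number;
  name: string;
}

interface SyncDelta {
  transactionId: number; // server-side monotonic counter
  changed: Task[];       // tasks created or updated since the last sync
  deletedIds: number[];  // tasks removed since the last sync
}

class TaskCache {
  private tasks = new Map<number, Task>();
  lastTransactionId = 0;

  applyDelta(delta: SyncDelta): void {
    for (const task of delta.changed) this.tasks.set(task.id, task);
    for (const id of delta.deletedIds) this.tasks.delete(id);
    this.lastTransactionId = Math.max(this.lastTransactionId, delta.transactionId);
  }

  get(id: number): Task | undefined {
    return this.tasks.get(id);
  }

  get size(): number {
    return this.tasks.size;
  }
}
```

After the first full load, every reconnect sends `lastTransactionId` and applies only the resulting delta, which is why reconnections stay cheap.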
Plan view with hundreds of hierarchical tasks? Smooth. Priority matrix sorting tasks by importance and urgency? Instant. Switching between views? No loading spinners — the data is already there.
Feature Parity in a Browser
The web app isn’t a stripped-down companion. It mirrors the native experience:
- Plan — full hierarchical task tree with infinite nesting, expand/collapse, breadcrumb navigation
- Focus — dedicated view for tasks marked as focused, organized by parent
- Upcoming — time-bucketed sections from Overdue through Next Month, respecting your first-day-of-week preference
- Priority — the Eisenhower matrix with Urgent & Important, Urgent, and Important quadrants
- Search — real-time search across task names and descriptions
- History — completion timeline with undo capability
- Quick Capture — batch task creation with parent-child syntax
- Settings — theme, timezone, accent colors, mobile tab customization, digest emails, device management
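The Upcoming view’s time bucketing can be sketched as a pure function over due dates. The article only names the Overdue and Next Month endpoints, so the intermediate bucket boundaries below are assumptions:

```typescript
// Sketch of time-bucketing with a first-day-of-week preference.
// Bucket names between "Overdue" and "Next Month" are illustrative.
type Bucket = "Overdue" | "Today" | "This Week" | "Next Week" | "Next Month";

// weekStartsOn: 0 = Sunday, 1 = Monday — the user's preference.
function startOfWeek(date: Date, weekStartsOn: number): Date {
  const d = new Date(date);
  d.setHours(0, 0, 0, 0);
  const diff = (d.getDay() - weekStartsOn + 7) % 7;
  d.setDate(d.getDate() - diff);
  return d;
}

function bucketFor(due: Date, now: Date, weekStartsOn: number): Bucket {
  const startOfToday = new Date(now);
  startOfToday.setHours(0, 0, 0, 0);
  if (due < startOfToday) return "Overdue";

  const startOfTomorrow = new Date(startOfToday);
  startOfTomorrow.setDate(startOfTomorrow.getDate() + 1);
  if (due < startOfTomorrow) return "Today";

  // The current week ends where next week begins — which depends on
  // whether the user's week starts on Sunday or Monday.
  const nextWeek = startOfWeek(now, weekStartsOn);
  nextWeek.setDate(nextWeek.getDate() + 7);
  if (due < nextWeek) return "This Week";

  const weekAfter = new Date(nextWeek);
  weekAfter.setDate(weekAfter.getDate() + 7);
  if (due < weekAfter) return "Next Week";

  return "Next Month";
}
```

The same due date can land in different buckets for two users with different week-start preferences, which is exactly the edge case the view has to respect.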
Real-time sync over WebSocket means changes made on your iPhone appear on the web instantly — and vice versa. The mutation layer handles optimistic updates with temporary IDs, confirmation tracking, and automatic rollback on failure. It’s the same protocol the native apps use, running in JavaScript.
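The optimistic-mutation pattern described here — temporary IDs, confirmation tracking, rollback on failure — can be sketched in a few lines. The class and method names are hypothetical, not the actual Clarity.Do service:

```typescript
// Sketch of optimistic mutation with temporary negative IDs.
interface Task {
  id: number;
  name: string;
}

class MutationService {
  tasks = new Map<number, Task>();
  private nextTempId = -1;
  // Pending confirmations, keyed by the temporary ID sent to the server.
  private pending = new Map<number, Task>();

  // Apply the mutation locally first; the server confirms (or not) later.
  createTask(name: string): number {
    const tempId = this.nextTempId--;
    const task = { id: tempId, name };
    this.tasks.set(tempId, task);
    this.pending.set(tempId, task);
    return tempId;
  }

  // Server confirmed: remap the temporary negative ID to the real one.
  confirm(tempId: number, realId: number): void {
    const task = this.pending.get(tempId);
    if (!task) return;
    this.pending.delete(tempId);
    this.tasks.delete(tempId);
    this.tasks.set(realId, { ...task, id: realId });
  }

  // Server rejected, or the confirmation timed out: roll the update back.
  rollback(tempId: number): void {
    this.pending.delete(tempId);
    this.tasks.delete(tempId);
  }
}
```

Negative IDs guarantee the optimistic task can never collide with a real server-assigned ID while the confirmation is in flight.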
The app even handles multi-tab gracefully. Open Clarity.Do in two browser tabs, and the BroadcastChannel API ensures only one WebSocket connection stays active, preventing data race conditions. The other tab shows a clean “open in another tab” message.
What the Last 10% Looked Like
The remaining work after those five hours was the kind of polish that separates a functional app from one that feels good to use:
- Responsive breakpoints — the inline task editor at 1212px, bottom tabs on mobile, sidebar on desktop
- Animation timing — task completion effects, departure animations, navigation transitions
- Touch interactions — long-press for quick capture on the floating action button, swipe gestures
- Visual consistency — accent color gradients (five presets with solid, burst, and background variants), status indicator styling, due date pill coloring
- Edge cases — scroll position persistence per route, frozen tab detection with auto-reload, timezone-aware date calculations across different week-start preferences
This is the work AI still doesn’t do well on its own. The subjective decisions — does this animation feel right at 200ms or 300ms? Should this gradient be 18% or 25% opacity? Does the tap target feel big enough on a phone? — those require a human using the product and making judgment calls.
But that’s 10% of the work. The other 90% — the architecture, the data flow, the service layer, the component library, the state management — was AI-generated and working.
This Isn’t Vibe Coding
There’s a temptation to read “90% AI-generated in five hours” and conclude that the AI did the thinking. Open a chat, describe an app, watch the code pour out. That reading misses the point entirely. What made this possible wasn’t just the AI: it was the deep software architecture experience behind every prompt, every decision, every review.
Every prompt in that five-hour session carried years of architectural knowledge. “Build the sync service with optimistic mutations, temporary ID assignment, confirmation tracking keyed by message ID, and automatic rollback on timeout” isn’t a prompt that writes itself. It comes from having built sync systems before — from knowing what breaks when confirmations aren’t tracked, what happens when two devices mutate the same task, why temporary IDs need remapping.
The AI didn’t make architectural decisions. It executed them. The difference is everything.
The quality of AI-generated code is bounded by the quality of the instructions driving it. A vague prompt produces vague code. A prompt shaped by years of debugging production sync failures, optimizing database access patterns, and designing offline-first architectures produces code that handles those concerns from the start. The model amplifies whatever level of expertise is directing it.
When the model generated the WebSocket service, it was told to implement exponential backoff with a maximum of five reconnection attempts, heartbeat pings every 15 seconds with a 60-second pong timeout, and multi-tab coordination via BroadcastChannel. Those aren’t details an AI invents from a vague prompt like “add WebSocket support.” They’re specifications that come from experience — from having debugged connection storms in production, from knowing that browsers suspend inactive tabs, from understanding that two WebSocket connections from the same user cause data races.
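Those specifications translate almost directly into code. The constants below come from the article (five attempts, 15-second pings, 60-second pong timeout); the 1-second base delay and 30-second cap are illustrative assumptions:

```typescript
// Reconnection and heartbeat timing per the stated specs.
const MAX_RECONNECT_ATTEMPTS = 5;
const HEARTBEAT_INTERVAL_MS = 15_000; // ping every 15 s
const PONG_TIMEOUT_MS = 60_000;       // declare the connection dead after 60 s

// Exponential backoff: 1 s, 2 s, 4 s, 8 s, 16 s — then give up.
// Base delay and cap are assumptions, not from the article.
function reconnectDelayMs(attempt: number, baseMs = 1_000, capMs = 30_000): number {
  if (attempt >= MAX_RECONNECT_ATTEMPTS) {
    throw new Error("max reconnection attempts reached");
  }
  return Math.min(baseMs * 2 ** attempt, capMs);
}
```

The 60-second pong timeout spans four heartbeat intervals, so a single dropped ping never triggers a reconnect storm.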
The same applied at every layer. The store architecture used Map<number, Task> for O(1) lookups instead of arrays — because filtering by parent ID on every render would be a performance cliff at scale. The mutation service used temporary negative IDs and tracked pending confirmations — because optimistic updates collide with slow network responses in ways that only become obvious after you’ve seen them in production. The IndexedDB layer batched writes in single transactions — because individual writes block the main thread in ways that browser storage documentation doesn’t warn about.
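The store decision is worth making concrete. Filtering an array by parent ID is O(n) on every render; a Map keyed by ID plus a parent index makes both lookups constant-time. This is an illustrative sketch, not the actual Clarity.Do store:

```typescript
// Sketch of a task store with O(1) lookup by ID and indexed child lookup.
interface Task {
  id: number;
  parentId: number | null;
  name: string;
}

class TaskStore {
  private byId = new Map<number, Task>();
  // Secondary index: parent ID -> set of child IDs.
  private childrenByParent = new Map<number | null, Set<number>>();

  upsert(task: Task): void {
    // If the task moved to a new parent, drop it from the old index entry.
    const prev = this.byId.get(task.id);
    if (prev && prev.parentId !== task.parentId) {
      this.childrenByParent.get(prev.parentId)?.delete(task.id);
    }
    this.byId.set(task.id, task);
    let siblings = this.childrenByParent.get(task.parentId);
    if (!siblings) {
      siblings = new Set();
      this.childrenByParent.set(task.parentId, siblings);
    }
    siblings.add(task.id);
  }

  get(id: number): Task | undefined {
    return this.byId.get(id);
  }

  // O(k) in the number of children — no full scan of the task list.
  childrenOf(parentId: number | null): Task[] {
    const ids = this.childrenByParent.get(parentId) ?? new Set<number>();
    return [...ids].map((id) => this.byId.get(id)!);
  }
}
```

With hundreds of hierarchical tasks re-rendering on every sync message, the difference between scanning an array and consulting an index is exactly the “performance cliff at scale” the prompt was guarding against.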
None of these decisions appeared in the AI’s output by accident. They appeared because the prompts asked for them explicitly. And writing those prompts required the kind of knowledge that only comes from years of building, shipping, and maintaining production software.
This is where the “vibe coding” narrative falls apart. Vibe coding — describing what an app should do in plain language and letting the AI figure out the how — works for prototypes, demos, and simple CRUD apps. It doesn’t work for production software that needs to handle offline state, concurrent mutations, real-time sync across devices, and graceful degradation under network failure. That kind of software requires someone who’s solved these problems before and knows exactly what to specify.
Production software needs someone who knows what questions to ask. How should conflicts resolve? What’s the reconnection strategy? Where does the auth token refresh? What happens when the user opens two tabs? An AI model can answer these questions brilliantly — but only when asked. And knowing which questions matter — and which answers are acceptable — is the product of experience that compounds over years. It can’t be shortcut.
The five-hour timeline wasn’t fast because the work was simple. It was fast because deep architectural experience was driving — decisions already made before, refined over months of building the native apps, and encoded into precise prompts. The AI compressed the implementation time. The design time was already paid.
AI didn’t replace expertise. It gave expertise a multiplier.
What This Means
Two years ago, building a production web app from an existing native codebase would have been a multi-week project at minimum. You’d need a developer fluent in both Swift and TypeScript, familiar with both SwiftUI and Svelte, who could manually translate patterns while maintaining consistency.
Today, it required a clear specification (the existing native app), a model that understands both source and target frameworks (Claude Opus), a context window large enough to hold the entire project in memory (1M tokens), and a modern framework that plays well with AI-generated code (Svelte 5).
The developer’s role didn’t disappear. It shifted. Less time writing code, more time directing, reviewing, and making the judgment calls that require human taste. The AI handled the volume — dozens of components, services, and stores generated in rapid succession. The human handled the vision — deciding what to build, what to prioritize, and what “good enough” actually looks like.
That combination — existing codebase as blueprint, an accurate AI model, a million-token context window, and a framework designed for clarity — is what turned a multi-week project into a five-hour session.
The web app is live. It’s fast. It syncs in real time. And 90% of it was written by an AI in a single conversation.
Plan intuitively. See clearly. Stay focused. Know what’s next — now from any browser.