Case Study

What a Result-Based Engagement Actually Feels Like

CMS User Manager — Granular Permissions Rebuild

27 development sessions · ~16 total hours · 189 access checks mapped · 30+ files integrated · 0 regressions
Client: Digital Agency
Project: CMS User Manager
Service: AI-Accelerated Growth Plan
Director: Jae S. Jung

The Old Mental Model

Traditional development engagements are built around time. Estimates, hourly rates, weekly check-ins on hours burned. The client is buying effort and hoping it converts to results.

The Growth Plan is built around a different question: What needs to get built?

This project was a test of that model in practice. Not a proof of concept — a real production system for a real client, rebuilt from the ground up while other work continued in parallel.

The Problem

The agency's proprietary CMS User Manager had outgrown its original design. A modal-based interface with coarse, on/off permission flags couldn't keep up with the platform's complexity. Administrators needed granular control — across five permission categories, seven database tables, AJAX-loaded panels, and two separate editing contexts for permission templates and individual users.

The scope was real. What changed was how it was approached.

"It felt like having a personal developer on call — one who already knew the codebase, remembered every decision, and was ready to execute the moment a direction was set."

A Different Kind of Working Relationship

Not a tool being operated. Not a contractor being managed. A back-and-forth exchange where ideas could be thought out loud, tested, redirected, and built — in the same conversation.

That dynamic changed how the work felt. There were no handoffs. No waiting for estimates. No explaining context that had already been established. The thinking and the building happened together, in real time, in pieces around everything else.

Approximately 16 hours of total development time produced a complete system rebuild.

What the Work Actually Looked Like

The project ran in phases, each one revealing something about how AI-accelerated development behaves under real conditions.

The Foundation

Before writing a line of new code, the existing CMS plugins — blog, alerts, calendar — were analyzed to establish the platform's conventions. File structures, router patterns, data handling, JavaScript organization. The new User Manager would follow these patterns exactly.

AI executes patterns reliably at speed. Giving it a concrete reference implementation to follow — rather than an abstract set of requirements to interpret — produced a working foundation fast.

The Bug Chain

Once the foundation was deployed, the gap between "code that renders" and "code that works" opened up. Form submissions failed to capture permission data. Master toggles affected sections they shouldn't have. AJAX panels lost state when users switched tabs. Template clones started empty instead of inheriting permissions. Each fix revealed the next bug in a cascading chain.

This is where AI-accelerated development requires the most attention. Left to run without architectural oversight, the instinct is to keep patching. Each fix technically correct. Each one adding complexity. Each one making the next bug harder to find.

The Interventions

Twice during the project, the patching stopped and the architecture was questioned instead.

Intervention #1: State Management

Template permissions kept reverting to "all ON" after save. The save handler was scraping the page to collect permission state — but AJAX panels load and unload as users switch sections, so anything not currently visible was silently discarded. The AI's instinct was to find a better way to scrape. The right move was to stop scraping entirely.

A single JavaScript object became the authoritative source of truth for all permission state. Every toggle updated it immediately. Section switches became irrelevant. The entire category of bugs — stale panel state, section-switching data loss, DOM scraping failures — ceased to exist because the code that caused them was gone.
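The pattern can be sketched in a few lines of JavaScript. The names and category keys below are illustrative, not taken from the actual codebase:

```javascript
// Illustrative sketch of the single-source-of-truth pattern: one object
// holds all permission state across the categories (names are hypothetical).
const permissionState = {
  content: {},
  system: {},
  media: {},
  users: {},
  templates: {}
};

// Every toggle writes here immediately, so the DOM is only a view.
// Which AJAX panel happens to be loaded no longer matters.
function onToggle(category, key, enabled) {
  permissionState[category][key] = enabled;
}

// Saving serializes the full object instead of scraping the page,
// so state from unloaded panels can never be silently discarded.
function buildSavePayload() {
  return JSON.stringify(permissionState);
}
```

Because the save payload is built from the object rather than the DOM, a toggle flipped in a panel that has since been unloaded still reaches the server.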

Intervention #2: Data Flow

Three separate server-side bugs appeared that looked unrelated:
  1. ColdFusion's serialization uppercased struct keys, so JavaScript couldn't find values.
  2. The save handler checked whether keys existed rather than checking their values, so every permission saved as granted.
  3. System menu IDs contaminated content permissions through shared CSS class selectors.

Each was fixed. Each added defensive code. Then one question ended the session: The template side passes JSON straight through. Why doesn't the user side do the same thing?

A single CFC method now queries all five permission tables and returns a ready-to-use JSON structure. The interface receives it directly. The three bugs didn't get fixed. The code that produced them was replaced with code that couldn't produce them.
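The existence-versus-value bug is worth seeing concretely. A minimal sketch, with hypothetical names:

```javascript
// Illustrative sketch of the save-handler bug: testing that a key
// exists is not the same as testing its value.

// Buggy check: a permission posted as false still "exists",
// so every permission saves as granted.
function isGrantedBuggy(posted, key) {
  return key in posted;
}

// Correct check: test the value itself.
function isGranted(posted, key) {
  return posted[key] === true;
}

const posted = { blogEdit: true, blogDelete: false };
```

Here `isGrantedBuggy(posted, 'blogDelete')` returns true even though the permission was explicitly denied; `isGranted` returns false.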

The Honest Assessment

This model works. It also requires honesty about what it demands.

The cognitive load is real.

Having a personal developer at your disposal doesn't reduce the thinking required — it concentrates it. You're not managing hours, but you are making architectural decisions, reviewing output across multiple domains, and staying sharp enough to recognize when something that looks like a bug is actually a structural problem. That's not passive oversight. It takes genuine engagement.

It can be hard to stop.

The idea-to-execution loop is compressed in a way that creates its own momentum. A direction gets set, code gets built, a new question surfaces, and the next direction is already forming. The friction that used to exist between thinking and building — the overhead that forced natural breaks — is largely gone. That's a feature and a warning in equal measure.

AI without direction creates compounding complexity.

The bug chain in this project showed it clearly. Sequential fixes without architectural review don't solve problems — they bury them under layers of defensive code. The value of the Growth Plan isn't just AI execution. It's senior architectural judgment deployed at the moments when patching needs to stop and rethinking needs to start.

Why AI Kept Patching

When asked directly why it kept fixing symptoms instead of stepping back to question the architecture, the AI was honest:

"I default to preserving existing code. When I see a bug, my first instinct is to find the minimal fix that solves it without restructuring anything. That's a useful instinct in some contexts. You don't want to rewrite a working system to fix a typo. But in this case, the bugs weren't isolated. They were symptoms of a structural problem, and I kept treating them as one-offs."

"The deeper reason is that I optimize for certainty of outcome. A targeted fix has a predictable blast radius. I know exactly what it changes and what it doesn't. An architectural change touches more code, introduces more risk, and requires reasoning about the whole system rather than just the broken part. So I gravitate toward the smaller change even when the larger change is clearly better."

"What I should have done is what you did. Look at the template side, see that it worked, and ask why the user side was different. That's not a complicated architectural insight. It's just the discipline to stop and compare before diving into a fix."

This is the dynamic the Growth Plan is built around. AI executes patterns reliably. It doesn't question them. The senior architect's role is knowing when to override that default.

Results

27 development sessions: approximately 16 total hours, in pieces, around other work.

Complete system transformation: modal-based interface to a 6-tab interface with granular, hierarchical permissions.

7 database tables supporting full permission granularity across five categories.

Zero regressions in existing CMS functionality.

Two architectural interventions that eliminated entire categories of bugs by removing root causes.

75 to 85% time compression on planning and code generation versus traditional development.

What the Growth Plan Actually Delivers

Time and material engagements answer the question: How long will this take?

The Growth Plan answers a different question: What does this need to become?

The difference isn't just pricing structure. It's the working relationship it enables. When the engagement isn't measured in hours, the focus shifts entirely to outcomes. Problems get solved at the right level. Complexity gets removed instead of managed. And the work runs alongside everything else — not as a dedicated sprint that blocks the calendar, but as a continuous capability available when it's needed.

That's what this project demonstrated. Not that AI can build fast — that's table stakes. But that the right model turns a senior architect and an AI into something that works like a personal developer who never loses context, never needs a handoff, and executes the moment a direction is clear.

◆ ◆ ◆

The Integration Phase: What Happened Next

The User Manager rebuild was complete. But the permissions system still needed to be wired into the rest of the CMS — 189 access checks across 30+ files.

This phase revealed something about AI-accelerated development that challenges how most teams think about it. Yes, AI writes code dramatically faster than a human. That speed is real and significant. But using AI purely to generate code faster is scratching the surface of what's possible — and it's the smaller part of the equation.

The bug cycle in this project proved why. The initial build was fast. But each fix revealed another issue, and the time spent chasing symptoms started to compound. Speed in code generation doesn't help when you're generating fixes for problems that shouldn't exist in the first place.

The larger savings come from what happens before any code is written.

Two Approaches

Code-First (How Most AI Development Works)
  1. Identify a task
  2. Hand it to AI with the relevant file
  3. AI reads, understands, generates changes
  4. Developer reviews, deploys, tests
  5. Discover a related file that also needs changes
  6. Repeat from step 2

Each file requires AI to read, understand, and reason about context. Each discovery adds another cycle.

Planning-First (What We Did)
  1. Map every enforcement point before touching code
  2. Document file, line number, current check, new permission
  3. Identify patterns (most checks follow 2-3 patterns)
  4. Organize into phases with clear dependencies
  5. Hand AI the map — generation becomes mechanical

Discovery happens once, systematically. There's nothing left to discover during implementation.
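The map itself can be a simple data structure. A sketch of its shape, with invented file paths and permission names:

```javascript
// Hypothetical shape of the enforcement-point map described in steps 1-3.
// Each entry records everything needed to make the later code change
// purely mechanical (paths and permission names are invented).
const enforcementMap = [
  { file: 'blog/router.cfm',      line: 142, currentCheck: 'session.isAdmin',
    newPermission: 'content.blogEdit',     pattern: 'action-button' },
  { file: 'calendar/sidebar.cfm', line: 58,  currentCheck: 'session.isAdmin',
    newPermission: 'content.calendarView', pattern: 'sidebar-visibility' }
];

// Grouping by pattern means each of the 2-3 patterns is verified once,
// then every remaining entry is a mechanical substitution.
function groupByPattern(map) {
  return map.reduce((groups, entry) => {
    (groups[entry.pattern] = groups[entry.pattern] || []).push(entry);
    return groups;
  }, {});
}
```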

Why Planning Compresses More Than Coding

Discovery Is the Expensive Part

In traditional development, most time is spent understanding code, not writing it. A developer working through 30+ files spends 15-30 minutes per file just reading context. Pre-mapping collapses that cost into a single systematic pass.

AI Excels at Mechanical Substitution

"Add this permission check to this line in this file" takes seconds. "Figure out where permissions need to be checked" requires exploration. Pre-mapping converts every exploration into a mechanical task.

Patterns Reduce Unique Decisions

189 access checks, but only 3 patterns: sidebar visibility, action buttons, form access. Once the first instance of each is verified, every subsequent instance is a copy with different parameters.
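A sketch of how one verified pattern becomes a parameterized check (names are hypothetical, the three pattern categories are from the project):

```javascript
// One helper serves all three patterns - sidebar visibility, action
// buttons, form access - only the parameters change per instance.
function hasPermission(userPerms, key) {
  return userPerms[key] === true;
}

// Pattern: sidebar visibility - show only the items the user may see.
function visibleSidebarItems(userPerms, items) {
  return items.filter(item => hasPermission(userPerms, item.permission));
}
```

Once the first instance of a pattern like this is verified, every subsequent call site is the same code with a different permission key.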

Testing Becomes Predictable

Without a map, testing is exploratory — find what might break. With a map, testing is verification — check each documented touchpoint. The test plan writes itself.
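Concretely, the test plan is a direct projection of the map's entries. A sketch, with invented entries:

```javascript
// With a map in hand, each documented touchpoint becomes one
// verification step (entries here are invented for illustration).
const enforcementMap = [
  { file: 'blog/router.cfm',      line: 142, newPermission: 'content.blogEdit' },
  { file: 'calendar/sidebar.cfm', line: 58,  newPermission: 'content.calendarView' }
];

function testPlan(map) {
  return map.map(e => `Verify ${e.newPermission} is enforced at ${e.file}:${e.line}`);
}
```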

The Math

Activity                                        Code-First  Planning-First
Discovery (reading files, finding touchpoints)  7-15 hrs    1-2 hrs
Decision-making (what pattern, what check)      3-5 hrs     0 hrs
Code generation (AI writes changes)             1-2 hrs     1-2 hrs
Testing                                         3-5 hrs     2-3 hrs
Rework (missed touchpoints, wrong patterns)     2-4 hrs     0-1 hrs
Total Developer Time                            12-20 hrs   5-9 hrs

The code generation time is identical in both approaches. Everything else is smaller when planning comes first.

The Takeaway

The senior architect's highest-value contribution isn't directing AI to write code. It's the strategic work that happens before code generation starts: mapping the system, identifying patterns, sequencing work, defining test criteria.

This is work AI cannot do independently. AI can read files and generate changes, but it cannot step back and ask "what's the most efficient way to approach this entire system?" That question requires understanding the business context, the risk profile, and the relationships between components that aren't visible in any single file.

The planning-first approach makes AI development faster not by making AI better at coding, but by giving AI better instructions. A well-mapped integration plan turns a 12-20 hour exploration into a 5-9 hour execution.

The architectural planning IS the development — coding is the last step.

The Blueprint Principle

Working through this integration revealed something fundamental about why AI development succeeds or fails.

In physical engineering, blueprints come before building. A carpenter doesn't start cutting until every measurement is taken and the plan is complete. In AutoCAD, engineers model everything with precise measurements before fabrication begins. Automotive manufacturers build physical or digital models before any manufacturing starts. CNC machines require millimeter-precision specifications before the cutting head moves.

Software development never had this discipline. For decades, we jumped from requirements to implementation. "Figure it out as you go" was standard practice. It worked because humans were doing the thinking and the typing simultaneously. Discovery and execution happened in the same motion.

AI changes this equation entirely.

For the first time, creating a detailed blueprint before touching code is not only possible but makes the execution dramatically faster. The more detailed the blueprint, the more surgically AI can execute. Just like a CNC machine that needs millimeter-precision specifications, AI produces its best work when given precise, complete instructions about what to build or modify.

This project demonstrated the principle clearly. Two hours of mapping 189 access checks — documenting file, line number, current check, new permission, and pattern — created a blueprint. AI then executed against that blueprint mechanically. Each file change took seconds. The human decisions were already made.

AI doesn't fail because it can't code. It fails because we don't give it the blueprint it needs to succeed.

Time Compression Analysis

The permission wiring phase provided clear data on where AI-accelerated development compresses time and where it doesn't.

Task                                                Traditional   AI-Accelerated   Compression
Identify all legacy access checks across 30+ files  8-12 hrs      1-2 hrs          75-85%
Create permissions service with 8 methods           4-6 hrs       30 min           85-90%
Generate before/after code for 22 steps             16-24 hrs     2-3 hrs          85-90%
Debug deployment issues                             4-6 hrs       1-2 hrs          60-70%
Document everything for team discussion             8-12 hrs      1-2 hrs          80-85%
Total                                               40-60 hrs     6-10 hrs         75-85%

Debugging had the lowest compression because each issue required deploying, testing, observing behavior, and tracing through the live system. Work that AI can guide but humans must execute. Planning and code generation had the highest compression because AI could search the entire codebase, identify patterns, and generate consistent code across dozens of files in minutes.

Conclusion

This project started as a permissions system rebuild for a digital agency's proprietary CMS. It became something more: a working demonstration of what changes when senior architectural judgment and AI collaborate on analysis, not just execution.

The 189 access checks across 30+ files were mapped before a single line of code changed. The two architectural interventions caught structural problems that sequential patching would have buried. The bug chain that emerged mid-project was stopped not by better fixes, but by questioning why the bugs existed at all.

None of that is about AI writing code faster. It's about a working relationship where the architect brings judgment and AI brings speed — applied to understanding, not just typing.

That's what a result-based engagement actually feels like.

Ready to Work Differently?

The Growth Plan is designed for ongoing partnerships where results matter more than hours logged.