From the Book · March 16, 2026

What Happens When You Apply a Software Methodology to a Government Proposal

Five days. Thirty pages. One person. And a discovery about AI I wasn't expecting.

I've been part of proposal teams before. I know what traditional government proposal development looks like: a proposal manager coordinating the effort, technical writers shaping the narrative, subject matter experts providing content, a pricing analyst building the cost model, reviewers catching gaps. Two to three weeks of coordinated effort for a project of any real scope.

When I saw the RFP for a municipal website redesign, I decided to try something different. Not because I didn't respect what proposal teams do. Because I'd been developing an AI collaboration methodology on software projects and wanted to see how far it could stretch.

Could one person, applying the same approach I'd used to build applications and migrate legacy systems, produce a competitive government proposal?

The answer was yes. Five days. Thirty pages. But the timeline wasn't the discovery.

The discovery was what AI became during those five days.


The Question Before the Work

On my software projects, I knew what I was building before AI got involved. The product definition existed in my head or in the source code. AI's role was execution: help me build this thing I've already defined.

The proposal was different. I had an RFP. Someone else's description of what they wanted. Before I could write a single word, I had to figure out what I was actually building.

So my first message wasn't "write me an executive summary." It was: should I even compete for this?

AI came back with a strategic assessment. Strengths. Gaps. Competitive positioning. That raised more questions. Where do I need partners? What does the competitive field look like? What will other vendors propose that I can't?

I spent the first two days not writing a single word of the proposal. Just thinking. Analyzing. Building a picture of what I'd actually be delivering and whether it was worth pursuing at all.

The RFP process included a Q&A period where all vendors submitted questions. Over two hundred questions came back. I uploaded the entire document and asked AI to find the patterns.

Which vendors were serious contenders and which were fishing. What CMS platforms the competition was likely proposing. Who was planning turnkey solutions versus custom builds. For the first time on any project, I could see the competitive landscape before committing to a direction.

AI also found the emotional center of the entire opportunity buried in the client's own responses. Twenty to thirty content editors constrained by rigid templates. Unable to update their own website without technical support. The current platform worked technically but failed the people who had to use it every day.

That insight became the through-line of everything I wrote. Not my technical capabilities. The client's pain.


Applying the Methodology to a New Domain

When I finally started writing on day three, the collaboration followed the same pattern I'd used on code projects. Each section of the proposal was a domain: work through it methodically, provide the raw materials (resumes, project histories, technical details), give AI the strategic direction, review the output, redirect what didn't fit.

The redirecting was constant. AI has its own voice. A government proposal voice. Generic, polished, professional-sounding language that says nothing specific. The kind of writing that evaluators have seen a thousand times.

I kept pushing back. "That doesn't sound like us. Say what we actually do and why it matters."

At one point AI drafted a section about how we would "satisfactorily meet the town's requirements." I stopped it. We don't aim to satisfactorily meet requirements. That's not how we think. Write it the way we actually talk about our work.

Another time AI drafted a response to a weakness in our proposal that essentially admitted the gap and asked the evaluator to overlook it. I redirected: don't apologize for what we don't have. Show them what we offer that nobody else does.

These weren't grammar corrections. They were identity corrections. AI was writing a competent proposal. I needed it to write our proposal.


The Moment That Changed My Approach

Midway through, I did something I'd never thought to do on any of my code projects. I asked AI to validate my work. Not check for errors. Validate the entire approach. Review the proposal as if you were a competing vendor or a critical evaluator. Find every weakness. Every gap. Every reason to score us lower.

On the software projects, I'd use AI to build and execute. It never occurred to me to stop and ask AI to challenge what we'd built. But the proposal context made it natural. You're competing. Someone is going to evaluate this against other submissions. Why not see what they'll see before they see it?

AI didn't hold back. No municipal website experience in our portfolio. Less established CMS platforms than turnkey government vendors. A lean team in a field where competitors would propose ten or fifteen people.

It stung. You pour days into building something you're proud of, and then AI systematically explains why you might lose.

But the critique made the proposal stronger. I went back and addressed every vulnerability AI found. And it opened my eyes to something I'd been missing on every previous project: AI isn't just an execution partner or a thinking partner. It's a validation partner. I just hadn't thought to use it that way until the proposal forced me to.


What Five Days Taught Me

Looking back, this project was a turning point I didn't recognize at the time.

Across every project I'd done with AI (building applications, migrating systems, converting codebases), I was practicing something without realizing it. Planning. Execution. Delivery. All of it was happening. I just never stopped to see it as a process.

On the code projects, execution dominated. I knew what I was building. I jumped in and built it. The planning was instinctive, buried inside the doing. It worked, but I couldn't have explained why it worked or taught someone else to do it.

The proposal forced the planning to the surface. I couldn't skip it. I had to answer what am I building and how will I deliver it before writing a single word. And for the first time, AI wasn't my execution partner. It was my thinking partner. The strategic assessment. The competitive analysis. The Q&A patterns. The red team. All of that was planning. Deliberate, structured, visible planning.

That experience taught me something that reshaped how I approach everything now. The planning, the execution, the delivery. They were always there. On every project. I just didn't see them as connected pieces until the proposal made it impossible to skip one.

This journey with AI has been about more than productivity. It's been about discovering a process I was already following and finally understanding why it works.


What Five Days Produced

One person. Five days. A submission-ready package: a 30-page technical proposal with professional diagrams, detailed pricing, team qualifications, past performance narratives, appendices, and a compliance checklist verifying every RFP requirement was addressed.

The traditional timeline for that deliverable is two to three weeks with a dedicated team.

But the numbers aren't the point. The point is that the methodology I'd developed on code projects, building software and migrating systems, translated completely to a domain I'd never worked in. The same approach. The same rhythm. A completely different deliverable.

The proposal wasn't about code. The methodology was never about code. It's about structured collaboration between human expertise and AI execution. Code was just where I discovered it.

Want the full methodology?

The Architect and The Navigator is available now, including how the red team assessment works, what the strategic analysis looked like, and why the same approach works across completely different domains.

Jae S. Jung has been building since 1997: infrastructure, SaaS platforms, legacy migrations, distributed teams across four continents. Not drawing diagrams and handing them off. Actually building. That's the philosophy behind WAM DevTech. AI doesn't replace nearly 30 years of that. It amplifies it.
