This article emphasizes that "feature success" and "product success" are different things, and walks through the 13-step development process that real-world teams follow to prevent product failure. From before a single line of code is written through post-launch retrospectives, it uses the deliberations and decisions made at each stage to show the complexity and weight of actual development. In particular, it stresses that identifying risks during the planning and design phases, validating features with data, and having the courage to acknowledge failure and iterate are essential to product success.


1. The Planning Phase: Setting Product Direction and Crystallizing Ideas

Even though not a single line of code is written during this period, the planning phase is the most intensely debated segment of the entire project lifecycle. It's critical to answer the fundamental question of "what to build" and eliminate potential failure factors in advance.

1.1. Step 1. PF (Prioritized Features): Defining Priority Features

In this step, the Product Owner (PO) discusses and determines project priorities by considering business value and available resources. Ideas are always abundant, but resources are scarce. The most important question is: "What is the most expensive problem right now?" For example, if you're choosing between a "new sign-up event" and "fixing payment errors," log data showing a 28% drop-off rate at the payment stage would lead you to conclude that fixing payment errors is the more urgent issue. Planning starts with finding where the pain is greatest.
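The "where does it hurt most" question is usually answered with exactly this kind of funnel arithmetic. Below is a minimal sketch of computing a payment-step drop-off rate from event logs; the event names, user IDs, and the 50% figure that falls out are illustrative assumptions, not data from the article:

```python
# Hypothetical event log: (user_id, funnel_step) pairs. Real teams would
# pull this from an analytics pipeline, but the calculation is the same.
events = [
    ("u1", "payment"), ("u1", "complete"),
    ("u2", "payment"),
    ("u3", "payment"),
    ("u4", "payment"), ("u4", "complete"),
]

def users_per_step(events):
    """Map each funnel step to the set of distinct users who reached it."""
    users = {}
    for user, step in events:
        users.setdefault(step, set()).add(user)
    return users

steps = users_per_step(events)
entered = steps["payment"]
completed = steps.get("complete", set()) & entered
dropoff = 1 - len(completed) / len(entered)
print(f"payment-step drop-off: {dropoff:.0%}")
```

A number like this, compared across candidate problems, is what lets a PO say "payment errors first" with evidence rather than instinct.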

1.2. Step 2. FBS (Feature Being Specified): Defining Detailed Feature Requirements

This is where team skill levels truly diverge. Simply listing "let's add this feature" is something anyone can do, but in practice, you need to define a Kill Metric and Guardrail Metric alongside the North Star goal. If conversion rate went up but refund rates skyrocketed or customer service inquiries doubled, that plan was a failure.
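The interplay between a North Star metric, a Kill Metric, and a Guardrail Metric can be sketched as a simple decision rule. All metric names and thresholds below are illustrative assumptions, not a standard API:

```python
# Sketch of a launch decision rule. A Kill Metric breach overrides any
# North Star gain; a Guardrail breach forces a human review ("hold").
def evaluate_experiment(metrics):
    """Return 'kill', 'hold', or 'ship' from experiment metric deltas (%)."""
    if metrics["refund_rate_delta"] > 5.0:    # Kill Metric: hard stop
        return "kill"
    if metrics["cs_tickets_delta"] > 10.0:    # Guardrail Metric: review first
        return "hold"
    if metrics["conversion_delta"] > 1.0:     # North Star clearly improved
        return "ship"
    return "hold"

# Conversion went up, but refunds skyrocketed: the plan failed anyway.
verdict = evaluate_experiment(
    {"conversion_delta": 2.3, "refund_rate_delta": 7.1, "cs_tickets_delta": 1.0}
)
print(verdict)
```

The point is the ordering: the kill check runs before the success check, which is exactly the "conversion up but refunds skyrocketed means failure" logic in the text.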

1.3. Step 3. RFD (Ready for Design): Design Preparation and Plan Review

Just because detailed specifications are ready doesn't mean you jump straight into design. At this stage, the plan (1-Pager) is reviewed once more with the PO to finalize the scope and direction of the design. Sometimes, after fierce discussion, a feature gets scrapped entirely—and that's actually a very healthy sign.

Sinking tens of millions into a feature that fails after launch is far more expensive than cutting it here. Having the courage to stop at the right time saves the product.


2. The Design Phase: Aligning Business and Technology

If the planning phase determined "what to build," this phase is about rigorously validating whether "it's really okay to build it this way." Failing to catch risks here can cause serious problems later on.

2.1. Step 4. FBD (Feature Being Designed): UX Design and Usability Testing

This is where UX designers create the screens and experiences users will actually see, based on the plan. What matters far more than whether the design is pretty is whether users find it confusing. If during usability testing with design mockups a user asks, "What's this button for?"—that design is considered a failure and must be redrawn from scratch. It's far better to fix things at this stage than to face user backlash after launch.

2.2. Step 5. RFE (Ready for Engineering): Engineer and QA Kickoff Meeting

Once planning and design are complete, engineers and QA staff gather for a kickoff meeting to officially begin development. This stage is a critical turning point—the moment a developer casually says, "Sure, this seems fine," they assume responsibility for all risks that arise from that feature. That's why you need to be a "prickly senior" at this stage.

"What about handling network disconnections?" "Were refund cases reflected in the plan?" "Are there any personal data legal issues?"

Asking these uncomfortable questions prevents much bigger problems like customer service call floods or overnight server outages after launch. 90% of variables must be caught at this stage for the product to survive.


3. The Development and Verification Phase: Creating Actual Deliverables

Having crossed the major hurdles of planning and risk checking, it's now time to enter the "battlefield" of actually producing results. This phase isn't just static time spent writing code—it's a continuous series of intense negotiations to make the best choices.

3.1. Step 6. FUE (Feature Under Engineering): Feature Implementation and System Building

This is where the assigned engineer implements the actual feature and builds systems based on the plan and design. Here, API negotiations can turn into unexpectedly fierce battles. Frontend and backend teams often have different perspectives on the same feature.

Frontend: "Could you just add this one field to the response?" Backend: "To pull that, we'd need three more DB joins and a complete overhaul of the cache structure."

What appeared to be a "one-line addition" can escalate into a decision that shakes the entire architecture. If adding a new feature slows app launch by 0.4 seconds, for a service with millions of MAU (Monthly Active Users), that can directly translate into revenue loss. That's why this stage often becomes a design battle where features get deferred to address performance first.
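The revenue argument can be made concrete with a back-of-envelope calculation. Every number below is an illustrative assumption (the article gives only the 0.4-second figure); the conversion-loss-per-100ms rate in particular would have to come from your own measurements:

```python
# Rough estimate of the cost of a 0.4 s launch slowdown.
mau = 3_000_000          # monthly active users (assumed)
conv_rate = 0.05         # baseline purchase conversion (assumed)
avg_order_value = 20.0   # revenue per converting user (assumed)
loss_per_100ms = 0.007   # relative conversion loss per added 100 ms (assumed)

added_ms = 400
relative_loss = loss_per_100ms * (added_ms / 100)   # 2.8% of conversions
monthly_loss = mau * conv_rate * relative_loss * avg_order_value
print(f"estimated monthly revenue at risk: ${monthly_loss:,.0f}")
```

Even with made-up inputs, running this kind of estimate is what turns "the app feels slower" into a number that can win an architecture debate.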

3.2. Steps 7 & 8. RFQ (Ready for QA) & FUQ (Feature Under QA): QA Preparation and Verification

Once development is complete, the team prepares for QA review (RFQ). Then the QA team thoroughly checks everything from UI details to functional specifics (FUQ). QA is not the enemy—they're an ally. The harder QA hits you here, the less users will hit you after launch. When QA uncovers all kinds of edge cases—network disconnections, tens of thousands of simultaneous connections—any complacent thinking of "this should be good enough" gets shattered. But discovering and fixing problems here is a hundred times better. Thorough QA is the only shield that protects developers' peaceful sleep.
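The kind of complacency QA shatters shows up even in tiny examples, such as code that calls a flaky dependency. Everything below (the `charge` function, the retry wrapper, the failure counts) is a hypothetical illustration, not a real payment SDK:

```python
class NetworkError(Exception):
    """Stand-in for a transient connection failure QA will simulate."""
    pass

def with_retry(fn, attempts=3):
    """Call fn, retrying on NetworkError; re-raise after the last attempt."""
    for i in range(attempts):
        try:
            return fn()
        except NetworkError:
            if i == attempts - 1:
                raise

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def charge():
    calls["n"] += 1
    if calls["n"] < 3:
        raise NetworkError("connection reset")
    return "charged"

print(with_retry(charge))  # survives two simulated disconnections
```

Code without the retry wrapper passes every happy-path test and then fails the first time QA pulls the network cable, which is precisely the edge case this stage exists to find.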


4. Data-Driven Final Validation and Launch Decisions

Don't celebrate just because QA passed successfully. From here on, it's time to face the cold judgment of data rather than relying on developer intuition or effort. Mental resilience is truly important at this stage.

4.1. Step 9. RFT (Ready for Testing): Technical Preparation for Testing

Before deploying QA-approved features to the actual production environment, this step completes all technical preparations for A/B testing or canary releases. Features are never deployed to all users at once. The first question is always: "If this fails, how do we roll it back?" If Feature Flags, gradual deployment, or instant rollback strategies aren't ready, the experiment shouldn't even begin. If an error occurs with no way to revert, it's a disaster for developers.
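A minimal sketch of the percentage-based Feature Flag this step depends on is below. The flag and user names are illustrative; the design point is that rollout is a single number you can change at runtime, so "instant rollback" is just setting it to 0:

```python
import hashlib

def in_rollout(user_id: str, flag: str, rollout_pct: int) -> bool:
    """Deterministically bucket a user into [0, 100) for this flag.

    Hashing the (flag, user) pair keeps the assignment stable across
    sessions, so a user doesn't flicker between old and new behavior.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_pct

# Canary at 5%: a given user always gets the same answer for this flag.
enabled = in_rollout("user-42", "new_checkout", 5)
# Rollback: drop the percentage to 0 and the feature is off for everyone.
rolled_back = in_rollout("user-42", "new_checkout", 0)
print(enabled, rolled_back)
```

Production systems layer configuration services and kill switches on top, but the rollback guarantee the text demands reduces to this: the new code path must be unreachable the moment one number changes.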

4.2. Step 10. FUT (Feature Under Testing): A/B Testing and Data Analysis

This stage involves running A/B tests with a subset of actual users, then collecting and analyzing the data to verify whether the hypotheses set during planning were correct. This is where morale can really waver. For example, you hypothesize that "changing the button color will improve conversion rate," but the result comes back: "clicks increased but completed purchases decreased." The whole team goes quiet. Data gives you results but doesn't tell you why, so the team must dig intensely into why clicks went up while purchases went down. You can't let your ego get in the way here.
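When the data comes back, the first check is whether the drop is even statistically real. Below is a sketch of a two-proportion z-test on purchase conversion between control (A) and treatment (B), using only the standard library; all counts are made up for illustration:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: the two conversion rates are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Treatment got more clicks, yet purchase conversion fell: 5.0% -> 4.4%.
z, p = two_proportion_z(1000, 20000, 880, 20000)
print(f"z={z:.2f}, p={p:.4f}")  # negative z: B converts worse than A
```

If p is below your significance threshold, the decline is unlikely to be noise, and the team has to confront it rather than explain it away.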

Admitting that a feature you spent a month building is actually hurting metrics—that's the hardest but most important attitude in practice.

4.3. Steps 11 & 12. FL (Feature Launched) & FNL (Feature Not Launched): Launch or Kill Decision

Based on A/B test results, the feature either officially launches to all users (FL), or the team decides not to launch it (FNL) because the hypothesis was wrong. If metrics look good, you deploy with a smile; if they're bad, you must boldly cut it. Even a feature you stayed up nights building for a month—if the data says no, you must have the courage to discard it. What's truly scary is a feature that failed but stays in the codebase. Such features make code complex and continuously slow down future development as technical debt. Having the courage to throw things away saves the product.

4.4. Step 13. Post-Mortem: Problem Analysis and Improvement Planning

After the entire project concludes, this stage analyzes the problems and success factors to prepare improvements for the next project. In practice, incidents can happen at any time. But when an incident occurs, hunting for who made the mistake is amateur hour. Real teams relentlessly dig into "why the system failed to prevent this mistake."

"Why did the alert fire late?" "Why did the rollback take so long?" "Why was this missing from the test cases?"

Repeating these questions gradually makes the team stronger. Humans make mistakes, but the system must be what prevents them.


Conclusion

The most expensive cost in practice isn't developer salaries—it's sprinting full speed in the wrong direction for three months. That's why even shipping a single feature requires going through such a meticulous and tedious process. School teaches you to write "code that works," but the real world teaches you to build "products that don't fail."

This 13-step process is primarily the story of larger companies, but it's drawn from the author's experience working as a senior developer for 13 years. Even if you're a solo developer who shortcuts many of these steps, understanding and internalizing this overall context will help you go beyond just writing good code and become a builder who creates "successful products." May this process help you all grow into stronger developers.
