05.06.26 By Dom Profico

When everyone on the product team can produce running software, the entire development model changes: from sequential handoffs of intermediate documents to parallel forging of the real thing.
In Part 1 of the Vibe Coding blog series, I argued that the quality problems attributed to vibe coding aren’t new. They’re the same problems our industry has always had, amplified by faster output. The solution isn’t restricting who can use agentic tools. It’s building governed environments where those tools produce reliable output by default.
But that argument was scoped to engineers. The real shift starts when the people around engineering (product managers, UX designers, business analysts, and QA professionals) can also produce running software.
The answer isn’t just “more people writing code.” The answer is that the way we create software changes fundamentally.
Think about what actually happens in a typical product development cycle today. The PM writes a PRD. The designer turns it into mockups. The BA translates those into acceptance criteria. The engineer turns everything into code. QA writes test plans against the criteria.
Five roles. Five separate artifacts. Four translation steps between them. And at every single translation step, fidelity degrades.
The engineer interprets the mockup slightly differently than the designer intended. The test cases don’t quite match the acceptance criteria because the BA used ambiguous language. The PRD describes a workflow that doesn’t survive contact with technical constraints the PM didn’t know about. The designer’s responsive behavior wasn’t specified precisely enough, so the engineer made assumptions the designer wouldn’t have made.
We’ve normalized this. We treat it as the cost of doing business. Entire methodologies exist just to manage the information loss. Sprint reviews to catch misinterpretations. Design QA to catch visual drift. UAT to catch requirement gaps. All of it is rework driven by the fact that everyone works on representations. No one works on the product.
Imagine commissioning a suit of armor where every specialist, the smith, the leather worker, the engraver, drew a detailed picture of their piece and handed it to a single craftsman who actually made everything. Then each specialist reviewed whether the finished product matched their drawing. No armoring workshop would operate that way. Each specialist works on the actual metal, the actual leather, the actual piece. Every hand that touches the suit is shaping the real thing.
Software development has operated like the absurd version of that imaginary workshop for decades. Agentic coding methods make the absurdity optional.
The contribution-based model doesn’t just change who touches the software. It changes the nature of how software takes shape.
Think about what a blacksmith does. They don’t execute a blueprint. They heat the metal, strike it, observe the result, adjust, strike again. The artifact takes shape through iterative interaction with the material. Each strike is both an act of creation and an act of discovery. The finished product emerges from the process rather than being specified in advance.
That’s a much closer analogy to what actually happens when someone uses an agentic tool to create software. A product manager forging a workflow isn’t executing a pre-defined specification. They’re prompting, seeing what comes back, adjusting, prompting again. The artifact takes shape through rapid iteration. Each cycle is simultaneously prototyping and producing the real thing. They’re the same activity.
The artifact’s maturation follows a natural progression through three modes.
Forge is the initial creation. Rough, rapid, iterative. You’re discovering the shape through repeated interaction. The PM forges a workflow, the designer forges a layout, and the engineer forges the data layer. The output is real and functional, but unpolished, like a breastplate shaped under the hammer, not yet fitted.
Design is deliberate shaping. Now that the raw form exists, you make intentional decisions about how it should work and feel. The designer refines interaction patterns against the design system. The BA encodes precise business logic into what the PM forged. The engineer defines how the forged components integrate. This is the armorer adjusting the articulation points, ensuring range of motion, fitting pieces against one another.
Refine is the finishing work. Polishing, hardening, optimizing for production. The engineer tunes performance and locks down the attack surface. Quality Assurance (QA) validates edge cases against the real artifact. The designer ensures accessibility and cross-device behavior. The armor gets its final tempering, its edges smoothed, its straps fitted.
These aren’t role-based phases. They’re modes that any contributor can operate in, and that each piece of the artifact cycles through as it matures. A PM might forge in the morning and design in the afternoon. An engineer might refine one component while forging another. The modes describe the relationship between the contributor and the material at a given moment, not a stage in a sequential process.
A suit of armor isn’t a single monolithic piece. It’s composed of distinct parts that must work together as a system: the breastplate, the pauldrons, the gauntlets, the greaves, the helmet. Each piece requires specific expertise to forge well. Each can be worked on independently. But all of them have to fit together into a coherent whole.
Software is the same. A feature isn’t one undifferentiated mass of code. It’s composed of contribution surfaces: the workflow, the UI components, the data layer, the business logic, the validation rules, the authentication, the test coverage. Each surface can be forged, designed, and refined by the contributor best positioned to do that work.
Here’s what makes this model structurally different from the traditional relay race of sequential handoffs.
In the traditional SDLC, handoffs transfer intermediate artifacts: a PRD, a mockup, a spec. The recipient has to interpret the intent behind the artifact, translate it into their own medium, and hope the translation is faithful. Review is a separate activity that happens after the translation, when someone checks whether the output matches the input.
In the contribution model, handoffs transfer the artifact itself. When the designer picks up the workflow that the PM forged, they can see exactly what was built. They don’t have to interpret a document. They’re looking at the real thing. And the act of designing it, shaping it with their own expertise, is simultaneously an act of review. They’re implicitly evaluating the PM’s product decisions while explicitly improving the experience.
The same applies at every handoff. When the engineer picks up a designed component, they’re reviewing the design and business logic while hardening the architecture. When QA tests a refined piece, they review everything upstream while validating the quality criteria.
Review isn’t a gate you pass through. It’s embedded in the act of contribution. And it happens through the lens of each contributor’s actual expertise, which is exactly the kind of review that catches real problems.
Feedback also flows naturally. If the engineer discovers that the PM’s workflow doesn’t survive a technical constraint, they modify the artifact. The PM sees the change immediately, understands the constraint through the concrete implementation rather than through an abstract explanation in a Jira comment, and adjusts their thinking accordingly. The cycle that used to take a sprint takes an afternoon.
An armoring workshop doesn’t operate sequentially. Multiple smiths work on different pieces simultaneously. The gauntlet smith doesn’t wait for the breastplate to be finished. But they do need to know the dimensions, the attachment points, the material thickness at the joins. The master armorer provides that coordination, and the shared standards ensure independently forged pieces fit together.
Software works the same way.
The PM is forging the workflow for feature B while the designer is shaping the experience on feature A. The engineer is refining the architecture on a component that both features share. They’re all working on the same codebase, contributing to the same governed artifact, with the pipeline continuously integrating their work and surfacing conflicts.
Each morning, the master armorer walks the workshop. They examine each piece in progress, assess its state, and determine what it needs next. The breastplate was forged to rough shape yesterday; today it needs design attention from the specialist who handles articulation. The gauntlets haven’t been started; the BA is ready to forge the validation logic. The helmet’s engineering was flagged by the governance pipeline overnight; it needs rework before refinement can continue.
This daily assessment, reading the workshop’s current state and translating it into a plan for who works on what in which mode, is the coordination mechanism that replaces sprint planning. It’s lightweight, it’s daily, and it’s grounded in the reality of the artifact rather than in a board of tickets.
Over the course of a feature’s life, the mix of modes shifts naturally. Early days are forge-heavy: rough shapes, initial pieces, foundational structure taking form through rapid iteration. Middle days blend forging new pieces with designing existing ones, making deliberate decisions about behavior, interaction, and integration. Late days are refine-heavy: hardening, polishing, optimizing for production. But on any given day, all three modes are active across different pieces of the suit.
The governed pipeline does the integration work continuously. Contributions land throughout the day. Automated governance checks run on every commit: security, architecture conformance, test coverage, design system compliance. Conflicts surface in near-real-time rather than at the end of a sprint.
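Concretely, one such governance check can be as small as a script that runs on every commit and blocks the merge when a threshold is violated. Here is a minimal sketch of a test-coverage gate; the module names and the 85% floor are invented for illustration, not prescribed by any particular pipeline:

```python
# Hypothetical sketch of one automated governance check: a commit gate
# that fails when any module's test coverage drops below the team's floor.
# Module names and the 0.85 threshold are illustrative assumptions.

def coverage_gate(report: dict, floor: float = 0.85) -> list[str]:
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    for module, covered in report.items():
        if covered < floor:
            violations.append(f"{module}: {covered:.0%} < {floor:.0%} required")
    return violations

if __name__ == "__main__":
    # A made-up coverage report, as a CI step might assemble it.
    report = {"workflow": 0.92, "ui_components": 0.78, "data_layer": 0.88}
    for v in coverage_gate(report):
        print("BLOCKED:", v)
```

In practice a gate like this would run alongside the security, architecture, and design-system checks, with each gate reporting its violations back to whoever made the contribution.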
The obvious question is whether this model survives contact with organizational scale. Multiple features, multiple teams, hundreds of contributors across time zones.
The answer is the same answer from Part 1: governance is infrastructure.
Within a feature, the contribution surfaces provide natural parallelization. Tensions between pieces (say, the PM’s workflow versus the designer’s interaction pattern) are resolved through the daily workshop assessment and through the embedded review that happens at each handoff.
Across features, the governance layer does the heavy lifting. Architectural fitness functions ensure that independent features don’t create structural conflicts. API contracts and service boundaries, defined and maintained by the master armorers in their Tier 1 governance role from Part 1, prevent integration collisions. The pipeline enforces these constraints automatically, regardless of which role or feature the contribution comes from.
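An architectural fitness function can likewise be expressed as an automated test over the dependency graph. The sketch below assumes a hypothetical convention in which cross-feature calls must go through declared contract packages; all module and contract names are invented for illustration:

```python
# Hedged sketch of an architectural fitness function: given each module's
# imports, verify that no feature reaches into another feature's internals.
# Cross-feature traffic must go through declared contract packages.
# Feature, module, and contract names here are invented assumptions.

ALLOWED_CONTRACTS = {"contracts.billing_api", "contracts.catalog_api"}

def fitness_check(imports: dict[str, list[str]]) -> list[str]:
    """imports maps 'feature.module' -> list of imported module paths.

    Returns a list of boundary violations; empty means the check passes.
    """
    violations = []
    for module, deps in imports.items():
        feature = module.split(".")[0]
        for dep in deps:
            dep_feature = dep.split(".")[0]
            # Same-feature imports and declared contracts are allowed;
            # anything else crosses a feature boundary.
            if dep_feature != feature and dep not in ALLOWED_CONTRACTS:
                violations.append(f"{module} -> {dep}: crosses feature boundary")
    return violations
```

Because a check like this runs on every commit, a PM’s forged workflow and an engineer’s refactor are held to the same boundary rules automatically, with no master armorer in the loop.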
The master armorers are not reviewing every contribution from every smith. They’re maintaining the patterns, the standards, the governed environment itself, stepping in when cross-feature architectural decisions are needed, and working directly on the technically complex pieces that require deep engineering expertise. Their leverage is systemic.
There are real challenges in this model that deserve direct acknowledgment.
| Challenge | What It Means | How the Model Handles It |
|---|---|---|
| Adoption resistance | Not every PM wants to forge workflows. Not every designer wants to move beyond Figma. The model can’t be forced. | Make it attractive, not mandatory. Agentic tools can translate PRDs and mockups into working artifacts, so document-first users aren’t blocked. |
| Expertise boundaries | PMs don’t need to understand database indexing. Designers don’t need to manage API rate limits. | Role-based governed environments. Each role sees what matters to them; complexity is abstracted. (Tier model applied by role context.) |
| Ownership complexity | Multiple roles contribute to the same artifact. Ownership becomes unclear. | Ownership stays domain-based: PM (logic), Designer (experience), Engineer (architecture), QA (quality), Tech lead (integration). Ownership shifts from documents to direct contribution. |
| Cultural shift | Organizations assume only engineers touch code. Roles, hiring, and evaluation are built around that. | Redefine roles around contribution. A PM who forges prototypes is a different role. Requires changes in hiring, evaluation, and team structure. |
The core shift is deceptively simple. In the traditional SDLC, most roles produce intermediate artifacts: documents, mockups, specifications, test plans. These describe what the software should be. One role translates those artifacts into the actual software. Every other role then validates whether the translation was faithful.
In the contribution-based model, every role forges, designs, or refines the actual software. The intermediate artifacts disappear. The translation steps disappear. The validation steps transform from “did the engineer interpret my document correctly” into “does the piece, which I helped shape directly, work the way it should.”
The SDLC doesn’t need to be a relay race, where a baton passes from role to role, accumulating interpretation errors at every handoff. It can be an armoring workshop where every specialist shapes the real thing. Each piece is forged, designed, and refined by the contributors with the right expertise. Pieces change hands, but what changes hands is always the actual artifact, never a drawing of it. The master armorer ensures the pieces fit together. The governance infrastructure ensures every piece meets the standard. And every strike of the hammer on every piece contributes directly to the suit that ships.
The constraint that made the relay race necessary, that only engineers can produce software, is dissolving. The organizations that recognize this will ship better products faster. The ones that don’t will keep writing documents about the software they wish they were building.
This is Part 2 of a three-part series. Part 1 explored why vibe coding’s quality concerns aren’t new and why governance infrastructure beats gatekeeping. Part 3 examines why the org chart is the last place waterfall lives, and what replaces it.
If you don’t want to wait for the series to unfold and want to talk about how these ideas apply to your organization, Bridgenext helps enterprises design and implement governed agentic development models. We’d welcome the conversation.
Dominick is CTO and Head of Engineering & Technology at Bridgenext, where he leads the AI, Data, and Digital Engineering practices. He spends most of his time thinking about how enterprises can adopt agentic development without sacrificing the governance that production systems demand.