BETA

A Manifesto

GUIDANCE SYSTEM
FOR SOFTWARE TEAMS

Systems engineering principles for keeping software teams aligned, from prompts to culture.

How AI was used in this document

This manifesto was written by me with AI as a collaborator throughout the process. Here's what that looked like:

  • Structure and brainstorming. AI helped explore framings, challenge weak arguments, and pressure-test the manifesto's structure. Multiple rounds of structured conversation with AI personas were used to review drafts and identify gaps.
  • Research and footnotes. AI performed deep searches across a personal knowledge library (books, videos, academic papers) and the web to find precise references, verify empirical claims, and source the 20 footnotes in this document.
  • Drafting and revision. AI generated candidate prose that was then edited, rewritten, and often discarded. The core thesis, convictions, and editorial judgment are human; the iteration speed is where AI contributed most.

The goal was not to automate writing but to have a thinking partner that could challenge ideas, find sources faster than manual research, and help maintain consistency across revisions.

Under active calibration

This manifesto is a living document. Version 1.0 shipped with the core thesis intact, but several areas are being developed for future versions.

Planned additions

  • Resistance counter-patterns for each pillar — concrete tactics for overcoming structural, inertial, and political resistance
  • Case studies grounding the framework in real team experiences
  • Scale mechanisms — how context, communication, and culture systems behave differently at 5, 50, and 500 people
  • Stronger empirical grounding for the Culture pillar, particularly around calibration and continuity

Open questions

  • How do AI agents change the guidance system model? They need explicit contracts, not cultural intuition — what does that mean for the flywheel?
  • What's the right relationship between this framework and existing methodologies (DORA, Team Topologies, Shape Up)?
  • How does the communication pillar work in fully async / remote-first teams where ambient signals don't exist?

Creeping routine shifts the direction one degree at a time, until you’re shipping the wrong thing six months later.

The symptoms are familiar. A standup degrades into a status report when nobody synthesizes the signals. A retrospective becomes useless when the group doesn’t welcome real concerns. They’re not just ineffective; they create the illusion of control. W. Edwards Deming knew this 40 years ago: inspecting the output doesn’t fix the process that produced it.

Misalignment persists not because people are malicious—though some benefit from ambiguity—but because the system makes drifting easier than aligning. Leaders can’t articulate their own priorities; whatever clarity exists decays through the hierarchy; by the time the vision reaches the people doing the work, it has mostly faded. Nobody is responsible. Everybody is complicit.

The best sign of aligned teams is when someone makes the right call without needing to ask.

Why? Because it means they not only have the right information, they also operate in a culture of meaningful work and trusted judgment.

This happens when three things reinforce each other: clear context makes good communication possible, which builds shared judgment, which generates context worth maintaining. When this flywheel runs, alignment is how the team works. When it breaks, no amount of process fixes it. The best and most essential people are the first to notice, and leave.

There’s a model for describing how alignment works, borrowed from aerospace.

Wait — you have just been pooh-poohing Agile practices, with their story points and metrics dashboards. And now you’re reaching for rocket science? The field where every bolt is counted and measured?

Sharp observation. But the aerospace metaphor isn’t about vanity metrics, it’s about timing and embedding. Rockets don’t wait until they’ve missed orbit to check that they’re on course. They’re constantly doing three things: sensing where they are, interpreting the gap against a destination, and adjusting before the errors compound.

We don’t need more meetings to check how things are going, we need continuous sensing embedded in the work itself.

This requires infrastructure to think with, signal paths that actually reach someone, and shared judgment to act without asking. These are the three pillars we describe below.

Why misalignment persists: designing for resistance

Three types of resistance work against alignment. They operate simultaneously, and any system that addresses only one will be defeated by the other two.

Structural resistance: The system rewards the wrong things. Shipping features is visible and rewarded. Routing information is invisible. Documentation is treated as a deliverable, not infrastructure — created for a milestone and never maintained. Even when someone’s job description does include alignment work (like a TPM), it’s rarely valued equally. The person who ensures three teams know about each other’s decisions gets less credit in the performance review than the person who shipped the feature those teams were building. Rational people optimize for what’s measured.

Inertial resistance: Precedent substitutes for thinking. “We’ve always done it this way” persists because changing it requires effort and the status quo is comfortable. Talk substitutes for action — the team spends two hours discussing alignment, produces a Confluence page, and calls the problem solved. The VP whose teams are shipping doesn’t investigate whether they’re aligned — things look fine from that altitude.

Political resistance: Some people benefit from ambiguity, whether consciously or not. The PM who keeps requirements vague retains decision authority. The architect who doesn’t document preserves their indispensability. The VP who lets teams drift avoids choosing between competing visions. When decisions, context, and status travel through single points of control, those points become both bottleneck and power base. The fix is transparency: publish decision paths, make it observable who routes what to whom.

Clear Context

You can't course-correct if you don't know where you are.

Context that lives only in people’s heads is a single point of failure. AI makes this fragility worse: it hallucinates when fed scattered notes and stale wikis. But it also gives us tools to fix it: documentation can be generated from the work itself.

The problem isn’t usually missing documentation. It’s treating documentation as a deliverable rather than infrastructure. A deliverable gets created and filed. Infrastructure gets maintained, routed, and deprecated just like code.

What makes context a system

Artifacts alone aren’t enough. Someone has to maintain them, route them to the people who need them, keep them honest, and notice when they go stale.

Maintenance. Stale documentation is worse than none, it’s a gauge stuck on green. Every artifact needs an owner and a mechanism that ensures it is actually used. Engineers check ADRs during code review. New hires stress-test runbooks during onboarding. When an artifact no longer justifies its maintenance cost, delete it.

Distribution. A context system must push, not just store. Team A makes an architecture decision in week two. Team B doesn’t learn about it until integration fails in week eight, even though it was documented. The system must deliver decisions to the teams they affect, not wait for them to come looking. If the signal reached an inbox but didn’t change a sprint, it didn’t reach them. And when a document goes stale, ask why — the answer usually points to a distribution path that broke, not a lazy author.

Consistency. The style guide says “use pattern A.” The codebase is half A, half B. Which is authoritative? When you find conflicting context, fix the conflict or delete the stale artifact. Don’t leave two truths competing.

Feedback. Every consumer of context is a sensor. Build feedback into consumption: ADRs linked from PRs get freshness checks during code review. Runbooks that fail during incidents get flagged for update, not quietly worked around. Documentation that nobody accesses gets flagged for deprecation. A context system that can’t sense its own decay will decay.

  • Decision history: ADRs for every non-trivial decision, including rejected alternatives. Code shows what, documentation shows why.
  • Domain language: A Ubiquitous Language that shapes how code is written, how APIs are named, how teams communicate. This is more than a glossary, it’s about design choices for the whole system.
  • Codebase context: Architecture decisions, conventions, commands. Documented where they’re used, not buried in wikis nobody reads.
  • Procedural knowledge: How to deploy, how to respond to incidents, how to onboard. Runbooks as code.
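
The “infrastructure, not deliverable” idea above can be made mechanical. As a minimal sketch (the `docs/adr` path and the 180-day threshold are assumptions, not prescriptions), a CI step could flag ADRs that nobody has touched in too long, forcing the confirm-or-deprecate decision the Maintenance section calls for:

```python
from datetime import datetime, timedelta
from pathlib import Path

# Assumption: tune the threshold to your team's decision cadence.
STALE_AFTER = timedelta(days=180)

def stale_adrs(adr_dir: str, now: datetime = None) -> list:
    """Return ADR filenames whose last modification is older than STALE_AFTER."""
    now = now or datetime.now()
    stale = []
    for adr in sorted(Path(adr_dir).glob("*.md")):
        modified = datetime.fromtimestamp(adr.stat().st_mtime)
        if now - modified > STALE_AFTER:
            stale.append(adr.name)
    return stale

if __name__ == "__main__":
    # Hypothetical layout: one markdown file per ADR under docs/adr.
    for name in stale_adrs("docs/adr"):
        print(f"STALE: {name} - confirm it still holds, or deprecate it")
```

The point is not the script but the feedback loop: staleness becomes a signal the system raises, instead of something a diligent reader has to notice.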

Active Communication

The information existed. It didn't reach the person who could act.

Systems mirror the communication structures that build them. Information follows the paths that already exist. When those paths don’t connect the people who need to share context, no amount of documentation compensates.

The test of communication isn’t whether information exists. It’s whether the person who needed it had it when the decision was made.

What makes communication a system

Signal paths alone aren’t enough. You need to sense what’s happening, get it to the right person, make sense of it across teams, and know when signals are dropping.

Sensing. Routing starts with noticing. There are soft and hard signals. Soft sensing reads the room: the confusion in a Slack thread, the hesitation in a standup, the decision that was made but never broadcast. Hard sensing is automated: contract tests that catch API drift, dependency trackers that flag timeline slips, integration checks that catch misalignment before humans notice it.

Routing. Broadcasting isn’t routing. A decision posted to Slack reached an audience, not a recipient. Routing means the specific person who needs the signal received it, understood it, and had it in time to act. Each degree of separation between a decision-maker and the person affected is a point where the signal can be dropped or changed.

Synthesis. Someone must connect signals across teams and make the whole legible, whether a TPM, tech lead, or architect. “Everyone owns it” means no one does. These roles rarely carry direct authority. They influence through credibility by understanding the technical reality well enough to challenge estimates, and by driving clarity relentlessly enough that people trust their synthesis. But be honest about the limits: influence without authority works best when people already agree. When they don’t, an escalation path must exist.

Feedback. The deeper job of communication is closing the distance between decisions and their consequences until the gap is short enough to feel. Route customer needs, usage data, and support cases back to the people building the product. On the negative side, every cross-team surprise is a signal that routing failed and needs to be tracked: a dependency that slipped, a requirement that changed without notice, an integration that broke.

Infrastructure doesn’t build itself.

Incentives. Saying “stop, this doesn’t fit” can make things awkward with no upside. Routing work is invisible in most performance systems. Until the organization rewards it as visibly as feature delivery, ownership is theoretical.

Motivational debt. Every directive that bypasses explanation accumulates a quieter cost: the compound interest on telling people what to do rather than making them part of the decision. Leadership that dictates instead of synthesizing may ship faster this quarter, but they’re borrowing against the team’s willingness to think for themselves.

The most honest communication channels a team has are its committed artifacts: code, tests, design docs, ADRs, merged pull requests. Status reports can be gamed. Engineering practices aren’t just supporting infrastructure for communication, they are communication infrastructure.

  • Decision routing, not just recording: Every ADR or significant technical decision includes a distribution list of affected teams. The author summarizes the impact for each audience. Filing a decision in a wiki isn’t routing it — posting it to affected teams with a “this changes your assumptions because…” summary is.
  • Cross-team syncs with structure: Structured around dependency changes, upcoming interface modifications, and blocked work. A rotating facilitator prepares the agenda from each team’s board, forcing everybody to see the same things. Open-ended “is there anything to share?” meetings are where signals go to die.
  • CI as signal infrastructure: Daily integration forces alignment. Integration failures expose misalignment within hours, not weeks. Contract tests between services are automated routing — they verify that one team’s changes don’t break another team’s expectations. Feature flags communicate what’s finished versus what’s in progress.
  • Receipt verification, not broadcast: Status communications name their intended recipients and request acknowledgment. The TPM’s weekly synthesis tags specific people with specific implications. If the recipient can’t describe the signal, it wasn’t routed.
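
The contract-test bullet above can be sketched in a few lines. The endpoint shape and field names here are invented for illustration; the pattern is consumer-driven: the consumer team pins exactly the fields it reads, and the producer runs the check in its own CI so a breaking change fails on the producer’s side, hours before integration:

```python
# Consumer-side contract: the fields a hypothetical billing service actually
# reads from a hypothetical /users response. Names are illustrative.
REQUIRED_USER_FIELDS = {"id": int, "email": str, "plan": str}

def check_contract(payload: dict) -> list:
    """Return a list of contract violations for one producer response."""
    violations = []
    for field, expected_type in REQUIRED_USER_FIELDS.items():
        if field not in payload:
            violations.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            violations.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return violations

def test_user_contract():
    # In a real setup this payload would come from the producer's test server.
    sample = {"id": 42, "email": "a@example.com", "plan": "pro"}
    assert check_contract(sample) == []
```

This is routing encoded as a test: the producer cannot break the consumer’s assumptions without being told, by name, which assumption broke.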

Other ideas: Have a developer from Team A attend Team B’s meetings occasionally. Not as a status relay, but as a live sensor for drift. Maintain a dependency map reviewed during sprint planning; when a dependency changes, the map owner notifies affected teams directly with a summary of why it matters to them. And define escalation criteria concretely: if two teams can’t resolve a shared interface after one dedicated sync, the engineering manager mediates within 48 hours. Named paths, named criteria, named timeframes.
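
The dependency map above is also small enough to be data rather than a diagram. A minimal sketch (team and interface names are invented): record which teams consume which interfaces, and when one changes, generate one targeted message per affected team instead of a broadcast.

```python
# A dependency map as data: which teams consume which interfaces.
# Team and interface names are illustrative.
DEPENDENCIES = {
    "auth-api": ["checkout", "mobile"],
    "billing-events": ["analytics"],
}

def affected_teams(changed_interface: str) -> list:
    """Who needs to hear about a change to this interface."""
    return DEPENDENCIES.get(changed_interface, [])

def notify(changed_interface: str, summary: str) -> list:
    """Build one targeted message per affected team (routing, not broadcast)."""
    return [
        f"@{team}: {changed_interface} changed - {summary}"
        for team in affected_teams(changed_interface)
    ]
```

For example, `notify("auth-api", "token TTL drops to 15m")` yields one message naming checkout and one naming mobile; if the recipient can’t describe the signal, it wasn’t routed.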

Shared Culture

We know more than we can tell.

Culture is tacit knowledge — the patterns, instincts, and shared judgment that can’t be written down or automated. It isn’t something you build from scratch. Every team already has one. Alignment work (clear context, active communication) nudges culture in the right direction. And good culture pays it back: teams with shared judgment maintain better context and route signals without being told to. As AI commoditizes implementation, this tacit layer becomes the differentiator — the thing no model can replicate.

You can tell culture is working when people can predict the team’s opinions without asking, and when they can articulate how their work connects to what the team and organization are trying to achieve. Polanyi calls this indwelling: knowledge that lives in practice, not in documentation. Runbooks encode the decisions you’ve already made. Culture is what lets people make the ones you haven’t anticipated.

What makes culture a system

Shared judgment doesn’t appear by decree. People have to make decisions together long enough for patterns to form, test those patterns against real outcomes, and stay together long enough for the learning to stick.

Exposure. The more decisions people make together, the faster shared intuition forms. Code reviews where people explain why, not just what. Pair programming that forces contact with others’ judgment in real time. Walk through shipped features for feel, not bugs. Each shared decision is a data point; enough data points and patterns emerge.

Calibration. Judgment trained against outcomes. You make decisions together, study how those decisions aged, and adjust how you decide next time. Codebase archaeology (what held up? What broke?) builds judgment faster when done as a group. Without calibration, exposure just reinforces whatever habits the team already has, good or bad.

Continuity. Culture forms between people. Churn your team faster than culture can form and you’ll have process compliance without judgment. Protect the conditions under which shared judgment accumulates — stable teams, consistent pairing, long enough tenures for patterns to take root.

Feedback. Have people present how their scope fits into the broader vision and how customers use what they build. Can the engineer explain why their component matters to a user three hops away? Can the team trace what they shipped last sprint to a customer problem? If yes, a shared vision is forming. If no, the vision declared at the top isn’t reaching the people doing the work.

The environment usually works against this.

When leadership rewards velocity over judgment, the culture system starves. Nobody presents how their work connects to the vision because there’s no time. Nobody studies how past decisions aged because the next sprint is already planned. Extrinsic pressure narrows focus to what’s measured, and what’s measured is rarely what matters most. Check the base rates: only 30% of U.S. workers are engaged, and half of managers are estimated to be ineffective. These aren’t broken organizations, they’re typical ones.

  • Feature walkthroughs for feel: Walk through shipped work as a team. The goal isn’t finding defects, it’s calibrating shared aesthetic judgment. Does this feel right? Would we build it this way again?
  • Codebase archaeology sessions: Pick a module that’s been in production for a year. What held up? What would you change? Group archaeology calibrates judgment faster than individual review.
  • New hire documentation sprints: Have new hires document the implicit rules nobody writes down during their first month. The newcomer learns the norms by articulating them; the team sees its assumptions through fresh eyes.
  • Individual contributors as leaders: Periodically ask team members to present how their current work connects to customer outcomes. Everybody should be able to articulate why their work matters, and why the bigger vision matters. If they can’t, the vision has been lost somewhere, or was never there to begin with.

The uncomfortable truth: Context can be fixed in weeks. Communication structures can be redesigned in months. Culture takes months of continuity with the same people — and there’s no shortcut.

AI Accelerates Everything (Including Misalignment)

AI makes building faster, and makes misalignment compound faster too. It amplifies every shortcut, every bit of tribal knowledge, every fuzzy definition. The same dynamic runs in both directions, virtuous or vicious, depending on the context you feed it:

The vicious loop:

Team lacks context → AI hallucinates → hallucinated code becomes context → more hallucination

The virtuous loop:

Team documents context → AI uses structured input → output is validated → context improves

This is a structural choice, not a collective one. AI erodes the moats that made people indispensable: tribal knowledge, translation monopolies, implementation skill. The people whose power base shrinks have every reason to resist the virtuous loop. And AI agents raise the stakes: they’re non-deterministic and need explicit contracts, not cultural intuition: defined boundaries, validated outputs, and clear ownership of what they produce.

Stress-testing your thinking

Most ideas go unchallenged, not because they’re good, but because challenging them is socially expensive. The devil’s advocate gets labeled as negative. Spot a flaw and you’re blamed for killing momentum.

AI removes the social friction from intellectual conflict. The mechanisms are already available:

Premortem. Imagine the project has already failed. Now surface the risks nobody wanted to voice.

Persona conversations. Have AI adopt specific expert perspectives to challenge or support your thinking from angles you wouldn’t consider. A systems thinker, a skeptic, a domain expert, a customer advocate.

Devil’s advocate at scale. Red-team a strategy, a design doc, an architecture decision. AI can argue against your position without relationship damage. The challenge network Adam Grant describes (people who point out your blind spots) can be to some extent approximated with AI.

Stress-tested ideas outperform unchallenged ones. The cost of challenging your own thinking just dropped to zero. The only reason not to do it is that you’d rather be comfortable than right.

The Flywheel

These three pillars (clear context, active communication, shared culture) reinforce each other.

Now, this is starting to sound like a corporate framework. Instead of the Conjoined Triangles of Success, you have the Flywheel of Reinforcing Momentum, right?

Not quite. As tempting as it might be to just solve all issues of modern work, we’re focused on nudging in the right direction. You don’t control a complex system, you disturb it. Introduce a constraint, change an incentive, surface a signal, and watch how it responds. The three pillars aren’t levers that produce predictable outputs. They’re perturbations that shift the system’s behavior in a direction you can’t fully predict but can continuously sense and adjust.

But every coordination mechanism has a throughput. A dependency map can track only so many dependencies before it becomes unreadable. A person synthesizing across three teams may keep up; five teams exceed what any individual can hold. When the situation generates more complexity than the mechanism can handle, signals drop, context decays, and surprises multiply. Nobody failed. The mechanism was just undersized for the job.

The instinct is to add more process: more meetings, more reports, more checkpoints. But more process adds signals without adding capacity to process them, which makes the problem worse. The real response is either to simplify what needs coordinating (fewer dependencies, clearer team boundaries, more autonomous teams) or to increase the capacity of what does the coordinating: automated checks that handle routine signals so humans handle exceptions, multiple synthesizers instead of one, tools that surface misalignment before anyone has to notice it. AI tooling can help with the second path.

The flywheel slows down faster than it speeds up. The asymmetry is the sharpest insight: the virtuous cycle takes months to build. The vicious cycle takes weeks to destroy. Building coordination capacity is slow. You design mechanisms, train people, establish habits. Exceeding it is fast: one reorg, one added dependency, one team boundary that shifts. The flywheel doesn’t degrade gracefully. It runs until the capacity is exceeded, then it stalls.

The vicious cycle has a human cost that grows faster than the technical one. Toxic culture predicts attrition ten times more powerfully than compensation. You can’t outpay a broken culture.

Which raises the question the manifesto has been circling: who owns this? Someone must be accountable for alignment. When the flywheel stalls, it must be someone’s problem, visibly and structurally. Whoever owns it needs skin in the game, not just influence through credibility.

What about the calibration of this very manifesto, which is about continuous calibration?

Yes, indeed. This is v1, a snapshot of current thinking, not a final position. It will get clearer, more fun to read, and richer in operational guidance as it evolves.

Emotional intelligence makes the system run

The alignment system depends on infrastructure, signal paths, and shared judgment. But it runs on human skill — specifically, the ability to sense what dashboards don’t capture.

Soft sensing — reading the room — is where emotional intelligence does its work: the engineer who goes quiet after a decision, the tone shift in a Slack thread, the PR that nobody reviews. It takes empathy to sense these signals, self-regulation to raise them without burning relationships, and social fluency to frame them in terms the organization can act on.

The guidance system needs a human operator — someone whose judgment is calibrated by experience and whose credibility is earned by consistently driving clarity.

Giving existing process a purpose

Guidance systems don’t replace project management discipline. They give it purpose.

A status report is a sensor — redesign it to track what changed since last week and why, not just what’s green/yellow/red. A risk register is a controller — add soft signals alongside probability-impact scores. A RACI is a routing map — verify that people marked “I” and “C” actually receive the signal. A retro is a calibration exercise — don’t ask “what went well,” ask “where did our sensing fail?”
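
The status-report-as-sensor redesign can be sketched as a diff between two weekly snapshots. A minimal illustration (the status keys and values are invented; a real version would also attach the “why” to each change):

```python
def status_delta(last_week: dict, this_week: dict) -> dict:
    """What changed since last week: the question a sensor answers.

    Returns items whose status moved (as old/new pairs) and items that
    silently disappeared from the report, which are often the real signal.
    """
    changed = {
        k: (last_week.get(k), v)
        for k, v in this_week.items()
        if last_week.get(k) != v
    }
    dropped = {k: last_week[k] for k in last_week if k not in this_week}
    return {"changed": changed, "dropped": dropped}
```

A report generated this way cannot stay quietly green: anything that moved, or vanished, is surfaced by construction.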

We’re not adding process. We’re giving existing process a purpose.

Complex systems designed from scratch never work. Start with one team, one practice, one feedback loop. When you have enough context, communication, and culture to make good decisions without asking — stop adding process.
