Human oversight without bottlenecks.

Governance · 5 min read

A protocol without oversight is just a plan running in the dark. But oversight that creates friction defeats the purpose of systematic supplementation. The solution is layered governance—checks that happen automatically at different levels, with human review reserved for what actually requires judgment.

Safety and correctness come from layered governance, not hand-holding.

The layer model

Think of governance in three layers, each operating at a different cadence and with different triggers:

Layer 1: Automated monitoring (continuous)

The base layer runs continuously and catches obvious issues without human intervention:

  • Dose limits that can't be exceeded by protocol design
  • Interaction flags when potentially competing supplements are scheduled too close
  • Adherence tracking that notices when execution diverges from plan
  • Trend detection on logged metrics (sleep, HRV, training load)

This layer should be invisible when things are normal and surface only when thresholds are crossed.

Layer 2: Periodic review (scheduled)

The middle layer operates on a schedule—typically monthly or quarterly—and asks structured questions:

  • Are the original goals still relevant?
  • Have any inputs changed (training phase, body composition, life stress)?
  • Do biomarkers support continuing the current approach?
  • Is adherence sustainable or showing decay?

This layer can be self-administered or involve a coach/practitioner. The key is that it happens on a schedule, not "when something feels off."
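The "on a schedule, not when something feels off" rule is simple to make mechanical. A sketch, with an assumed default cadence of 30 days (monthly; the article also allows quarterly):

```python
from datetime import date, timedelta

# The structured questions from the review, as data rather than memory.
REVIEW_QUESTIONS = [
    "Are the original goals still relevant?",
    "Have any inputs changed (training phase, body composition, life stress)?",
    "Do biomarkers support continuing the current approach?",
    "Is adherence sustainable or showing decay?",
]

def next_review(last_review: date, cadence_days: int = 30) -> date:
    """The next review date is fixed by cadence, not by how things feel."""
    return last_review + timedelta(days=cadence_days)

def review_due(last_review: date, today: date, cadence_days: int = 30) -> bool:
    """True once the scheduled date arrives, regardless of subjective state."""
    return today >= next_review(last_review, cadence_days)
```

Whether the review is self-administered or done with a coach, the trigger is the calendar.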

Layer 3: Expert review (triggered)

The top layer involves qualified human judgment and is triggered by specific events:

  • Biomarkers outside expected ranges
  • Persistent symptoms that don't resolve with standard adjustments
  • Major life changes (pregnancy, new medications, significant illness)
  • Protocol changes that involve high-stakes interventions

Not every protocol decision needs an expert. But some do, and the system should know which ones.

What triggers a flag

Flags should be specific and actionable, not vague warnings. Examples of well-designed triggers:

  • "Ferritin increased 40% in 8 weeks—review iron protocol"
  • "Vitamin D dose exceeds 10,000 IU/day for >12 weeks—schedule retest"
  • "Three consecutive weeks of declining HRV trend—assess recovery load"
  • "Adherence below 70% for two weeks—simplify protocol or identify barriers"

Bad triggers are generic ("something might be wrong") or hypersensitive (flagging normal variation). The goal is signal, not noise.

How overrides become durable rules

When a human overrides an automated recommendation, that override should be captured and learned from:

  • Document why the override happened
  • Track outcomes—did the override improve results?
  • If the same override recurs, consider encoding it as a rule

This creates a feedback loop where individual exceptions can become systematic knowledge. The protocol gets smarter over time instead of just following static rules.
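The document-track-encode loop can be sketched as a small ledger that records each override with its reason and promotes any override that recurs past a threshold into a standing rule. The class name and the promotion count of three are assumptions for illustration:

```python
from collections import Counter

class OverrideLedger:
    """Captures human overrides and promotes recurring ones to durable rules."""

    def __init__(self, promote_after: int = 3):
        self.promote_after = promote_after
        self.reasons: dict[str, list[str]] = {}   # document why it happened
        self.counts: Counter[str] = Counter()     # track recurrence
        self.rules: set[str] = set()              # encoded systematic knowledge

    def record(self, override_key: str, reason: str) -> None:
        """Log one override; encode it as a rule once it recurs enough."""
        self.reasons.setdefault(override_key, []).append(reason)
        self.counts[override_key] += 1
        if self.counts[override_key] >= self.promote_after:
            self.rules.add(override_key)

ledger = OverrideLedger(promote_after=3)
for week in range(3):
    ledger.record("skip-magnesium-on-race-week", "GI sensitivity before races")
```

Outcome tracking (did the override actually improve results?) would sit between recording and promotion; it is omitted here to keep the sketch short.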

Avoiding bottlenecks

The failure mode of governance is becoming a bottleneck—requiring approval for every change, creating delays that break momentum, adding friction that reduces adherence.

Good governance is invisible 95% of the time. It runs in the background, surfaces issues when they matter, and gets out of the way when they don't. The athlete shouldn't feel governed—they should feel supported.

That's the balance: enough oversight to catch real problems, not so much that the protocol becomes unworkable.