A short companion note describing how to read and interpret this body of work is available here.
David B. Forbes is an independent systems researcher focused on resilient distributed systems, governance-first autonomy, and safety-constrained behavior in autonomous and semi-autonomous platforms.
His work examines how authority, legitimacy, and execution boundaries must be explicitly defined and enforced to prevent unsafe or unjustified system behavior under uncertainty. He is particularly concerned with failure modes arising not from insufficient intelligence, but from ambiguous, degraded, or improperly constrained authority.
Forbes’ research emphasizes architectural invariants such as authority contraction, refusal as correct behavior, quorum-gated execution, and degraded operational states. These principles apply across software systems, cyber-physical platforms, and human-supervised autonomous environments.
He is affiliated with BLOCK VECTOR Technologies, LLC, where his research informs long-lived system architecture, safety modeling, and patent-backed engineering work. These publications are not product documentation and do not disclose implementation details.
Doctor of Engineering (D.Eng.) Dissertation
A practice-based doctoral dissertation examining authority boundaries, refusal, and failure modes in autonomous and distributed systems. The work is dense and intentionally formal, framing authority contraction and designed refusal as safety invariants rather than optimization outcomes.
Intended for systems engineers, architects, and reviewers seeking a rigorous, boundary-focused treatment of autonomy, governance, and failure behavior.
An observational lens for identifying engineering seams—where system invariants thin, authority becomes implicit, and decisions go unstated.
Identifies the missing architectural constraint that causes modern autonomous and AI-driven systems to fail at boundaries, even when implementations appear correct.
Defines correct behavior when coordination is absent—treating silence as a valid operating condition and non-action as a legitimate outcome.
Reframes resilience as purpose preservation under uncertainty rather than continuous operation, challenging availability-driven thinking and metric selection.
Formalizes bounded authority, quorum-gated action eligibility, and refusal as safety invariants for autonomous and distributed systems operating under degraded legitimacy.
Examines observed boundary failure patterns in AI-driven systems when authority, legitimacy checks, and silence-aware behavior are missing or violated.
Defines refusal as an explicit enforcement action that preserves legitimacy and safety when coordination degrades and authority cannot be established.
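The quorum-gating and refusal-as-default ideas described above can be sketched as a minimal decision gate in which an action proceeds only on affirmative approval, and silence or an unreachable quorum yields refusal. All names (`gate_action`, `Decision`) and thresholds here are illustrative assumptions, not drawn from the publications themselves.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    allowed: bool
    reason: str

def gate_action(votes_for: int, reachable_members: int, quorum: int) -> Decision:
    """Quorum-gated action eligibility.

    An action is eligible only when enough members affirmatively approve.
    Silence and missing votes never count as consent, so the default
    outcome, including under degraded coordination, is refusal.
    """
    if reachable_members < quorum:
        # Degraded state: a quorum cannot even be assembled, so the
        # system contracts its authority and refuses rather than guesses.
        return Decision(False, "degraded: quorum unreachable")
    if votes_for >= quorum:
        return Decision(True, "quorum met")
    return Decision(False, "refused: insufficient affirmative votes")
```

In this sketch, refusal is a first-class result carrying a reason, rather than an error path, mirroring the treatment of non-action as a legitimate outcome.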
Additional context and related publications are maintained at BLOCK VECTOR Technologies.
© 2026 BLOCK VECTOR TECHNOLOGIES LLC. All rights reserved.