The authoritative normative specification of the Human–LLM Responsibility Framework.
This release defines the structural, procedural, and epistemic invariants required
to maintain deterministic responsibility boundaries.
Status: FINAL — Normative Publication
Canonical Source:
GitHub Release — v1.1
Version 1.1 is the first complete, formally validated public release of the NS-L6 Standard. It includes the core specification, the supporting RFC, and a set of foundational normative appendices establishing semantics, proofs, and threat models.
Informative Annex (Not Normative)
This appendix outlines the adversarial model underlying NS-L6 responsibility boundaries. It identifies threats that could exploit cross-layer violations, responsibility projection, or time-index manipulation. The appendix is informative but essential for understanding the security assumptions of the standard.
Threats are grouped according to layer interactions and responsibility invariants.
Cross-layer inference attacks: attempts to reconstruct lower-layer states from higher-layer observations, in violation of the non-invertibility guarantee.
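To make the non-invertibility guarantee concrete, the sketch below models a lossy, one-way projection from a lower-layer state to a higher-layer observation. Everything here (the field names, the use of SHA-256, the shape of the observation) is a hypothetical illustration, not the mapping NS-L6 actually defines:

```python
import hashlib

def project(lower_state: dict) -> dict:
    """Lossy, one-way projection from a lower-layer state to a
    higher-layer observation (hypothetical, not the NS-L6 mapping)."""
    digest = hashlib.sha256(repr(sorted(lower_state.items())).encode()).hexdigest()
    # Only a coarse health flag and a truncated digest cross the boundary;
    # the raw lower-layer fields never do.
    return {"ok": lower_state["error_count"] == 0, "attestation": digest[:16]}

a = project({"error_count": 0, "secret_seed": 41})
b = project({"error_count": 0, "secret_seed": 42})
# The actionable part of both observations is identical, so the upper
# layer cannot distinguish the two underlying states.
assert a["ok"] == b["ok"]
assert a["attestation"] != b["attestation"]
```

Because the projection discards the raw fields, many distinct lower-layer states map to observations that are indistinguishable to the upper layer; an inference attack is precisely an attempt to defeat that many-to-one property.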
Responsibility-projection attacks: responsibility is incorrectly assigned to a non-responsible layer or actor, violating axioms A1–A5.
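A verifier could reject such misattribution with simple invariant checks. The two invariants below are illustrative stand-ins loosely in the spirit of the axioms; the real A1–A5 are defined in the core specification, and the layer set and field names here are invented:

```python
# Hypothetical responsibility-ledger check. The invariants below are
# illustrative stand-ins for the normative axioms A1-A5, which are
# defined in the NS-L6 core specification; the layer names are assumed.
RESPONSIBLE_LAYERS = {"L5", "L6"}

def validate_assignment(event: dict) -> None:
    """Raise ValueError if a responsibility assignment is malformed."""
    if event["assigned_layer"] not in RESPONSIBLE_LAYERS:
        raise ValueError("responsibility projected onto a non-responsible layer")
    if len(event["actors"]) != 1:
        raise ValueError("responsibility must rest with exactly one actor")

validate_assignment({"assigned_layer": "L6", "actors": ["operator-7"]})  # accepted
try:
    validate_assignment({"assigned_layer": "L2", "actors": ["model"]})
except ValueError:
    pass  # rejected: L2 is not a responsible layer in this sketch
```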
Time-index manipulation attacks: time indices are distorted, reordered, or falsified across layers.
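One way such manipulation could be detected is a per-layer monotonicity check over the event trace. The detector below is a hypothetical sketch, not a mechanism the standard prescribes:

```python
def check_time_indices(events):
    """Return False if any layer's time indices are not strictly
    increasing (hypothetical reorder/falsification detector)."""
    last = {}
    for layer, t in events:
        if t <= last.get(layer, float("-inf")):
            return False  # distorted, duplicated, or reordered index
        last[layer] = t
    return True

assert check_time_indices([("L4", 1), ("L5", 1), ("L4", 2)])
assert not check_time_indices([("L4", 2), ("L4", 1)])  # reordered within L4
```

Indices are compared only within a layer, so independent layers may tick at different rates without triggering the check.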
Boundary attacks: attacks targeting the boundary between L4 (System) and L5 (Interaction).
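A guard at that boundary might treat interaction-layer input strictly as data, dispatching only a whitelisted set of commands into L4. The command names and the whitelist below are invented for illustration and are not part of NS-L6:

```python
# Hypothetical guard at the L4/L5 boundary: an L5 message may request
# only whitelisted operations and can never mutate L4 state directly.
ALLOWED_COMMANDS = {"query_status", "submit_request"}  # assumed whitelist

def cross_boundary(message: dict) -> str:
    """Dispatch an interaction-layer message into the system layer."""
    cmd = message.get("command")
    if cmd not in ALLOWED_COMMANDS:
        raise PermissionError(f"L5 message may not invoke {cmd!r} on L4")
    return f"dispatched {cmd}"

cross_boundary({"command": "query_status"})
try:
    cross_boundary({"command": "rewrite_audit_log"})  # boundary violation
except PermissionError:
    pass  # rejected before reaching L4
```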
Governance-circumvention attacks: attempts to circumvent L6 responsibility or governance rules.
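One common defence against silent tampering with governance records is a hash chain, which makes retroactive edits detectable. The sketch applies that generic technique to a hypothetical L6 decision log; the standard does not prescribe this mechanism:

```python
import hashlib

def chain(entries):
    """Chain each governance entry to the hash of its predecessor
    (hypothetical tamper-evident L6 decision log)."""
    h, out = "genesis", []
    for e in entries:
        h = hashlib.sha256((h + e).encode()).hexdigest()
        out.append((e, h))
    return out

log = chain(["approve:policy-1", "revoke:actor-9"])
# An attacker edits the second entry but keeps the recorded hashes.
tampered = [("approve:policy-1", log[0][1]), ("revoke:actor-X", log[1][1])]
# Recomputing the chain exposes the altered entry.
assert chain([e for e, _ in tampered])[1][1] != tampered[1][1]
```

Any audit that replays the chain from the genesis value will flag the first entry whose stored hash no longer matches the recomputed one.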
The NS-L6 threat model specifies the capabilities adversaries are assumed to possess, together with the security objectives that any system inspired by NS-L6 must preserve.
NS-L6 provides a responsibility framework that is resistant to cross-layer inference, time manipulation, and responsibility corruption. Appendix C consolidates these assumptions and defines the threat landscape for implementers and auditors.
This concludes the public threat-model appendix for the NS-L6 v1.1 specification.