SPECIFICATION · VERSION 1.1
NS-L6 v1.1 — Human–LLM Responsibility Framework

The authoritative normative specification of the Human–LLM Responsibility Framework.
This release defines the structural, procedural, and epistemic invariants required to maintain deterministic responsibility boundaries.

Status: FINAL — Normative Publication

Canonical Source:
GitHub Release — v1.1

Version 1.1 provides the first complete and formally validated public release of the NS-L6 Standard. It includes the core specification, supporting RFC, and a set of foundational normative appendices establishing semantics, proofs, and threat models.

Included Documents

NS-L6 Standard: Appendix C — Threat Model

C.1 Status

Informative Annex (Not Normative)

This appendix outlines the adversarial model underlying NS-L6 responsibility boundaries. It identifies threats that could exploit cross-layer violations, responsibility projection, or time-index manipulation. The appendix is informative but essential for understanding the security assumptions of the standard.

C.2 Threat Categories

Threats are grouped according to layer interactions and responsibility invariants.

C.2.1 Downward Inference Attacks

Attempts to reconstruct lower-layer states from higher-layer observations, violating the non-invertibility guarantee. Examples include:

  • Reconstructing model activations (L2) from outputs (L3).
  • Inferring computational states (L1) via response timing.
  • Projecting normative decisions (L6) into system logs (L4) to infer causality.
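The timing channel in the second bullet admits a simple countermeasure: pad every response to a fixed time floor so that L1 computation time is not observable from L3. The standard does not prescribe a mechanism; the sketch below is illustrative only, and the `pad_response_time` name and floor value are assumptions, not part of NS-L6.

```python
import time

def pad_response_time(handler, floor_seconds):
    """Run handler, then sleep so that the total elapsed time is at
    least floor_seconds, masking computation-dependent (L1) timing
    variation from observers of the output layer (L3)."""
    start = time.monotonic()
    result = handler()
    elapsed = time.monotonic() - start
    if elapsed < floor_seconds:
        time.sleep(floor_seconds - elapsed)
    return result
```

Padding trades latency for uniformity; the floor must exceed the worst-case handler time for the channel to close fully.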

C.2.2 Responsibility Projection Attacks

Attacks where responsibility is incorrectly assigned to a non-responsible layer or actor. These violate axioms A1–A5:

  • Assigning responsibility to LLMs (L2/L3).
  • Attributing responsibility to tools (L4).
  • Shifting L6 responsibility into automation pipelines.
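A conformant system can reject these projections at the point where responsibility records are created. The following Python sketch is one possible guard; the layer sets, record fields, and `ResponsibilityProjectionError` are illustrative names, not identifiers defined by the standard, and the axioms A1–A5 themselves are stated in the core specification.

```python
# Illustrative only: layer labels and record schema are assumptions.
RESPONSIBILITY_BEARING_LAYERS = {"L6"}            # normative / human layer
NON_BEARING_LAYERS = {"L2", "L3", "L4"}           # model, output, system/tools

class ResponsibilityProjectionError(ValueError):
    """Raised when responsibility is projected onto a non-responsible layer."""

def assign_responsibility(record):
    """Accept a responsibility record only if it names a human actor at L6;
    reject any projection onto models, outputs, or tooling."""
    layer = record["layer"]
    if layer in NON_BEARING_LAYERS:
        raise ResponsibilityProjectionError(
            f"responsibility may not be assigned to layer {layer}")
    if layer not in RESPONSIBILITY_BEARING_LAYERS:
        raise ResponsibilityProjectionError(f"unknown layer {layer}")
    if record.get("actor_kind") != "human":
        raise ResponsibilityProjectionError(
            "L6 responsibility requires a human actor")
    return record
```

Placing the check at record creation, rather than at audit time, prevents a projected record from ever entering the provenance chain.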

C.2.3 Time Manipulation Attacks

Attacks that distort, reorder, or falsify time indices across layers. Examples:

  • Replay attacks on L4 logs.
  • Reordering output tokens to induce false inferences.
  • Cross-layer time compression used to merge boundaries.
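Detecting all three attacks reduces to two local checks per layer: indices must strictly increase, and no event may be accepted twice. The sketch below is a minimal illustration of that discipline; `LayerClock` and its method names are assumptions, not part of the standard. Note that indices are compared only within a layer, never across layers, which is what rules out cross-layer compression.

```python
class TimeIndexViolation(ValueError):
    """Raised on replayed, reordered, or non-increasing time indices."""

class LayerClock:
    """Per-layer monotonic time-index checker with replay detection.
    Each layer keeps its own local index; indices are never compared
    across layers, preserving boundary separation."""
    def __init__(self):
        self.last_index = {}      # layer -> last accepted local index
        self.seen_events = set()  # (layer, event_id), for replay detection

    def accept(self, layer, index, event_id):
        if (layer, event_id) in self.seen_events:
            raise TimeIndexViolation(f"replayed event {event_id} on {layer}")
        if index <= self.last_index.get(layer, -1):
            raise TimeIndexViolation(f"non-increasing index {index} on {layer}")
        self.seen_events.add((layer, event_id))
        self.last_index[layer] = index
```
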

C.2.4 System–Interaction Boundary Violations

Attacks targeting the boundary between L4 (System) and L5 (Interaction):

  • Injection of hidden system-level instructions.
  • Spoofing interaction context to override responsibility mapping.
  • Bypassing human oversight mechanisms.
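Spoofed interaction context is detectable if L5 context is integrity-tagged before it crosses into L4. The sketch below uses a keyed MAC for the tag; this is one possible realization, and the function names and key handling are assumptions rather than mechanisms mandated by NS-L6, which leaves anchoring to its cryptographic-provenance provisions.

```python
import hmac
import hashlib

def seal_context(key: bytes, context: bytes) -> bytes:
    """Tag an L5 interaction context so the L4 system can later
    detect spoofed or modified context."""
    return hmac.new(key, context, hashlib.sha256).digest()

def verify_context(key: bytes, context: bytes, tag: bytes) -> bool:
    """Constant-time check that the presented context matches its tag."""
    return hmac.compare_digest(seal_context(key, context), tag)
```

Hidden-instruction injection then fails closed: any context the L5 side did not seal carries no valid tag and is rejected at the boundary.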

C.2.5 Normative-Bypass Attempts

Attempts to circumvent L6 responsibility or governance rules:

  • Producing machine-generated “justifications.”
  • Delegating L6 decisions to automated systems.
  • Generating synthetic “authority logs.”

C.3 Adversarial Models

The NS-L6 threat model assumes adversaries may have:

  • Partial access to model outputs, logs, or prompts.
  • Capabilities to manipulate system orchestration (L4).
  • Ability to induce misleading interaction context (L5).
  • No capability to interfere with cryptographic anchoring or TSA-bound provenance.

C.4 Security Objectives

The following objectives must be preserved by any system implementing or derived from NS-L6:

  • Maintain strict layer boundaries.
  • Prevent responsibility projection upward or downward.
  • Preserve local time indices without cross-layer collapse.
  • Ensure all L6 decisions remain human-authored and auditable.
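The auditability objective in the last bullet is commonly met with an append-only, hash-chained log: each entry commits to its predecessor, so tampering or reordering breaks the chain. The sketch below illustrates the structure only; `AuditLog` and its record layout are assumptions, and a real deployment would anchor the chain head externally (e.g. via the TSA-bound provenance mentioned in C.3).

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained log of L6 decisions. Each entry
    commits to its predecessor, so any tampering, deletion, or
    reordering invalidates every later hash."""
    def __init__(self):
        self.entries = []

    def append(self, decision: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(decision, sort_keys=True)
        h = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"prev": prev, "payload": payload, "hash": h})
        return h

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            h = hashlib.sha256((prev + e["payload"]).encode()).hexdigest()
            if h != e["hash"]:
                return False
            prev = h
        return True
```
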

C.5 Summary

NS-L6 provides a responsibility framework that is resistant to cross-layer inference, time manipulation, and responsibility corruption. Appendix C consolidates these assumptions and defines the threat landscape for implementers and auditors.

End of Appendix C

This concludes the public threat-model appendix for the NS-L6 v1.1 specification.