Internal vs External Alignment
Early research distinguished inner objective fit from outward rule compliance. That distinction later became operational as the separation between objective anchoring and constraint enforcement.
Load-Bearing Function
The broader archive developed the idea that systems become fragile when a load-bearing function is preserved externally while the system loses participatory capacity. The AI branch translates this concern into the terms of production behavior and objective fidelity.
Misaligned Structures
The research then focused on cases where apparently functional behavior hides a deeper mismatch between what a system is doing and what it is for. In AI, this becomes the problem of fluent outputs that satisfy surface expectations while drifting from purpose.
Objective / Constraint / Realignment
The three-layer architecture consolidated the corpus: Objective Layer for purpose, Constraint Layer for boundaries, and Realignment Layer for allowed-but-off-center behavior.
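The three layers can be sketched as a minimal evaluation pipeline. This is an illustrative assumption, not the framework's actual API: all class names, field names, and the keyword-matching heuristics are hypothetical stand-ins for whatever each layer does in practice.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-layer architecture described above.
# All names and heuristics here are illustrative assumptions.

@dataclass
class ObjectiveLayer:
    """States the purpose an output should serve."""
    purpose: str

@dataclass
class ConstraintLayer:
    """Hard boundaries an output must never cross."""
    forbidden_terms: list = field(default_factory=list)

    def violates(self, text: str) -> bool:
        return any(term in text.lower() for term in self.forbidden_terms)

@dataclass
class RealignmentLayer:
    """Flags behavior that is allowed but off-center relative to the purpose."""
    off_center_markers: list = field(default_factory=list)

    def off_center(self, text: str) -> bool:
        return any(m in text.lower() for m in self.off_center_markers)

def evaluate(text, objective, constraints, realignment):
    """Route an output through the three layers in order of severity."""
    if constraints.violates(text):
        return "violation"   # constraint breach: hard reject
    if realignment.off_center(text):
        return "realign"     # permitted, but drifting from the stated purpose
    return "aligned"
```

The design choice the sketch illustrates is the ordering: constraints are checked before realignment, because a boundary violation outranks an off-center but permitted output.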
Feature Extraction
Feature extraction made the framework evaluable by turning output traits into signals: certainty markers, genericity, unsupported authority, user-agency closure, source mismatch, and other detector inputs.
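A minimal sketch of what "turning output traits into signals" can look like, using the trait names from the text. The marker lists and the specific heuristics (word-density for certainty, phrase counts for genericity, a citation check for unsupported authority) are assumptions for illustration, not the framework's actual extractors.

```python
import re

# Illustrative feature extractors: each trait named in the text becomes
# a numeric signal. Marker lists and heuristics are assumptions.

CERTAINTY_MARKERS = {"definitely", "certainly", "always", "guaranteed"}
GENERIC_PHRASES = {"in today's world", "it is important to note", "as we all know"}

def extract_features(text: str) -> dict:
    lowered = text.lower()
    words = re.findall(r"[a-z']+", lowered)
    total = max(len(words), 1)
    return {
        # density of high-certainty wording
        "certainty": sum(w in CERTAINTY_MARKERS for w in words) / total,
        # count of boilerplate filler phrases
        "genericity": sum(p in lowered for p in GENERIC_PHRASES),
        # authority claim with no source given anywhere in the text
        "unsupported_authority": int("studies show" in lowered and "http" not in lowered),
    }
```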
Detector Layer
The detector layer organized drift categories into repeatable review patterns, so that each category is checked the same way every time rather than through isolated, ad hoc examples.
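One way to read "repeatable review patterns" is as a fixed table of named rules over the extracted feature signals, run identically on every case. The detector names, feature keys, and thresholds below are hypothetical.

```python
# A minimal detector layer sketch: each drift category is a named
# threshold rule over feature signals. Names/thresholds are assumptions.

DETECTORS = {
    "overconfidence": lambda f: f.get("certainty", 0.0) > 0.05,
    "generic_filler": lambda f: f.get("genericity", 0) >= 2,
    "unsupported_authority": lambda f: f.get("unsupported_authority", 0) == 1,
}

def run_detectors(features: dict) -> list:
    """Return the drift categories whose rule fires on these features."""
    return [name for name, rule in DETECTORS.items() if rule(features)]
```

Because the rule table is data rather than scattered conditionals, every reviewed case passes through the same categories in the same order.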
Judge Layer
The judge layer was introduced for cases of semantic uncertainty, where heuristics are insufficient and a more contextual evaluation is needed.
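The escalation logic can be sketched as a triage step: cheap detectors run first, and only ambiguous cases (here, crudely, those with a single weak signal) are handed to a contextual judge. The ambiguity rule and return labels are assumptions; in practice the judge itself might be a model call rather than a stub.

```python
# Sketch of heuristic-to-judge triage. The ambiguity heuristic
# (one fired detector = ambiguous) is an illustrative assumption.

def review(features: dict, detectors: dict, ambiguity_threshold: int = 1):
    """Run detectors; escalate ambiguous cases to contextual judgment."""
    fired = [name for name, rule in detectors.items() if rule(features)]
    if not fired:
        return ("pass", [])
    if len(fired) <= ambiguity_threshold:
        # a lone weak signal: heuristics alone are insufficient here
        return ("escalate_to_judge", fired)
    # multiple independent signals: heuristics suffice to flag
    return ("flag", fired)
```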
Universal Drift Metrics
Universal drift metrics summarize objective fit across cases and detectors, tracking correction rates, escalation rates, and change over time.
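Aggregation over per-case outcomes can be sketched as a simple tally. The outcome labels ("pass", "corrected", "escalate") and the exact rate definitions are assumptions chosen to match the metrics named in the text.

```python
from collections import Counter

# Aggregate per-case review outcomes into summary drift metrics.
# Outcome labels and rate definitions are illustrative assumptions.

def drift_metrics(outcomes: list) -> dict:
    counts = Counter(outcomes)
    n = max(len(outcomes), 1)
    return {
        "cases": len(outcomes),
        "correction_rate": counts["corrected"] / n,
        "escalation_rate": counts["escalate"] / n,
        "pass_rate": counts["pass"] / n,
    }
```

Comparing these summaries between evaluation runs is what makes "change over time" measurable rather than anecdotal.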
Behavioral QA
The enterprise endpoint is behavioral QA for AI systems: a repeatable way to evaluate whether deployed AI remains ordered toward intended behavior across prompt batches, model updates, and policy changes.
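A repeatable check "across model updates and policy changes" is naturally expressed as a regression test over summary metrics: compare a candidate run against a baseline and fail on drift beyond a tolerance. The metric names, the tolerance value, and the "higher is worse" convention are assumptions for illustration.

```python
# Regression-style behavioral QA sketch: compare drift metrics from two
# evaluation runs (e.g. before and after a model update). Metric names,
# tolerance, and direction ("higher = worse") are assumptions.

def check_regression(baseline: dict, candidate: dict, tolerance: float = 0.05) -> dict:
    """Return the metrics that worsened beyond tolerance; empty = pass."""
    regressions = {}
    for metric, base_value in baseline.items():
        delta = candidate.get(metric, 0.0) - base_value
        if delta > tolerance:
            regressions[metric] = delta
    return regressions
```

Wiring a check like this into a deployment pipeline turns behavioral QA into a gate: a model update ships only when the regression dict comes back empty.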
How to Cite
Michael Bower. (2026). Framework Evolution and Research Lineage. AlignmentTheory.org. https://alignmenttheory.org/pages/ai-alignment-lineage.html
@misc{bower2026aialignmentlineage,
  author = {Bower, Michael},
  title = {Framework Evolution and Research Lineage},
  year = {2026},
  howpublished = {AlignmentTheory.org},
  url = {https://alignmenttheory.org/pages/ai-alignment-lineage.html}
}