Data Alignment
The model learned from what was true then, not what is true now.
Every model is trained on historical data. That data was generated by people making decisions inside a specific organizational context — with specific incentives, under specific pressures, at a specific point in time. The model learns from those decisions. It does not learn what the organization wanted; it learns what the organization measured and rewarded. When the context changes — new strategy, new leadership, new market conditions — the model keeps running on the old intent.
The problem is rarely visible in the accuracy metrics. The model is still accurate at predicting what the training population would have done. It is no longer accurate at predicting what the organization needs done. That gap widens silently over time, at whatever rate the world diverges from the training data.
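One way to make that gap visible is to score the same frozen model twice: once on a labeled holdout drawn from the training period, once on a recently labeled sample. The sketch below assumes a scikit-learn-style model and pandas-style DataFrames; the feature names and function are hypothetical, and the point is the comparison rather than any particular API.

```python
# Hypothetical sketch: the same frozen model, scored on training-era data
# and on recent data. The names below are placeholders, not a real pipeline.
from sklearn.metrics import roc_auc_score

FEATURES = ["tenure_months", "monthly_spend", "support_tickets"]  # illustrative only

def alignment_gap(model, train_era_holdout, recent_labeled):
    """Return training-era AUC minus current AUC for the same model.

    Accuracy on the training-era holdout can stay high while accuracy on
    recent data quietly falls; the difference is the gap described above.
    """
    auc_then = roc_auc_score(
        train_era_holdout["label"],
        model.predict_proba(train_era_holdout[FEATURES])[:, 1],
    )
    auc_now = roc_auc_score(
        recent_labeled["label"],
        model.predict_proba(recent_labeled[FEATURES])[:, 1],
    )
    return auc_then - auc_now
```

Run on a schedule, a widening positive gap is the signal that the world has moved even while the dashboard metric, computed against training-era data, stays green.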
Feedback Alignment
The metrics say it is working. The outcomes say it is not.
A model needs a feedback loop to stay calibrated — some mechanism that catches errors and surfaces them to the people who can act on them. In most production AI systems, that mechanism is either absent or systematically undermined. The model generates outputs. The outputs go into the world. Nobody tracks what happens next in a way that feeds back into the model's evaluation.
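The missing mechanism is mundane: persist each prediction with an identifier, reconcile it with the outcome once the outcome is known, and compute an error rate over the reconciled records. A minimal sketch follows; the storage format, identifiers, and outcome source are assumptions, not a prescription.

```python
# Hedged sketch of the missing loop: log predictions so they can be joined
# back to observed outcomes and scored. Everything here is illustrative.
import csv
from datetime import datetime, timezone

def log_prediction(path, record_id, prediction, confidence):
    """Append one prediction so it can be reconciled with reality later."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [record_id, prediction, confidence, datetime.now(timezone.utc).isoformat()]
        )

def live_error_rate(predictions, outcomes):
    """predictions: {record_id: predicted_label}; outcomes: {record_id: observed_label}.

    Only records whose outcome has actually arrived count. Tracking that join
    is the step most production systems skip.
    """
    matched = [rid for rid in predictions if rid in outcomes]
    if not matched:
        return None
    wrong = sum(1 for rid in matched if predictions[rid] != outcomes[rid])
    return wrong / len(matched)
```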
What fills the gap is usually a human: a domain expert who notices that the model's confident recommendations keep being wrong in a specific way, and who starts keeping a private list. That person is the feedback loop the system should have had built in. When they leave — or when the organization trains them to trust the model over their intuition — the last correction mechanism disappears. The model runs without error-correction, drifting unchecked from whatever accuracy it had on the day it launched.
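The expert's private list can be made institutional by capturing each correction as a structured record rather than a note in someone's head. The sketch below is hypothetical in every name; it only shows that once corrections carry a reason, the "confidently wrong in a specific way" pattern becomes visible to anyone, not just the person who noticed it.

```python
# Hypothetical sketch: expert corrections as shared, structured records,
# so systematic error patterns survive the expert leaving.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Correction:
    record_id: str
    model_output: str
    expert_output: str
    reason: str  # ideally a small controlled vocabulary of failure modes

def systematic_errors(corrections, min_count=5):
    """Group expert corrections by reason and surface the recurring ones."""
    counts = Counter(c.reason for c in corrections)
    return [(reason, n) for reason, n in counts.most_common() if n >= min_count]
```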
Value Alignment
Nobody agreed what "good" means. The model decided.
Before a model can be trained, someone must define the target — what the model should optimize for, what outcome counts as success. That definition is always a choice, and choices encode assumptions. Assumptions about which customers matter, which relationships are valuable, which risks are acceptable. In most organizations, those assumptions are never made explicit. They are embedded silently in the choice of training data, in the labeling decisions, in the metric selected as the optimization target.
The result is that the model resolves organizational ambiguity without anyone noticing. Different teams had different definitions of a good customer. The model picked one — the one implied by the data it was given — and began enforcing it at scale. The disagreement that should have happened in a meeting happened instead inside a training run. By the time it surfaces, the model has been running on the wrong definition for months.
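To make the point concrete, here is an illustrative sketch, with invented definitions that stand in for two teams' views of a "good customer." Whichever label function feeds the training data becomes the organization's operating definition; the disagreement rate below is the meeting that never happened.

```python
# Illustrative only: two label functions for "good customer" applied to the
# same records, and the fraction of records on which they disagree.
def good_customer_finance(row):
    return row["lifetime_margin"] > 0            # profitable today

def good_customer_growth(row):
    return row["referrals"] >= 2 or row["tenure_months"] < 6   # drives acquisition

def label_disagreement(rows):
    """Fraction of records the two definitions label differently."""
    if not rows:
        return 0.0
    diff = sum(1 for r in rows if good_customer_finance(r) != good_customer_growth(r))
    return diff / len(rows)
```

A nonzero disagreement rate is the unresolved ambiguity; training on either column resolves it silently, at scale.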