Reliability Beats Intelligence
The team processing forty inbound sales orders per day has lived with the verification problem for six months. Automation handles extraction - line items, product codes, quantities, delivery addresses. Errors still occur, fewer every week, but enough that no order gets released without someone checking it first.
The response is predictable: the system needs to be more accurate. Get the error rate down and the checking becomes manageable. Get it low enough and it disappears.
This is the wrong solution to the wrong problem. The goal is not a system that gets every order right. It is a system that knows which orders it might have got wrong. Those are different objectives, and only one of them removes the checking.
What 98% actually produces
The team improves the system. The error rate falls from 5% to 2%. On forty orders per day, that means fewer than one error on an average day - some days one, some days none. This feels like real progress, but the verification labour is unchanged.
Every order still gets checked. The checker is now looking for one error instead of two, across the same forty documents, with the same level of attention required to find it. That requirement does not change as errors get rarer.
The labour has not fallen because the checking is not driven by how many errors occur. It is driven by the fact that the system cannot tell you which order contains the error. A 98% accurate system with no ability to identify its uncertain outputs requires the same verification posture as a 95% accurate system with no ability to identify its uncertain outputs. The error is rarer but it is no easier to find.
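A back-of-envelope sketch makes the point concrete. All the numbers below are illustrative assumptions, not measurements from the team's system: what matters is that the checking load collapses only when the system can point at its own suspect outputs.

```python
# Verification load per day under two postures. The function and its
# parameters are hypothetical, chosen to illustrate the argument.

ORDERS_PER_DAY = 40

def docs_to_check(accuracy, flags_uncertain, flag_rate=0.0):
    """Documents a checker must inspect per day.

    accuracy        : fraction of orders extracted correctly
    flags_uncertain : True if the system marks its own suspect outputs
    flag_rate       : fraction of orders flagged for review (assumed here
                      to cover the errors; in practice a calibration
                      question, not a given)
    """
    if not flags_uncertain:
        # No signal about where the errors are: every order gets checked,
        # regardless of how rare the errors have become.
        return ORDERS_PER_DAY
    # Only the flagged subset needs human attention.
    return round(ORDERS_PER_DAY * flag_rate)

# 98% accurate, no uncertainty signal: still 40 checks per day.
print(docs_to_check(0.98, flags_uncertain=False))                  # 40

# 95% accurate, but flags ~15% of orders as uncertain: 6 checks per day.
print(docs_to_check(0.95, flags_uncertain=True, flag_rate=0.15))   # 6
```

The less accurate system wins on labour because accuracy never appears in the branch that decides how many documents get checked - only the presence or absence of the flag does.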
Why verification stays permanent
The team did not consciously choose a 98% accurate system over a less accurate one. They chose the next improvement over the current state, because improvement toward a target makes sense when the target is reachable.
The target is not reachable. Sales order documents are unstructured, inconsistent, and produced by customers who do not follow instructions.
Product codes are mistyped, transposed, or referenced by customer-specific aliases that bear no relation to the supplier's catalogue and change without notice. Order lines are missing because the customer forgot them, or duplicated because they changed their mind between writing the email and attaching the purchase order. Delivery addresses are overridden in the body of an email that arrived three hours after the original document. Pricing references a contract version the system has not seen.
Each of these is a normal property of inbound commercial correspondence. The customers generating them are not going to change their behaviour because the receiving system finds it difficult.
The belief driving accuracy investment - that a sufficiently accurate system will eventually make checking unnecessary - cannot be satisfied by the system being built. At 98%, the team is closer to a ceiling than they know. The next two percentage points will cost more than the last three and will not change the verification posture, because the system still cannot tell you when it’s wrong.
Verification stays not because the system isn't good enough yet. It stays because the system is being optimised for the wrong output.
Accuracy Is Not the Exit
A system that cannot identify which outputs are unsafe requires universal verification regardless of its accuracy rate. Improvements in the accuracy rate do not change this. Unstructured document inputs will never be perfect.
A team investing in accuracy is investing toward a ceiling that will not remove the checking, because checking is not driven by how many errors occur. It is driven by not knowing where they are.
A higher accuracy rate does not tell you which specific outputs are wrong - and knowing where the errors are is the only thing that reduces checking. Improving the rate does not change the amount of verification needed.
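What the alternative objective looks like can be sketched in a few lines. This is a hypothetical shape, not a real system's API: the extractor emits a confidence score alongside each order, and only orders above a threshold are released without review. The names, threshold, and scores are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ExtractedOrder:
    order_id: str
    confidence: float  # extractor's self-assessed confidence, 0.0 to 1.0

def route(orders, threshold=0.9):
    """Split orders into an auto-release queue and a human-review queue."""
    release = [o for o in orders if o.confidence >= threshold]
    review = [o for o in orders if o.confidence < threshold]
    return release, review

# Hypothetical day's extractions: two clean documents, one with a
# customer-specific product alias the extractor could not resolve.
orders = [
    ExtractedOrder("SO-1001", 0.99),
    ExtractedOrder("SO-1002", 0.97),
    ExtractedOrder("SO-1003", 0.62),
]

release, review = route(orders)
print([o.order_id for o in review])   # ['SO-1003']
```

The hard work in such a system is not the routing, which is trivial, but making the confidence score trustworthy - calibrating it so that the review queue reliably contains the errors. That calibration, not another point of accuracy, is what removes checking from the released orders.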