There is a phrase that has become the security blanket of every bank board presentation and every
regulatory submission concerning artificial intelligence in finance: "human in the loop".
It sounds reassuring. It implies oversight, accountability, the steady hand of experienced
judgment guiding the machine. We have heard it invoked in Brussels, in Basel, in the boardrooms
of Canary Wharf and Wall Street alike. It is, in our view, becoming dangerously misleading.
The difficulty is not one of principle but of practice. When an algorithm processes four million
credit decisions in a single day — as Goldman Sachs's new Meridian platform now does — the
notion that a human being is meaningfully reviewing each decision is a fiction. The loop exists;
the human within it has become decorative. What banks call oversight is, in most cases,
retrospective audit: examining a statistical sample of decisions after they have been made,
flagged, and filed. This is not oversight. It is archaeology.
We do not argue that machines should operate without constraint. We argue that the constraint
must be architectural, not theatrical. If the industry wishes to maintain public trust — and it
must, for without it the entire apparatus rests on sand — then it must be honest about what
human oversight can and cannot achieve at computational scale. The alternative is a regime of
comfortable fictions, which will endure precisely until the first serious failure.