AI translation, human verification, regulated sectors · Peter Guest · peterguest.biz · Menorca
In March 2026, the European Fact-Checking Standards Network (EFCSN) published a white paper titled The Great Retreat, documenting how major technology platforms have systematically abandoned their commitments to information integrity just as generative AI makes the problem dramatically worse. The report is primarily about disinformation. But its core arguments apply with uncomfortable precision to the use of AI translation in regulated professional sectors.
The Misinformation Premium: when automation rewards confidence over accuracy
The EFCSN cites research showing that on major platforms, low-credibility content consistently outperforms high-credibility content in engagement, by up to eight times on YouTube and seven times on Facebook. The authors call this the "Misinformation Premium": systems designed to maximise engagement end up rewarding confident, fluent output over accuracy.
AI translation systems have an equivalent dynamic. Large language models are trained to produce fluent, confident output. In general consumer contexts, this works well enough. In specialist regulated sectors — maritime documentation, pharmaceutical submissions, financial documentation, legal contracts — fluency and confidence are precisely the properties that make errors dangerous. A mistranslated technical term reads exactly like a correct one. Nobody flags it because nothing looks wrong.
The Liar's Dividend: when doubt undermines even accurate output
The EFCSN report introduces a concept from academic research: the "Liar's Dividend." The most damaging effect of AI-generated disinformation is not the success of individual fakes, but the generalised shadow of doubt they cast over authentic content. When everything might be AI-generated, nothing can be automatically trusted.
Or, worse, everything is trusted anyway.
The same dynamic operates in AI-assisted translation. As AI translation becomes commonplace, a new epistemic problem enters regulated-sector documents: the person receiving a translated maritime survey, clinical trial protocol, or financial prospectus cannot assume that specialist terminology has been handled correctly — because they know AI was probably involved, and they know AI fails at exactly this kind of content. The authentic, validated translation and the unvalidated one look identical on the page. Only independent expert verification restores the document's authority.
Community Notes: the failure of crowdsourced expertise
One of the report's most striking findings concerns Community Notes, the crowdsourced fact-checking system adopted by X and increasingly by other platforms as a replacement for professional fact-checkers. Research shows that fewer than 10% of proposed notes ever become visible to users, that those which do arrive too late to limit the spread of false content, and that the system consistently fails on technically complex or polarised topics, where expert knowledge is most needed.
The lesson for AI translation is direct. Asking bilingual staff, internal reviewers, or non-specialist colleagues to validate AI-translated specialist content produces the same structural failure. The people given this task are very often the most junior on the staff, and in 2026 junior professional staff frequently cannot write coherently in any language, including their own. This is not a peripheral observation: the dominant written register in large professional services organisations is now AI-generated or AI-assisted, which means fluent-looking output is no longer evidence of the underlying competence to produce or evaluate it. And in organisations where staff cannot politically challenge line management on the quality of their Spanish, they are certainly not going to flag that an AI-translated corporate communication written by a superior contains a critical terminology error. The verification is theatre. The error remains.
Human-centred verification: the consensus is growing
The EFCSN report is unambiguous: "human-centred verification remains a vital tool for mitigating this crisis." This is not a sentimental attachment to traditional methods. It reflects a documented pattern in which automated systems fail specifically at the intersection of high stakes and thin training data, which is precisely the definition of specialist regulated-sector content.
The report draws an additional lesson from the resource asymmetry between disinformation actors and fact-checkers: organisations producing misleading content operate at massive scale with minimal cost, while verification requires sustained expert investment. AI translation creates an identical asymmetry. Generating a translated pharmaceutical document costs almost nothing. Verifying that its technical content is accurate and compliant requires genuine expertise — and that expertise has a real cost that automated systems cannot eliminate.
The regulatory dimension and risks of non-compliance
Platforms that dismantled their professional fact-checking programmes are now subject to investigation under the EU's Digital Services Act for failing to mitigate systemic risks. The EFCSN explicitly argues that these withdrawals may constitute DSA non-compliance.
Enterprises deploying AI translation in regulated sectors face an analogous trajectory. Under the EU AI Act, translation systems deployed in high-risk contexts require documented human oversight. The compliance question is not whether validation will eventually be required; it is whether organisations build it into their workflows before or after a regulatory or reputational failure forces the issue.
The argument from the EFCSN report
The white paper states that "the consensus is growing that AI needs human expert oversight — not because AI is useless, but because it fails specifically where the stakes are highest and the training data thinnest." It documents how platforms that replaced professional expertise with automated alternatives made their users measurably worse off, and calls for hybrid models that combine the scale of automated systems with the accuracy of human specialists.
This is the model that specialist translation validation offers: not a rejection of AI, but its professional completion. The AI does the work. The expert certifies it. That is not a loss of faith in the technology; it is the only responsible workflow for content that carries legal, regulatory, or safety consequences.
Want to assess the quality of your AI translation? Request a segment audit with no obligation.