AI is moving from “helping lawyers find documents faster” to quietly rewiring how regulation itself is interpreted, implemented and audited — and the real action is now at the intersection of legal-tech, RegTech and AI governance. Recent advances in large language models and retrieval‑augmented generation have made it feasible to read, classify and connect millions of pages of regulatory text, enforcement actions and policies in ways that were impossible even five years ago, yet financial institutions and in‑house legal teams still struggle to turn experiments into reliable, defensible systems of record for compliance.
Against that backdrop of promise and risk, Corlytics has emerged as one of the specialist players trying to industrialise “AI for regulation”, not just by building models, but by embedding AI into the full regulatory‑risk lifecycle. Corlytics focuses on what it calls “regulation actioned”: turning sprawling rulebooks, supervisory statements and enforcement cases into structured, machine‑readable intelligence that banks, insurers and other regulated firms can actually operationalise across horizon scanning, obligations management, controls mapping and policy governance.
Rather than offering generic contract analytics or e‑discovery tools, the company has built its platform specifically for prudential and conduct regulation, ingesting and analysing tens of millions of pages of regulatory material across jurisdictions and regulators, then layering domain‑specific machine learning on top.
As Chief Data Officer, Dr Oisin Boydell sits in the middle of this shift. With a background spanning applied AI research, industry collaboration and now specialised RegTech, he is responsible for how Corlytics designs, builds and governs the models that power its regulatory‑intelligence products. In practical terms, that means deciding how AI is embedded “by design” into workflows, what guardrails are needed for high‑stakes use cases, and how to keep humans firmly in the loop even as automation scales.
In this interview with AI & Data Insider, Boydell unpacks what an AI‑by‑design organisation actually looks like from a CDO’s chair, how Corlytics balances automation and human agency in compliance workflows, and why he believes that, in 2026, CCOs should prioritise integrated, end‑to‑end AI‑powered compliance platforms over a patchwork of narrow point tools.
What differentiates the organisations that are turning AI into real regulatory and legal advantage from those stuck in experimentation?
Organisations often get stuck in the experimentation stages of AI development and rollout. While it can be relatively straightforward to develop and trial AI demos that work impressively well at small scale, with limited and carefully curated data, it is much more challenging to implement and deploy AI systems that are robust, scalable and verifiable, and that can handle the variety and complexity of the real world.
This is particularly true in the regulatory and compliance space where AI systems must be highly accurate and consistent, and produce verifiable and explainable outputs linked back to source data.
Whilst some organisations might have the expertise, resources and capability to propel their own regulatory and compliance AI experiments towards real-world value, a more effective approach for most is to adopt a trusted, validated AI solution from a proven RegTech provider. They can then gain the advantages of a specialist product that has been tried and tested, rather than grappling with challenges that a well-designed, dedicated compliance solution has already solved.
From a CDO perspective, what does an AI-by-design organisation look like in practice, and how different is that from simply sprinkling models onto existing workflows?
An AI-by-design organisation integrates AI capabilities at the heart of its systems and processes, and views AI not just as an add-on but as an opportunity to fundamentally improve how things are done. This brings to mind Henry Ford, who is famously quoted as saying that if he’d asked customers what they wanted, they’d have said “a faster horse” — instead, he innovated the automobile. Similarly with AI, it’s about understanding the current and emerging capabilities of the technology, as well as its limitations and constraints, and designing those into products and workflows from the ground up to create real innovation.
Another key aspect in practice is to ensure all the frameworks and supporting processes around AI are in place; for example, strong and integrated AI governance across the organisation. In our case, we have implemented an AI Management System (AIMS), independently certified to the ISO 42001 standard, which ensures best practice across all aspects of ethical and responsible AI development.
How do you design Corlytics’ workflows so that compliance professionals stay engaged and in control, without turning the AI into a black box they blindly trust?
The involvement of compliance professionals at every level of the compliance process is critical for oversight and accountability. But there is a balance needed between automation to improve efficiency and reduce complexity, and the need for manual oversight and control.
At Corlytics, the workflows we design across our regulatory compliance solutions take this balance into account, and are designed to support human agency and decision making rather than replace it. Our AI systems are all designed with explainability built in, not as an add-on feature. We are also very aware of the risk of ‘automation complacency’ – the tendency for users to place too much trust in automated systems, especially when the reasoning behind AI-generated decisions is not fully transparent.
Where is AI indispensable, and where do humans still have to make the hard calls?
AI is an indispensable tool for processing and analysing huge volumes of rapidly changing information, such as filtering and assessing large numbers of regulatory horizon scanning alerts, or analysing controls coverage with thousands of controls across hundreds of distinct regulations. However, compliance teams still have to make the final decisions based on this information – accountability always goes back to the humans in charge.
At Corlytics, we see AI as supporting compliance professionals and compliance teams to make better decisions, more efficiently, through providing better context, more relevant information, and AI-powered recommendations to support those decisions, rather than trying to replace human decision making.
How do you prevent and validate against AI hallucinations so model outputs remain reliable in high‑stakes regulatory use cases?
Preventing AI hallucinations and maintaining high accuracy and trust in AI systems begins with grounding AI models with complete, up-to-date and relevant data and context, and then applying rigorous human-in-the-loop evaluation and testing.
Corlytics’ AI systems are grounded in the extensive datasets of regulatory and related data that we have been aggregating over the 10+ years since the company was founded. Our AI systems also leverage detailed context at inference time from the client and user perspective. However, by design we do not use any client data to train or tune our AI models, and we maintain strict data privacy standards. We also have an in-house team of compliance experts who work closely with the data science team on validation and testing, as well as providing annotations and feedback to continually improve model performance and accuracy.
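To make the grounding idea concrete, here is a minimal, hypothetical sketch of retrieval-grounded output: every answer is returned together with the source passages that support it, so a human reviewer can trace the claim back to the underlying regulatory text. The identifiers, the naive term-overlap retriever and the function names are illustrative assumptions for this sketch, not Corlytics’ actual architecture (a production system would use a far more sophisticated retriever and model).

```python
# Hypothetical sketch: grounding outputs in source documents so every
# claim is traceable. Names and the scoring method are illustrative only.
from dataclasses import dataclass


@dataclass
class Passage:
    doc_id: str  # e.g. a regulation or enforcement notice identifier
    text: str


def retrieve(query: str, corpus: list[Passage], k: int = 2) -> list[Passage]:
    """Rank passages by naive term overlap with the query
    (a stand-in for a production retriever such as dense embeddings)."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(terms & set(p.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def grounded_answer(query: str, corpus: list[Passage]) -> dict:
    """Return supporting passages alongside the query, so a compliance
    reviewer can verify any generated output against its sources."""
    support = retrieve(query, corpus)
    return {
        "query": query,
        "sources": [p.doc_id for p in support],
        "evidence": [p.text for p in support],
    }


corpus = [
    Passage("REG-001", "Firms must report suspicious transactions within 24 hours."),
    Passage("REG-002", "Client assets must be segregated from firm assets."),
]
result = grounded_answer("When must firms report suspicious transactions?", corpus)
```

The point of the pattern is the shape of the output: no free-floating answer, only answers paired with citable evidence that a human can check.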
If a Chief Compliance Officer asked you where to start with their AI roadmap—given limited budgets and a long list of potential use cases—what would you advise them to prioritise in 2026?
Rather than trying to pick off individual, narrow compliance AI use cases one by one, which ends up producing a tangle of disconnected, disjointed solutions, I would definitely advise investing in an AI-powered compliance solution that addresses the full compliance lifecycle, from a trusted and experienced RegTech vendor.
The benefits of AI really come to the fore in compliance management where you have a fully connected end-to-end solution: horizon scanning and regulatory change notifications are linked through to impacted regulations, which then trigger updates to relevant obligations, which in turn are automatically mapped to client controls and all the way through to internal policies. AI can leverage this interconnected data in a way that point solutions cannot, with the benefit of delivering a full end-to-end audit trail and turning what can be a reactive exercise into proactive management.
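The end-to-end chain described above can be sketched as a simple linked data model: one regulatory change alert fans out through regulations, obligations and controls down to policies, and every hop is recorded as an audit-trail edge. All identifiers and mappings below are invented for illustration; a real platform would hold these links in a graph or relational store, not hard-coded dictionaries.

```python
# Illustrative sketch (hypothetical identifiers, not a real data model):
# a horizon-scanning alert linked through regulations and obligations
# down to controls and policies, yielding a full impact audit trail.
reg_for_alert = {"ALERT-7": ["REG-MIFID2"]}
obligations_for_reg = {"REG-MIFID2": ["OBL-BEST-EXECUTION"]}
controls_for_obligation = {"OBL-BEST-EXECUTION": ["CTRL-014"]}
policies_for_control = {"CTRL-014": ["POL-TRADING"]}


def impact_trail(alert_id: str) -> list[tuple[str, str]]:
    """Walk the linked layers and record each hop as an audit-trail edge."""
    trail = []
    for reg in reg_for_alert.get(alert_id, []):
        trail.append((alert_id, reg))
        for obl in obligations_for_reg.get(reg, []):
            trail.append((reg, obl))
            for ctrl in controls_for_obligation.get(obl, []):
                trail.append((obl, ctrl))
                for pol in policies_for_control.get(ctrl, []):
                    trail.append((ctrl, pol))
    return trail


trail = impact_trail("ALERT-7")
```

Because every layer is linked, a single alert yields the complete set of impacted obligations, controls and policies in one traversal — which is exactly what disconnected point tools, each holding only one layer, cannot provide.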