By Dane Richards, CEO, JMR Software

As artificial intelligence continues to reshape the insurance landscape, it brings with it both powerful potential and pressing responsibility. AI technologies are becoming increasingly embedded in policy administration systems, supporting everything from underwriting to renewals and claims handling, and questions around accountability and transparency have become unavoidable. Who is responsible when AI makes a mistake? Can these digital tools be held to the same standards we expect of their human counterparts?
In my view, the answer begins with a clear understanding of what AI is, and what it is not. AI is not a legal or moral agent. It’s a tool, a piece of technology designed to assist in decision-making by recognising statistical patterns. Responsibility, therefore, must remain with the organisations that choose to deploy it. This includes not only insurers themselves but also their chosen technology partners.
When we talk about accountability in AI, we are essentially talking about visibility and responsibility. Any AI-assisted or AI-automated decision must be clear, traceable, and actionable. This means that insurers must implement systems where decisions made with the help of AI can be fully audited. They must be able to explain why and how a decision was reached, who initiated it, which model was used, and what data informed it.

This is particularly important in an industry where compliance and customer protection are paramount. Modern platforms must embed traceability and governance into their very architecture, enabling what I call “auditability by design.” This includes detailed audit trails, version control, and structured logs that capture every step in the decision-making process, whether human- or AI-led. These elements are not only essential to ensure operational integrity but also align closely with international standards such as ISO/IEC 27001 and SOC 2.
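To make “auditability by design” concrete, here is a minimal sketch in Python of what a structured audit record for an AI-assisted decision could capture. The schema and field names (initiated_by, model_version, input_data_hash, and so on) are illustrative assumptions, not the actual design of any particular platform:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionAuditRecord:
    """One immutable entry in a decision audit trail (illustrative schema)."""
    decision_id: str      # unique identifier for this decision
    initiated_by: str     # user or service that triggered the decision
    model_name: str       # which model produced the recommendation
    model_version: str    # exact version, so the decision is reproducible
    input_data_hash: str  # fingerprint of the data that informed the decision
    outcome: str          # what was decided (e.g. "renewal_approved")
    rationale: str        # human-readable explanation of why
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def fingerprint(inputs: dict) -> str:
    """Hash the decision inputs so the exact data can later be verified."""
    canonical = json.dumps(inputs, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Example: logging a hypothetical AI-assisted renewal decision.
inputs = {"policy_id": "P-1042", "claims_last_3y": 0, "risk_score": 0.18}
record = DecisionAuditRecord(
    decision_id="D-20250611-0001",
    initiated_by="renewals-service",
    model_name="renewal-risk-model",
    model_version="2.3.1",
    input_data_hash=fingerprint(inputs),
    outcome="renewal_approved",
    rationale="Low risk score and no claims in the last three years.",
)
print(json.dumps(asdict(record), indent=2))
```

Because each record is immutable and fingerprints its inputs, an auditor can later verify exactly which data and which model version produced a given outcome.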
But can AI really be held to the same ethical and operational standards as human decision-makers? I believe it can, provided we adapt our approach. AI should be held to the same outcome standards: fairness, transparency, non-discrimination, and reliability. However, the mechanisms to enforce those standards must evolve.
Humans apply judgment, often shaped by years of experience. AI relies on statistical modelling. Therefore, our ethical frameworks must be embedded into how AI models are trained, governed, and deployed. That includes bias testing, explainability tools, and operational safeguards such as confidence scoring and escalation triggers. For instance, if an AI model delivers a low-confidence decision, that should automatically trigger a human review.
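A minimal sketch of that escalation pattern might look like the following, assuming a hypothetical route_decision function and an illustrative 0.85 threshold:

```python
from dataclasses import dataclass

# Illustrative threshold: decisions below this confidence go to a human.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class ModelDecision:
    outcome: str       # the model's recommended outcome
    confidence: float  # the model's confidence score, between 0.0 and 1.0

def route_decision(decision: ModelDecision) -> str:
    """Apply the escalation trigger: auto-apply confident decisions,
    escalate uncertain ones for human review."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-applied: {decision.outcome}"
    # Low confidence: do not act automatically; queue for an underwriter.
    return f"escalated for human review (confidence={decision.confidence:.2f})"

print(route_decision(ModelDecision("claim_approved", 0.97)))
print(route_decision(ModelDecision("claim_denied", 0.61)))
```

The exact threshold would be tuned per product line against the insurer’s risk appetite; the point is that uncertainty is handled by routing to a person, not by silent automation.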
At JMR Software, we’ve taken this to heart and made auditability a core design principle of our policy administration platform. From the ground up, the platform is built so that every transaction, decision, and change, whether a manual entry or an AI-assisted outcome, is captured in a comprehensive audit trail. This ensures that we can trace every action back to its source, which is crucial when regulators or stakeholders need clarity on what happened and why.
It’s also important to view AI as a tool that augments human expertise rather than a replacement for it. Used well, AI increases efficiency, reduces human error, and enhances decision-making, but only if it is implemented with the right safeguards in place. That means integrating AI into processes that already have strong oversight and designing checks and balances that kick in when the AI strays from expected behaviour.
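As one simple illustration of such a check, the sketch below guards an AI-proposed premium against the band an insurer expects for a given product line; the bands, values, and names are hypothetical, not drawn from any real system:

```python
# Illustrative guard: flag AI-proposed premiums that stray outside
# the band an insurer expects for a given product line.
EXPECTED_PREMIUM_BANDS = {
    # product line -> (minimum, maximum) annual premium, in local currency
    "motor": (300.0, 5_000.0),
    "home": (150.0, 3_000.0),
}

def check_premium(product: str, proposed: float) -> str:
    low, high = EXPECTED_PREMIUM_BANDS[product]
    if low <= proposed <= high:
        return "ok"
    # Outside expected behaviour: block automation and alert an underwriter.
    return f"flagged: {proposed} outside expected band [{low}, {high}]"

print(check_premium("motor", 820.0))    # ok
print(check_premium("home", 12_500.0))  # flagged
```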
As we move forward, the role of internal auditing will become even more critical. From regulatory compliance to claims fraud prevention, AI is not just transforming how we operate; it’s changing how we govern. The industry must be ready to ask hard questions and demand transparency from both its systems and its partners.
This topic is front and centre for many industry leaders, and it’s one I look forward to exploring further at this year’s TechFest. In our upcoming fireside chat, we’ll dive deeper into how insurers can future-proof their operations, build trust in automated systems, and remain accountable in an increasingly intelligent world. Because in the end, no matter how smart our tools become, it’s the people behind them who must take responsibility. And that’s where true accountability lies.