Preventing costly chatbot errors through smarter, safeguarded AI adoption
Recent studies, including Dext’s latest industry report, highlight a growing financial risk: businesses are increasingly suffering losses due to AI-generated errors. The report warns that continued reliance on public AI tools could lead to higher insolvency risk, increased HMRC scrutiny and greater misuse of AI outputs to justify inappropriate or fraudulent claims, with 70% of firms calling for formal AI regulation.
Framing AI as something that needs restricting misses the real issue. AI isn’t the problem; uncontrolled, unguided adoption is. AI is here to stay, and innovation won’t slow down because regulation tightens. Instead of trying to hold innovation back, businesses should be encouraged to embrace AI, provided they understand its limitations and train their people to use it safely.
When implemented responsibly, with clear boundaries, transparent citations and human oversight, AI can be hugely valuable. The question isn’t whether businesses should adopt AI, but how they can do so safely, confidently and effectively.
This is where advisers play a crucial role. Instead of discouraging AI, accountants can help clients develop a thoughtful AI strategy that includes robust controls, validated outputs and safe workflows that enhance, not replace, professional judgement.
If clients are turning to AI instead of calling their accountant, that’s not a technology problem. It’s a relationship one. It suggests clients perceive AI as quicker or more accessible, reinforcing the need for stronger engagement, not fear of replacement.
Why relying on AI advice alone creates risk
- AI lacks business-specific context: AI cannot see your financial history, commercial intentions, risk appetite or sector nuances, and it may not reflect the latest HMRC or regulatory updates. In finance and tax, context isn’t optional; it’s essential.
- Automation bias is rising: Under pressure, teams often trust AI without challenge, assuming speed equals accuracy. Treating AI as a decision maker instead of a support tool significantly increases risk.
- AI carries no accountability: When AI-generated advice leads to an error, the liability still sits with the business. “The system told us to” will never stand as a defence with HMRC or regulators.
A safer, smarter way to use AI
AI should absolutely be part of modern finance functions, but only within a structured, well-governed framework:
- Put governance first. Clear policies, approval workflows and usage standards help prevent errors before they happen.
- Use AI for efficiency, not authoritative advice. Drafting, summarising and producing initial analysis are ideal; final decisions require human review.
- Never rely on AI alone for tax or compliance decisions. Always validate with a qualified adviser.
- Treat AI as a prompt, not a verdict. Review assumptions and challenge the output.
- Prioritise transparent citations. Ensure AI tools provide linked sources, references or supporting data for every material claim.
How Moore Kingston Smith can help
Moore Kingston Smith’s digital transformation and outsourcing teams work with businesses to adopt AI safely, strategically and with full control. We facilitate responsible innovation and ensure businesses get the benefits of AI without exposing themselves to hidden financial risks.
If you’re experimenting with AI in your finance processes, or considering doing so, involving our team early will help ensure that innovation becomes a strength for your business, not a liability. Get in touch to find out more.
