The risks in using AI and the cost of getting it wrong

Nov 24, 2025

5 mins

AI can lift the quality and speed of professional work, but getting it wrong carries real costs. Recent Australian cases show what happens when organisations deploy AI without the right checks, governance, and professional oversight. This recap distils the lessons and sets out how Praxio AI - Tax Assistant helps firms use AI safely and productively.

What recent Australian cases tell us

Customer service disruption and reputational damage
A major Australian bank introduced an AI voice bot and reduced call centre roles. Instead of easing pressure, call volumes rose, clients were dissatisfied and the bank reversed the job cuts. The public backflip created reputational risk and operational disruption. The lesson is clear. Do not remove people before you prove the benefit at scale.

False citations and professional penalties
In an Australian first, a lawyer was penalised after court filings included AI generated references that did not exist. The court treated the failure to verify sources as a professional conduct issue, with consequences that can include suspension or loss of the right to practise. For accountants, the parallel is obvious. If a report or advice cannot evidence its sources, the risk of disciplinary action, complaints, and loss of trust is real.

Quality failures in paid work for government
A professional services firm refunded fees after an Australian government report contained errors linked publicly to the use of AI. Incorrect references required rework and corrections. Beyond the repayment, the incident drew national attention and damaged credibility.

These examples are not about AI being bad. They are about weak design, poor controls, and removing the human in the loop. The costs include refunds, remediation, brand damage, and in some cases penalties.

Common risk patterns and practical tips

Hallucinations and fabricated references
Large language models can produce confident but wrong answers and can even invent sources. A draft advice note may include a reference that does not exist, which is only discovered at review or, worse, after it reaches a client.

Practical tip: require citations from credible sources every time, click through to verify, and record that check in your file.

Overreliance and loss of human review
AI is sometimes treated as self-checking, which leads teams to reduce reviewer time or skip second partner sign off. Errors then pass into client deliverables.

Practical tip: keep human review as a non-negotiable step and make clear that AI outputs are research and draft outputs, not final advice.

Weak governance and unclear boundaries
Unrestricted chatbots can drift into giving what looks like client advice or public promises that do not reflect firm policy or the law.

Practical tip: set internal guidelines that define acceptable use, tone, disclaimers, and clear handoff points where a human must take over.

Privacy and confidentiality gaps
Sensitive client information is sometimes pasted into public tools that reuse prompts or store data outside firm control. Once released, the information may not be recoverable.

Practical tip: sanitise prompts and other inputs before they go into any AI tool, and do not include names, TFNs, bank details, addresses, or other identifiers. Keep any client prompts and information in secure, purpose built tools that provide encryption, tenant isolation, audit history, and the ability to export or delete data.
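As a sketch of what sanitising can look like in practice, the hypothetical helper below masks a few common Australian identifier patterns (TFN-style number runs, email addresses, mobile and landline numbers) before text leaves the firm. The patterns are illustrative only and nowhere near exhaustive; a real sanitiser also needs to handle names, addresses, ABNs, and account numbers, with human review of the result.

```python
import re

# Illustrative patterns only -- not an exhaustive identifier list.
REDACTIONS = [
    (re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"), "[TFN REDACTED]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL REDACTED]"),
    (re.compile(r"\b(?:\+61|0)[2-478](?:[ -]?\d){8}\b"), "[PHONE REDACTED]"),
]

def sanitise(text):
    """Mask common identifier patterns before text is sent to an AI tool."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Client Jane (jane@example.com, TFN 123 456 789) asks about Div 7A."
print(sanitise(prompt))
# -> Client Jane ([EMAIL REDACTED], TFN [TFN REDACTED]) asks about Div 7A.
```

Pattern-based masking is a first line of defence, not a guarantee, which is why the tip above also points to keeping client material inside secure, purpose built systems rather than public tools.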

Operational risk and people risk
Rushing into production can increase workload rather than reduce it. Teams are told to rely on a new tool, quality dips, rework rises, and morale suffers because staff feel they are being replaced rather than supported.

Practical tip: pilot in one team with clear success measures, compare error rates and cycle time against your current baseline, invest in training, and scale only when quality and throughput improve consistently.

How Praxio AI helps firms get it right

Praxio AI - Tax Assistant (www.praxio-ai.com.au) was built for Australian public practice with ethics and accountants' professional requirements at the centre. Our design choices are built on the following guiding principles:

Credible sources only
Research is grounded in Australian legislation and ATO materials including Tax Rulings, Practical Compliance Guidelines, Law Companion Rulings, Practice Statements, and other authoritative guidance. Every answer provides citations and links so you can verify before advice is issued.

Human in the loop
The practitioner remains in control. You review the research and the client ready draft. The platform does not bypass professional judgement or send advice on your behalf.

Transparent and auditable
Your full chat history is saved. You can see the questions asked, the facts that changed, and how the research evolved. This supports internal review, quality assurance, and training.

Interactive by design
You can add facts and refine assumptions. The system also asks clarifying questions where it sees uncertainty so the research can be tightened before you send advice.

Privacy and security first
Data is encrypted in transit with modern TLS, including TLS 1.3, and at rest with AES-256. Our control framework aligns to recognised standards such as SOC 2. Your content is not used to train upstream models. You can download or delete your data at any time.

Efficiency and profitability
Reduce time spent on technical research and redrafting so teams can focus on higher value work. Improve recovery and utilisation, make fixed fee work more viable, and increase throughput without compromising quality.

Some practical tips for adopting AI

  • Set firm guidelines that require citations, source checking, and documented human review

  • Sanitise prompts and AI inputs and avoid including any sensitive client details in external tools

  • Keep client information in secure, purpose built systems with encryption, tenant isolation, and audit history

  • Pilot internally on research and redrafting tasks, measure accuracy and time saved, then scale

  • Separate outputs into a client ready draft and a technical research note for the file

  • Train teams on prompt design, verification steps, and confidentiality expectations

  • Track outcome metrics such as time saved, error rate, and review findings to prove value

The bottom line

The real risk is not AI itself. It is removing professional judgement, skipping verification, and pushing change faster than the evidence supports. Australian cases show that the costs of getting it wrong are real and public, and in regulated professions penalties can extend to suspension or loss of practising rights. With the right workflow and controls, AI can lift quality and speed without sacrificing trust.

Praxio AI - Tax Assistant is designed to help Australian accountants work faster while protecting quality and confidentiality. Try it on your next complex query and review the citations, research trail, and client draft side by side. If you would like a short live run-through for your team, we can arrange a session that focuses on safe adoption and immediate time savings.

About the Author
William Young FCPA GAICD is a contributor to Praxio AI, a purpose-built AI platform for Australian tax and accounting professionals. He is a recognised speaker on AI in accounting, known for delivering practical, ethical, and forward-looking advice to the profession.