In February 2026, Wesfarmers announced an expanded AI partnership with Microsoft covering agentic commerce, supply chain automation, and workforce productivity across its retail brands, including an AI assistant already deployed for Bunnings store staff. Wesfarmers employs more than 118,000 people across some of Australia's largest consumer-facing businesses. It is not an edge case. It is a preview of where every major Australian employer is heading.
Now ask the compliance question that nobody in those partnership announcements addresses: if the law requires a business to give specific instructions to a customer, and an AI agent gives those instructions wrong, who is liable? Under Australian law, the answer is unambiguous. The business is liable. The AI did not hallucinate on its own behalf. It hallucinated on yours.
Australia does not have a standalone AI law. It does not need one. The existing legal framework already creates binding obligations for any business using AI in customer-facing, employment, or financial contexts. And the enforcement infrastructure is tightening fast, with a hard compliance deadline arriving on December 10, 2026 and penalty ceilings that doubled in March 2026.
If you are deploying AI in Australia or serving Australian customers, this is what you need to understand.
The December 10, 2026 Deadline: ADM Transparency Becomes Law
The Privacy and Other Legislation Amendment Act 2024 has already passed. It amends Australia's Privacy Act 1988 to introduce new Australian Privacy Principles (APP 1.7, 1.8, and 1.9) requiring organizations to disclose when automated decision-making systems use personal information to make decisions that significantly affect individuals' rights or interests.
These obligations take effect on December 10, 2026. They require regulated entities to include in their privacy policies the kinds of decisions that are substantially automated and the kinds of personal information used in those decisions. The Office of the Australian Information Commissioner (OAIC) is publishing guidance progressively throughout 2026.
This applies to AI systems used in hiring, lending, insurance underwriting, customer analytics, benefits eligibility, and any other context where personal information feeds into an automated decision with material consequences for the individual. If your AI system takes personal data as input and produces a decision, recommendation, or classification that affects someone's rights, you need to disclose that process and be able to explain it.
The penalties for Privacy Act breaches are already significant. Since the 2022 reforms, serious or repeated breaches can attract penalties of up to the greater of A$50 million, three times the benefit obtained, or 30% of adjusted turnover. For organizations running AI at scale across customer-facing operations, the exposure compounds with every undisclosed automated decision.
The Attorney-General's Department is also developing a consistent legislative framework for ADM in government services, responding to the Robodebt Royal Commission's recommendation for transparency and safeguards around automated government decision-making. While the government framework is still in development, the Privacy Act obligations for the private sector are final.
Australian Consumer Law: Your AI's Hallucinations Are Your Liability
This is the part most international companies miss about the Australian market. The Australian Consumer Law is not a technology regulation. It is a strict liability regime for misleading conduct that applies regardless of how the misleading content was generated.
Section 18 of the ACL prohibits misleading or deceptive conduct in trade or commerce. The critical feature is that it can be contravened without fault. A business that acts honestly and reasonably still breaches the prohibition if its conduct is misleading. Intent does not matter. The output matters.
Applied to AI, this creates a straightforward liability chain. If an AI-powered chatbot produces a hallucination containing false information about a consumer's rights, a product's features, a service's terms, or a company's policies, the business operating that chatbot has engaged in misleading conduct under the ACL. The business did not intend to mislead. The AI generated the false content. The liability sits with the business anyway.
The ACCC has explicitly flagged this risk. It is actively monitoring AI-enabled practices including reviews, claims, and pricing models, and has identified "AI-washing" (misleading claims about AI capabilities) as an enforcement concern for 2026-27. In 2022, the ACCC obtained penalties against a business for misleading representations arising from the operation of an algorithm, establishing the precedent that algorithmic outputs are subject to the same consumer law scrutiny as human-generated claims.
The penalty exposure is severe and just increased. From March 28, 2026, maximum penalties for ACL breaches doubled. The first limb jumped from A$50 million to A$100 million. The other limbs (three times the benefit obtained, or 30% of adjusted turnover) remain unchanged. For a large enterprise with significant Australian revenue, a single AI-driven misleading conduct case could produce nine-figure penalties.
The Scenario That Has No Current Solution
Consider the operational reality for any large Australian employer scaling AI across customer-facing workflows. Companies like Wesfarmers are deploying AI assistants for store staff, rolling out Copilot across business units, and exploring agentic commerce where AI takes actions on behalf of the business. The same trajectory is visible across financial services, telecommunications, healthcare, and government. This is not experimental. This is the operating model.
Australian law frequently requires businesses to provide specific instructions, disclosures, or warnings to customers. Financial services companies must give prescribed product disclosure statements. Healthcare providers must deliver informed consent information. Insurance companies must present policy terms in specific ways. Telecommunications companies must disclose contract terms and pricing. Employers must provide workplace rights information. Retail businesses must accurately represent product claims, warranty terms, and refund policies.
When a human delivers these instructions, compliance teams can train them, audit them, and correct them. When an AI agent delivers these instructions, the compliance exposure changes fundamentally.
The AI may omit a required disclosure because the retrieval system did not surface it. The AI may paraphrase a legally prescribed statement in a way that changes its meaning. The AI may combine accurate information from multiple sources into a response that is misleading in aggregate. The AI may confidently state something about a product or policy that was true six months ago but changed after the last training data cutoff. The AI may deliver different versions of the same required information to different customers based on how they phrase their questions, creating inconsistent treatment that itself becomes a compliance issue.
In each case, the business has failed to deliver a legally required communication. Under the ACL's strict liability regime, the fact that the AI generated the error rather than a human is not a defense. The business is liable for the output, not the intent.
At the scale these deployments are reaching, manual review of every AI output is not feasible. The volume of interactions is too high. The review latency would destroy the operational value of using AI in the first place. And by the time periodic audits catch a pattern, the noncompliant output has already been delivered to thousands of customers.
There is currently no widely deployed system that evaluates AI outputs against legally required disclosures and business-specific policy constraints at runtime, before the output reaches the customer, while producing the audit evidence that demonstrates compliance was enforced at the moment of delivery. That is the gap.
What Australia's Regulatory Architecture Looks Like in Practice
Australia does not have a single AI regulator. It has multiple regulators with overlapping jurisdiction, and all of them are active.
The OAIC enforces the Privacy Act, including the new ADM transparency obligations. The ACCC enforces consumer law, including misleading conduct by AI systems. ASIC regulates AI in financial services, including algorithmic trading and AI-driven financial advice. APRA expects governance and risk management for AI in banking, insurance, and superannuation. The TGA regulates AI classified as medical devices in healthcare. The eSafety Commissioner oversees AI-generated harmful content under the Online Safety Act.
The government's position, reinforced by the April 2026 response to the Senate Select Committee on Adopting AI, is that existing laws are technology-neutral and already apply to AI. The Treasury review of AI and Australian Consumer Law concluded that the ACL is "fit for purpose" for AI-related consumer harms. No standalone AI legislation is expected in the near term. The expectation is that businesses comply with existing frameworks and that regulators enforce them.
The AI Safety Institute, operational from early 2026 with approximately A$29.9 million in government funding, coordinates risk assessment and provides guidance, but it does not create new legal obligations. Australia's Voluntary AI Safety Standard and AI Ethics Principles provide frameworks for responsible AI, but they are voluntary. The binding obligations come from the Privacy Act, the ACL, and sector-specific regulators.
For companies deploying AI in Australia, this means compliance is not a future concern that depends on legislation passing. The obligations are live. The enforcement infrastructure is active. The penalties are real and recently doubled.
What Runtime Enforcement Looks Like for Australian Compliance
The combination of ADM transparency obligations and strict liability for misleading AI outputs creates a specific compliance architecture requirement. You need three things operating simultaneously.
First, every AI system that uses personal information to make or contribute to decisions affecting individuals needs to be inventoried, classified, and disclosed under the new APP 1.7-1.9 obligations before December 10, 2026. If you cannot answer the question "which AI systems use personal information in automated decisions," you cannot comply with the disclosure requirement.
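To make the inventory step concrete, here is a minimal sketch of an internal ADM register entry in Python. The record structure and the disclosure_gaps helper are hypothetical, not anything prescribed by the APPs; the point is that the privacy-policy disclosure cannot be written without structured answers to exactly these questions.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record for an internal ADM register. Field names are illustrative;
# APP 1.7-1.9 require the privacy policy to describe the kinds of decisions that
# are substantially automated and the kinds of personal information they use.
@dataclass
class AdmSystemRecord:
    system_name: str                         # e.g. "credit-limit-recommender"
    owner: str                               # accountable business unit
    decision_kinds: list[str]                # kinds of decisions that are substantially automated
    personal_info_kinds: list[str]           # kinds of personal information used
    significantly_affects_individuals: bool  # does the decision materially affect rights or interests?
    disclosed_in_privacy_policy: bool
    last_reviewed: date

def disclosure_gaps(register: list[AdmSystemRecord]) -> list[AdmSystemRecord]:
    """Systems that trigger the disclosure obligation but are not yet covered in the privacy policy."""
    return [
        r for r in register
        if r.significantly_affects_individuals and not r.disclosed_in_privacy_policy
    ]
```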
Second, AI outputs in customer-facing contexts need runtime evaluation against the specific claims, disclosures, and instructions the business is legally required to deliver. This is not generic content moderation. It is policy-specific enforcement: does this output contain the required disclosure? Does it accurately represent the product terms? Does it omit information that the law or company policy requires? Does it make claims the business cannot substantiate?
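At its simplest, that kind of check can be sketched as below, assuming a hypothetical rule format: a prescribed disclosure that must appear in any response on a given topic, evaluated before the draft output is released. A real deployment would need semantic matching for paraphrase, substantiation checks for claims, and escalation paths; this only illustrates where the check sits in the flow.

```python
from dataclasses import dataclass

# Hypothetical rule: a legally prescribed or policy-required statement that must
# appear in any AI response about a given topic.
@dataclass
class DisclosureRule:
    rule_id: str
    applies_to_topic: str      # e.g. "refunds", "warranty"
    required_text: str         # the prescribed wording

@dataclass
class EvaluationResult:
    rule_id: str
    passed: bool
    reason: str

def evaluate_output(topic: str, draft_output: str, rules: list[DisclosureRule]) -> list[EvaluationResult]:
    """Run on the draft response before it reaches the customer.
    Any failing result should block delivery or route the response for review.
    This sketch checks only for the verbatim text; rules that tolerate rewording
    would need a semantic comparison instead of a substring test."""
    results = []
    for rule in rules:
        if rule.applies_to_topic != topic:
            continue
        passed = rule.required_text in draft_output
        reason = "required disclosure present" if passed else "required disclosure missing or altered"
        results.append(EvaluationResult(rule.rule_id, passed, reason))
    return results
```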
Third, you need an audit trail that demonstrates, for every customer-facing AI interaction, what policy was in effect, what the AI produced, whether it was evaluated, and what the outcome was. When the OAIC asks how you comply with ADM transparency, or when the ACCC investigates a misleading conduct complaint, the question will be "show us the evidence." A policy document is not evidence. An audit log of enforced policy decisions is.
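A sketch of what one such audit entry could contain, again with hypothetical field names: the policy version in force, a hash of the output, the evaluation results, and whether the response was released. Written to an append-only store, entries like this are the difference between asserting compliance and evidencing it.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(interaction_id: str, policy_version: str, ai_output: str,
                 evaluations: list[dict], delivered: bool) -> dict:
    """Build an audit entry for one customer-facing AI interaction.
    Hashing the output keeps the log compact while still proving what was evaluated."""
    return {
        "interaction_id": interaction_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "policy_version": policy_version,    # which policy was in effect at delivery time
        "output_sha256": hashlib.sha256(ai_output.encode("utf-8")).hexdigest(),
        "evaluations": evaluations,          # rule id, pass/fail, reason for each rule applied
        "delivered": delivered,              # whether the output was released to the customer
    }

# Example: append one entry per interaction as a JSON line.
entry = audit_record(
    interaction_id="int-000123",
    policy_version="refund-policy-v7",
    ai_output="Draft response text...",
    evaluations=[{"rule_id": "acl-refund-disclosure", "passed": True, "reason": "required disclosure present"}],
    delivered=True,
)
print(json.dumps(entry))
```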
Why Australia Is a Bigger Market Than It Looks
Australia's regulatory posture creates an unusual market dynamic. There is no standalone AI law generating headlines and driving urgency the way the EU AI Act and the Colorado AI Act do. But the existing legal framework is arguably more immediately enforceable. The obligations are already in force (ACL) or carry a fixed compliance date (Privacy Act ADM, December 10, 2026). The enforcement agencies are well-funded and aggressive: the ACCC brought A$100 million in penalties against Optus in September 2025 for misleading conduct, and the penalty ceilings have since doubled. And the strict liability standard for misleading conduct means businesses cannot argue "we didn't mean to" when their AI produces false outputs.
AI deployment is accelerating fast. Wesfarmers alone plans to more than double its Copilot footprint. Microsoft committed A$25 billion to Australian digital infrastructure in April 2026, its largest investment globally. Across financial services, retail, healthcare, and government, the pattern is the same: AI moving from pilot to production at a pace that governance programs have not kept up with.
For companies operating AI across Australian customer-facing workflows, the practical question is not whether governance is required. It is whether the governance operates at runtime, at the speed AI generates outputs, with evidence that an auditor or regulator can verify.
We built Aguardic to enforce AI governance policies at runtime across every surface where AI touches the business. If you are deploying AI in Australia and need to comply with ADM transparency requirements and ACL obligations, extract enforceable rules from your existing compliance documents and see where the runtime gaps are before December 10.

