The Brief
Anthropic and the Australian Government signed a memorandum of understanding covering AI safety evaluations, university research collaboration, and notably, data sharing from Anthropic's Economic Index on how AI is reshaping work across Australian industries.
The initial focus: natural resources, agriculture, healthcare, and financial services. Anthropic also confirmed a Sydney office opening in 2026, its fourth in Asia-Pacific. The Economic Index data is revealing: Australia ranks seventh globally on AI usage intensity, ahead of every English-speaking country except the US and Canada. New South Wales captures 37% of conversations, Victoria 31%.
Australian users also demonstrate the most diverse range of AI tasks among English-speaking nations. The government now has a direct line into how AI is actually being used, not surveyed, not estimated, but observed.
The real story this week is in the boardroom: two out of three directors are using AI, yet barely one in seven boards has anyone who knows how to govern it. And the governance gap cuts both ways.
ARTIFICIAL INTELLIGENCE
Only 13% of Boards Know How to Govern AI but That's Not the Biggest Problem
Two out of three Australian directors used AI for board work in the past six months; 40% used more than one AI application. That's from the AICD's own survey, not a tech vendor's press release. Now look at the governance side:
Only 13% of boards have appointed a director with AI expertise.
Just 21% require any AI-related training.
Only 37% have audited how AI is actually being used across their organisation.
Across the broader market, 88% of organisations are deploying AI, but only 25% have a board-level policy governing it.
Australia's regulatory position hasn't simplified things. The government moved away from mandatory AI guardrails in favour of technology-neutral regulation, standing up the AI Safety Institute with $29.9 million in funding. Meanwhile, the 2024 Privacy Act amendments take effect in late 2026 and will require organisations to disclose automated decision-making in their privacy policies. The rules are coming, but they're arriving in pieces, not as a single compliance event boards can prepare for.
The pattern here is worth paying attention to
Most commentary frames this as a governance gap: boards need to catch up, establish policies, appoint AI-literate directors. That's true. Under section 180 of the Corporations Act, directors have a duty to be informed of material risks. If AI is reshaping your operations, your customer experience, and your competitive position, and the board has no visibility, that's a governance failure with personal liability implications.
But there's an equal and opposite failure that gets far less attention: applying analogue governance to an exponential capability.
If your AI governance framework is a list of restrictions (approval gates, usage policies, committee reviews) designed for static, predictable systems, you haven't protected the business. You've frozen it.
Here's the maths.
Your competitor's team of five, working with AI as a force multiplier, is now producing the output of twenty-five. Your team of five, operating under a restrict-first governance framework, is still producing the output of five. The governance didn't reduce risk. It manufactured a different risk: competitive obsolescence.
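The arithmetic above can be sketched in a few lines. The 5x multiplier is the hypothetical figure used in this illustration, not a measured benchmark; real gains will vary by task and organisation.

```python
# Hypothetical illustration of the output gap described above.
# AI_MULTIPLIER = 5 is an assumed force multiplier, not a measured figure.
TEAM_SIZE = 5
AI_MULTIPLIER = 5

adopter_output = TEAM_SIZE * AI_MULTIPLIER  # competitor: 25 units of output
restricted_output = TEAM_SIZE * 1           # restrict-first team: 5 units

gap = adopter_output / restricted_output
print(f"Competitor produces {gap:.0f}x your output")
```

The point isn't the exact multiplier; it's that a governance framework which holds the multiplier at 1 converts a safety control into a competitive cost.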
5 Questions To Ask
Where is AI already operating in this organisation, and where should it be but isn't? You can't govern what you can't see, but the audit should also surface where you're under-deployed.
Who is accountable when an AI system gets it wrong and who is accountable when we're too slow to adopt? Liability runs in both directions. If accountability only covers AI failure and not strategic inaction, the board is managing half the risk.
Is our AI policy enabling adoption or restricting it? 61% of boards have set restrictions on employee AI use. Far fewer have built adoption frameworks. A policy that's only a list of don'ts is a brake with no engine.
Are we measuring AI ROI at board level or just AI risk? If AI only appears on the risk register and never on the strategy slide, the board is governing with one eye closed.
Is AI on our risk register as a strategic risk AND a strategic capability? The AICD's 2026 governance priorities flag AI ethical and regulatory issues as a top risk, but competitive obsolescence from failing to adopt AI belongs on the same register.
What This Means For You
Good AI governance is not a compliance exercise. It's a competitive capability. The boards that get this right will do three things:
Audit: Know where AI is and isn't operating.
Enable: Build frameworks that accelerate smart adoption, not just restrict risky use.
Measure: Track AI's output multiplier at board level, not just its risk profile.
The question isn't whether your board is protected. It's whether your board is governing at the speed your competitors are moving.
PUBLIC SECTOR
Public Sector Trials - GovAI Chat
The federal government's secure AI chatbot for public servants, GovAI Chat, entered alpha trials this month, with beta planned for July.
Built to government security standards, it gives APS employees access to multiple AI models from a single platform with document upload capability. The government committed $166 million to the project.
Critically: prompts and uploaded documents are not retained by AI model providers for training. This is the government putting its own AI adoption into practice, not just regulating everyone else's.
CAREERS
AI Jobs Are Now the Fastest Growing in Australia
Demand for AI professionals has grown more than 40% year-over-year, outpacing accounting, law, and medicine.
AI literacy, not AI engineering, is now the most in-demand skill Australian employers are hiring for, with eight in ten company leaders saying they'd prefer a candidate comfortable with AI tools over one with more experience but less AI proficiency.
Finance leads AI-focused hiring at nearly 12% of job ads, followed by tech and communications at 7%. The shift is clear: AI competence is moving from a specialist requirement to a baseline expectation.
AI IN PRACTICE
Board Directors: Run a 30-Minute AI Audit Before Your Next Meeting
Most boards don't know where AI is operating in their organisation. The AICD found only 37% have conducted an audit. Here's how to run one in 30 minutes. No technical expertise required.
Step 1: Ask three questions of each direct report (10 minutes)
Send a one-line email to your CEO and each executive team member:
"Can you list any AI tools, automated systems, or machine learning models currently in use in your area including any your team uses informally?"
You'll be surprised what comes back. Marketing is using ChatGPT for copy. Finance has an AI forecasting tool. Customer service deployed a chatbot six months ago. HR is screening resumes with an AI plugin. Most of this was never reported to the board.
Step 2: Map the responses to a simple risk-and-value grid (15 minutes)
For each AI tool or system identified, ask two questions: (1) Does it touch customer data, make decisions about people, or affect financial reporting? If yes, it's high-risk and needs governance. (2) Is it saving measurable time or money? If yes, it's high-value and needs scaling.
Plot each one on a 2x2: high-risk/low-value (shut it down or fix it), high-risk/high-value (govern it properly), low-risk/low-value (monitor), low-risk/high-value (scale it).
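The Step 2 grid reduces to two yes/no answers per tool. A minimal sketch of the mapping, with hypothetical tool names and answers for illustration:

```python
# Minimal sketch of the Step 2 risk-and-value grid.
# Tool names and yes/no answers below are hypothetical examples.

def quadrant(high_risk: bool, high_value: bool) -> str:
    """Map the two audit answers to the recommended board action."""
    if high_risk and high_value:
        return "govern it properly"
    if high_risk:
        return "shut it down or fix it"
    if high_value:
        return "scale it"
    return "monitor"

# (tool, touches data/decisions/reporting?, saves measurable time or money?)
audit_responses = [
    ("resume-screening plugin", True, False),
    ("AI forecasting tool", True, True),
    ("internal meeting-notes bot", False, False),
    ("copywriting assistant", False, True),
]

for tool, risk, value in audit_responses:
    print(f"{tool}: {quadrant(risk, value)}")
```

Even on paper, the same two questions per tool produce the one-page view Step 3 asks for.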
Step 3: Put it on the board agenda (5 minutes)
Take your grid to the next board meeting. You now have a one-page view of where AI is, what it's worth, and where the governance gaps are. That's more than 63% of boards currently have.
Want to go deeper? The AICD's Directors' Guide to AI Governance provides a full eight-element framework for safe and responsible AI oversight. Start with the audit. The framework comes after.
In Summary
The governance conversation is shifting. It's no longer just about whether your board has an AI policy.
It's about whether that policy is a brake or an engine. The boards that treat AI governance as a competitive capability, not a compliance checkbox, are the ones that will still be relevant in five years.
Start with the audit.
Ask the five questions.
Govern at the speed your competitors are moving.
Until next week,
The AI Brief

