In a recent move, the Australian government released voluntary artificial intelligence (AI) safety standards and a proposals paper, pushing for more regulation of AI in high-risk situations. Alongside this, it has also called for increased adoption of AI across industries and sectors. On the surface, this might seem like a positive step toward innovation, but the reality is more complicated—and potentially dangerous.
Because AI systems are trained on vast datasets that most people can’t even begin to inspect, they produce results we often can’t verify. Even the most advanced AI models, like ChatGPT and Google’s Gemini, have made embarrassing and sometimes troubling errors: researchers have reported declining accuracy in some versions of ChatGPT over time, while Google’s Gemini suggested putting glue on pizza. It’s no wonder public skepticism about AI remains high.
The Australian government’s push for more AI use seems to ignore some important realities about its risks and limitations.
The Real Risks of AI
We’ve all heard the alarm bells about AI potentially causing job losses, but the harms go much deeper. AI systems are already being used in ways that expose real dangers, ranging from the obvious—like self-driving cars that fail to avoid pedestrians—to the more subtle, like biased recruitment algorithms that discriminate against women or racial minorities. AI-powered legal tools can exhibit similar biases, making decisions that unfairly affect people of color.
Deepfake technology, which can convincingly replicate voices and faces, has fueled a rise in fraud, with scammers impersonating co-workers and loved ones to manipulate and deceive their targets.
And here’s the kicker: even the Australian government’s own reports show that, in many cases, human workers are still more efficient and effective than AI. But in the age of shiny new tech, AI becomes the proverbial hammer, and every problem starts to look like a nail. The truth is that not every task requires AI, and it’s critical to understand when AI is the right tool—and when it’s not.
Should We Be Trusting AI—Or Our Government?
As the government calls for increased AI use, we should ask: what does it gain from pushing this narrative? One of the biggest risks comes from data collection. Every time we use AI tools like ChatGPT or Google’s Gemini, they gather enormous amounts of private information—our thoughts, intellectual property, and sensitive data—often processed offshore and outside Australia’s jurisdiction.
While these tech companies promote transparency and security, we rarely know what really happens to our data. Is it used to train new models? Is it shared with third parties or governments? These are questions we often don’t have clear answers to.
Recently, Government Services Minister Bill Shorten proposed a “Trust Exchange” program, developed with the support of major tech players like Google, that raised further concerns about data collection. The potential for a mass surveillance system in Australia is a real and pressing threat.
But beyond data collection, the influence AI has on our behaviors and politics is even more concerning. Automation bias—our tendency to place more trust in the output of automated systems than in our own judgment—leads to over-reliance on AI, which can have dangerous consequences. If we blindly adopt AI without adequate education or regulation, we could be sleepwalking into a society controlled by automated systems, where privacy and trust are eroded.
Regulate AI, Don’t Overhype It
The conversation around AI regulation is vital. The International Organization for Standardization has already established guidelines on AI management, and implementing them in Australia could ensure more cautious and thoughtful AI use.
The Australian government’s push for regulation is a step in the right direction, but the problem lies in the uncritical hype surrounding AI’s adoption. Encouraging more AI use without first educating the public about its risks and limitations is a recipe for disaster.
Instead of blindly pushing Australians to adopt AI, the focus should be on protecting people from its potential harms. AI may be part of our future, but its integration needs to be thoughtful, measured, and backed by strong safeguards—not a rushed embrace of technology for technology’s sake.
In short, let’s dial down the hype and take a closer look at what’s truly at stake with widespread AI adoption. The goal should be to protect Australians, not mandate trust in a tool that still has a long way to go.