
Article Report

KJR’s Feedback on the Mandatory Guardrails for High-risk AI


In Australia, we’ve seen a wave of new frameworks and policies aimed at ensuring the responsible use of AI. From the Voluntary AI Ethics Principles introduced in 2019 to the recent National AI Safety Framework and the proposed mandatory guardrails for high-risk uses of AI, the landscape may seem overwhelming at first. But despite the apparent complexity, a clearer picture is emerging. These policies are designed to create a unified approach, helping both public and private sector organisations navigate AI’s risks while still reaping its many benefits.

Fear-based Approach vs Risk-based Mindset

At KJR, we’re committed to helping organisations understand these frameworks and adopt AI responsibly. Many organisations are hesitant to adopt AI because of perceived risks. The key is to move from a fear-based approach to a risk-based mindset. Start small – identify a single AI use case, develop proportional controls, and build from there. The Voluntary AI Safety Standard provides a good checklist of things to consider when putting appropriate governance controls in place, and the Foundational AI Risk Assessment guideline developed by the Queensland Government provides a very practical approach to risk assessment. Both are built on more detailed international standards, but are more accessible for everyday use.

In a recent VDML podcast episode on practical AI assurance, we had the pleasure of hosting two distinguished guests: James Gauci, founder of the AI governance platform Ethē, and Sean Johnson of Lakefield Drive, who works extensively with boards on developing AI policy and adoption strategy. It was a great conversation and there’ll certainly be a need for a part two, but some key takeaways include:

Ignoring AI is not a practical option: switching off AI to avoid regulation is more common than expected at the moment, but as AI becomes embedded in our everyday tools, the right governance processes, staff training and, most importantly, an organisational culture of responsible AI usage will be essential to avoid harm. Attempting to block legitimate use of AI-enabled tools only creates a culture of clandestine usage, which increases risk rather than reducing it.

Appropriate regulation is an innovation enabler: Australian frameworks for responsible AI usage give clarity around the steps required to ensure that AI can be used for the benefit of all. They set out specific expectations around processes such as risk assessment, testing and monitoring, and transparency about AI usage. As a result, organisations wanting to use AI to enhance the way they work can invest in AI-enabled solutions with greater confidence, without exposing themselves to unexpected risks arising from inappropriate usage of AI or from AI solutions that are inherently unsafe in a specific context.

KJR's Feedback on the Mandatory Guardrails for High-risk AI

The journey to responsible AI usage is, of course, ongoing, and there is more work to be done, especially in areas of high-risk AI usage. Like many organisations around the country, KJR provided feedback on the federal government’s proposal to introduce mandatory guardrails for high-risk AI. A formal regulatory framework which goes beyond voluntary compliance is essential to deliver certainty around appropriate usage of AI in contexts such as healthcare and public safety. The key to the effectiveness of such a framework will be ensuring that responsibility for AI safety is distributed appropriately across the technology supply chain: vendors of AI models and solutions have a duty of care to develop technology which meets the legal and ethical requirements of their users, and deployers of AI solutions in turn have a duty of care to apply that technology safely. Developing the domestic capability to build and deploy AI responsibly is an essential step towards Australia being an active participant in the AI industry.

By embracing AI responsibly, we can transform this technology into a tool for meaningful impact. Whether through mandatory guardrails or proactive governance, the goal remains the same: to harness the benefits of AI while ensuring its safe and ethical use.