Software Quality Assurance is about trust and resilience, not just bugs. Discover why SQA matters in 2026 and how to apply it at scale.
Australia’s new AI guidance makes an AI governance framework essential. Learn how accountability, testing and human oversight drive trusted AI.
Discover how responsible AI adoption starts with hands-on AI leadership. Learn practical steps, risks, and strategies discussed in KJR’s Trusted AI podcast.
As AI adoption accelerates across government and industry, recent incidents have exposed a critical truth: most AI failures aren’t caused by the technology itself.
The promise of AI in healthcare is enormous, from predicting patient deterioration to reducing preventable errors. But the real challenge isn’t technical; it’s human.
Human-in-the-loop is a vital approach to AI development, embedding human expertise directly in AI decision-making processes.
KJR is thrilled to celebrate our ongoing collaboration with the Goondoi Aboriginal Corporation, a partnership driving groundbreaking initiatives that blend technology with culture. Our shared mission is to preserve and empower.
At KJR, we are passionate about harnessing cutting-edge technology to support and preserve Indigenous culture, art, and land. Our recent collaboration with Vernon Ah Kee is a testament to this commitment. This project brought together advanced technology and ancient heritage, resulting in a unique and impactful piece of work.
The ACS Digital Pulse 2024 report highlights Australia’s evolving technology landscape. KJR proudly supports the report and remains committed to fostering diversity in technology.
Australian government departments face a complex AI regulatory environment of voluntary standards, national frameworks, and international guidelines for responsible AI, each with overlapping elements. Departments must assess these tools and frameworks and align them with their own needs and responsibilities to ensure compliant, ethical and effective AI use.
ISO 42001 provides a comprehensive framework for responsible AI governance, covering risk management, data privacy, bias mitigation, and regulatory compliance. Closely aligned with global regulations, it supports organisations in managing AI risks, building trust, and ensuring ethical, secure AI implementation.
The journey to responsible AI is ongoing and there is more work to be done, especially in high-risk areas of AI use. KJR provided feedback on the federal government’s proposal to introduce mandatory guardrails for high-risk AI.