



KJR CTO Mark Pedersen joined Davey Gibian, founder of Calypso AI and former White House technical advisor, in conversation with David Braue from Information Age to discuss AI explainability and what it means for businesses and society as a whole.

This is an excerpt of an original article published in Information Age on 25th November 2019. Read the full piece here.

In AI we trust – but we shouldn’t

While AI’s inarguable potential has seen it rapidly reshaping every area of business and government, its genesis in data science has left a blind spot around cybersecurity.

This means the integrity of critical transportation, health and other systems is being placed in the hands of data scientists for whom cybersecurity is generally of passing interest, or of no concern at all.

“Most of these applications are built by data scientists that very rarely have a traditional cyber background,” Gibian explained, “and therefore the risks and challenges of adversarial thinking aren’t drilled into them the way they are for cyber professionals.”

“As we build more digitally centric systems and a digitally centric society, we need to build that trust in at the same time,” said Mark Pedersen, chief technology officer with strategic advisory firm KJR, which has been working with Calypso AI to trial its models in Australian businesses and is fielding “daily” enquiries about AI.

“As people look at where AI fits within their business, they are asking questions about its implications,” he continued.

“We are increasingly being asked to provide this kind of assurance as people look at where AI fits within their business, and where the explainability isn’t there, it is a huge threat to AI’s uptake.”

Ultimately, Gibian said, the key is being aware of the risks – and not taking anything for granted.

“We’re talking about life-and-death situations,” he explained, “and those areas are where we really have to slow down, and say we’re not necessarily going to do that until these criteria are met.”

“The only way to ensure [compromise] doesn’t happen, is to be worried – because if we’re not always worried that’s when bad things are going to happen.”


Want to know what your organisation is up against when it comes to AI and cybersecurity risk? Contact KJR today for an initial discussion – 1300 854 063