Remember when PlayStation’s entire online community got hacked for their credit card details? Thousands of gamers were forced to press pause and call the bank while their sack-faced characters were left dangling in LittleBigPlanet. Not fun. What about the Great Census Debacle of 2016? Or the chaotic Red Cross personal data breach? These infamous bungles have earned a place in the halls of history. And they all could’ve been avoided (or, at the very least, mitigated). Let’s learn from their mistakes by taking a closer look at three of the most prevalent forms of software screw-ups.
The external data breach
If Anonymous – who hacked PlayStation – targets you, you’re in real trouble. Because this group has the entire internet hive-mind to help them leap through obstacles. Not to mention the element of surprise. So, what can you realistically do to steel yourself against a legion of faceless troublemakers?
‘You have to look at it from a risk mitigation perspective,’ explains KJR’s Victorian General Manager David Poutakidis. ‘Let’s say it costs you one million dollars to get to 99% assurance. To get to 99.5% might cost you another million, but to get to 99.9% will probably cost something like 100 million.’ It’s at this point you must determine whether it’s more effective for your business to pay for near-complete assurance, or instead develop a watertight data breach back-up plan.
The internal data breach
Internal data breaches are where the biggest software debacles, screw-ups and messes get catalysed. Something like Edward Snowden’s whistleblowing fiasco would’ve been very difficult to see coming. But more often than not, internal breaches are caused by simple human error. Last year a contractor maintaining the Red Cross website accidentally punched in the wrong thing and published personal information on 550,000 donors – including their blood type and whether or not they had engaged in ‘at-risk sexual behaviour’ in the last 12 months. Ugh.
‘That’s down to a release process problem,’ says David. ‘If you verify your data release using a quality, automated process there’s nobody to accidentally push that database onto a server.’ It just goes to show, sometimes there’s no substitute for a worker that doesn’t require sleep, food or caffeine to concentrate.
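David’s point about automated release verification can be sketched in a few lines. The column names and threshold below are hypothetical, not drawn from the Red Cross incident, but the idea is the same: a release gate that refuses to publish a data export while it still contains sensitive fields, so no tired human can accidentally push it to a public server.

```python
# Hypothetical pre-release gate: block any data export that still
# contains sensitive columns before it can be published.
SENSITIVE_COLUMNS = {"blood_type", "at_risk_behaviour", "email", "phone"}

def release_gate(export_columns):
    """Return the sensitive columns found in the export; an empty set means safe to publish."""
    return SENSITIVE_COLUMNS & {c.lower() for c in export_columns}

# A de-identified export passes the gate...
safe_export = ["donor_id", "postcode", "donation_count"]
print(release_gate(safe_export))  # empty set: release proceeds

# ...but a raw donor table is stopped before it reaches the server.
risky_export = ["donor_id", "Blood_Type", "at_risk_behaviour"]
print(release_gate(risky_export))  # release blocked
```

Wired into a deployment pipeline, a check like this runs on every release, never gets tired, and fails loudly instead of publishing quietly.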
The failed IT transformation project
Perhaps the most common screw-up is rolling out an expensive new system before it can handle what it’s being asked to do. Which brings us to Census 2016.
‘Ideally, they would’ve tested to make sure the census site could handle 14 million people wanting to log on within the first twelve hours,’ says KJR New South Wales General Manager Adam Bird. ‘And you would’ve done some disaster recovery plans as well.’
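Adam’s numbers make for a quick back-of-envelope capacity check before anyone even writes a load test. The peak factor below is an assumption for illustration; only the 14 million users and twelve-hour window come from the quote above.

```python
# Rough capacity planning from Adam's figures: 14 million people
# logging on within the first twelve hours of launch.
expected_users = 14_000_000
window_hours = 12
peak_factor = 5  # assumption: traffic spikes to 5x the flat average (e.g. after dinner)

average_rps = expected_users / (window_hours * 3600)
peak_rps = average_rps * peak_factor

print(f"flat average: {average_rps:.0f} logins/sec")
print(f"plan (and load test) for peaks of roughly {peak_rps:.0f} logins/sec")
```

Even this crude sum tells you the system must survive hundreds of logins per second sustained, and multiples of that at peak, which is exactly the scenario a pre-launch load test and disaster recovery plan should rehearse.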
It sounds simple enough, but often the combination of misjudged complexity, tight deadlines and competing interests means standards slip and the digital transformation fails. ‘Changes often get rushed through at the eleventh hour to meet a business stakeholder’s expectations, but they don’t get tested through the general process,’ says David. Taking a step back and being willing to accept a little extra cost (and patience) can make all the difference, despite the pressure to go live as soon as possible.
Evidently, there’s no such thing as 100% assurance. Instead, it all comes down to having a realistic disaster back-up plan and learning to trust in those who know how to expect the unexpected (aka the team at KJR). While a solid risk mitigation strategy can’t protect you from every kind of screw-up, it will go a long way towards avoiding large-scale disaster and keeping you out of the headlines.