Wild speculation and simultaneous head-scratching were the order of the day when a trio of computer glitches recently befell the NYSE, United Airlines and the Wall Street Journal.
The rickety position :
All three organisations have arguably some of the best IT staff in the world, yet the simple fact remains that neither computers nor humans are infallible. According to our own internal studies, almost 90 percent of downtime is caused by mundane technical issues rather than coordinated cyberattacks or natural disasters. In fact, some are saying that a network router failure was one of the causes of the NYSE outage. Today's IT leaders sit in the rickety position of second-guessing threats to their infrastructure while balancing expenditures to neutralise those threats, whether proactively or reactively. Billions are spent on cranking out software upgrades, next-rev hardware and computer infrastructure assets, yet any of these can inadvertently cause an organisation's IT services to grind to a halt. So one would hope somebody is also spending at least a portion of those R&D billions on mitigating risks and unexpected outcomes to avoid the ugliest word in the IT dictionary: downtime.
The question remains: if you can't prevent the technology failures that inevitably cause downtime, whatever their cause, where does that leave you? Quite honestly, it leaves you to minimise the impact while you pick up the pieces. The glaring lightbulb in the middle of the room is the ongoing, ever-present need for a disaster recovery (DR) plan and a business continuity plan, regardless of the size of your organisation. The fact is, if all your business eggs are sitting in the proverbial technology basket, not a day goes by that you aren't at risk of downtime or, worse, of losing data or systems permanently.
The cost to NYSE, United and WSJ :
It will take some time to determine the real cost and impact of the downtime to NYSE, United and WSJ, but it will certainly exceed the $100,000-per-hour figure industry analysts often cite for average organisations. Although business continuity is a key responsibility for IT, organisations have struggled with DR planning and budgets because traditional methods have been overly complicated or extremely costly to implement and manage.
Business continuity and disaster recovery planning can make all the difference :
The good news is that there are approaches and technologies today that can simplify disaster recovery, even making it as simple as clicking a button on screen, and can restore applications, servers and data in a few minutes instead of days. The choices are growing for IT leaders, and options such as DR as a service (DRaaS) are increasingly becoming a key consideration for CEOs and CIOs alike. By combining backup, DR, archive and cloud capabilities with support for recovery both on-premises and in hosted machines, DRaaS opens the door to flexible recovery techniques, whether on-demand or ready-to-run, that best align with organisational requirements.
Businesses can choose the protection and recovery method that balances cost against meeting internal or external recovery point and recovery time objectives. DRaaS also affords an efficient way to conduct live, repeatable DR testing, providing peace of mind and ensuring your staff is always trained and ready to go in case of a common technical glitch or a rarer natural disaster. Additionally, DRaaS can turn a network used for testing business continuity into a virtual workbench or sandbox for testing new apps or processes before putting them into production. This application test/dev capability speeds time to value and extends the benefit of DR beyond downtime avoidance. By rethinking their approach, companies of any size could transform disaster recovery from a response to technical glitches into a business opportunity.

About Kemal Balioglu :
Kemal Balioglu leads the product team for Quorum, building on more than 15 years of global leadership in the software industry at companies ranging from startup phase to Fortune 100, in domains including automation, security, energy management and data protection. He has extensive experience in building teams that deliver high-quality, easy-to-use products, and in driving technology integration both within and across companies. Prior to joining Quorum, Kemal was at Symantec, where he was responsible for deduplication and new backup technologies for NetBackup, including bringing to market Symantec's first deduplication appliance. Kemal previously led teams at Shavlik Technologies (now LANDesk) and Siemens in the U.S. and Europe, and holds an M.S. in EE from the Technical University of Karlsruhe (Germany).
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.