Mission-critical computing has historically been defined as secure, reliable and scalable computing and process environments that support a company’s front-office processes and operations. These are the processes and operations that directly support an organization’s end users and customers. The operations are mission-critical because they are core to the company’s mission and, if they fail, they can cause significant financial or reputational damage to the organization. In some cases, as with certain critical infrastructure, government and military systems, if they go down, they may also have an impact on national security. Mission-critical systems generally require high transaction volume capabilities, such as those within banking or retail systems, border security, airline reservations or logistics.

Today, the scope of what is considered “mission-critical” within the enterprise has expanded considerably. A number of key factors have shaped this expanded scope. To understand these factors, it’s worth noting that mission-critical computing has evolved through three distinct eras of change: the pre-Web era (before 1995), the Web era (1995 to 2010), and now the consumerization era (2010 and beyond). I’ve defined these three eras specifically in the context of mission-critical computing, and for the purpose of this discussion, since each one has ushered in new forces of change that have added to those preceding it.

In the pre-Web era, mission-critical systems were typically transactional applications, such as airline reservation systems, and vital enterprise-level applications, such as ERP. These applications were utilized by a limited number of end users, typically employees, and were accessed via PCs and terminals in physically secure locations. The servers running these applications were highly secured “silos” in that they were running on dedicated, proprietary systems within the data center.

In the Web era, the scope of mission-critical computing was expanded to include web applications and electronic commerce. These mission-critical applications were opened up to a far larger number of end users, customers and citizens, with increasing expectations for 24×7 access and reliability. At the same time, business processes became increasingly delivered and conducted via the Internet, thus deepening our global reliance on cyberspace.

One final factor that’s redefining mission-critical is that expectations for mission-critical service levels have increased to the point where there is little tolerance for downtime of any nature. As an example, when Amazon Web Services suffered an outage that “took down Vine, Instagram, and others with it”, it was estimated that “the company could have lost as much as $1,100 in net sales per second”. In addition, Google’s five-minute outage a week before is said to “have cost the search giant more than $545,000”. Of course these are extreme examples, but they serve to show the high price of downtime, in addition to the reputational concerns that organizations need to consider.

In maintaining service levels, another costly area is remediation after a successful data breach. According to the Ponemon Institute, the average cost of remediating a successful data breach has generally risen over the last several years and now stands at $5.40M per breach. I should note that the figure has come down slightly in recent years as companies implement better procedures to deal with data breaches, but the cost is still highly significant, on top of the reputational issues involved. This impacts the cost of maintaining mission-critical service levels because organizations now have to provide higher levels of overall cybersecurity, and corresponding event monitoring, across a broader number of applications.
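The outage figures quoted above can be put on a common footing with some back-of-the-envelope arithmetic. The sketch below uses only the numbers reported in the text; the only duration given there is Google’s five minutes, so the 30-minute AWS outage length in the usage example is a hypothetical assumption of mine, not a reported fact.

```python
# Back-of-the-envelope downtime cost arithmetic from the figures quoted above.
# Source figures: "more than $545,000" for Google's five-minute outage, and
# "as much as $1,100 in net sales per second" for the AWS outage.

GOOGLE_OUTAGE_COST = 545_000       # dollars, reported total
GOOGLE_OUTAGE_SECONDS = 5 * 60     # five-minute outage, reported duration

AWS_LOSS_PER_SECOND = 1_100        # dollars per second, reported rate

# Implied per-second loss rate for Google's outage.
google_per_second = GOOGLE_OUTAGE_COST / GOOGLE_OUTAGE_SECONDS

def aws_loss(seconds: int) -> int:
    """Estimated net-sales loss for an AWS outage of the given (assumed) duration."""
    return AWS_LOSS_PER_SECOND * seconds

print(f"Google: ~${google_per_second:,.0f} lost per second")
# Hypothetical duration for illustration only; the article does not state one.
print(f"AWS, assumed 30-minute outage: ~${aws_loss(30 * 60):,}")
```

Even at these rough rates, a single hour of downtime runs well into seven figures, which is the point of the examples above.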