Enterprise-level access control deployments have grown significantly in recent years, largely because advances in technology now deliver benefits well beyond the basic provision of access control to a building. As a result, alongside security managers responsible for the security of people, property and assets, a number of other stakeholders are likely to be involved in justifying and funding the installation of a bespoke access control system. These could include, but are not limited to, Health and Safety, operations and HR managers, writes Duncan Cooke.
A key word likely to be continuously on the lips of these stakeholders is ‘compliance’, as any failure to comply with Government regulations or local laws could have serious consequences for organisations that have a Duty of Care to the general public.
An inspector’s visit to a food processing plant, for example, could prove costly and may even result in temporary closure unless it can be verified that everyone working at the plant has undertaken appropriate training and has a valid hygiene certification. The same smart access control cards which facilitate staff access through an entrance to a building may also be configured – through integration of the various systems – to produce a report of all those whose hygiene certificates are due to be renewed.
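In essence, the kind of renewal report described above is a filter over cardholder records held by the integrated systems. A minimal sketch, with entirely hypothetical field names and sample data:

```python
from datetime import date, timedelta

# Hypothetical cardholder records; field names and dates are illustrative only.
staff = [
    {"name": "A. Example", "hygiene_cert_expires": date(2024, 3, 1)},
    {"name": "B. Sample", "hygiene_cert_expires": date(2025, 1, 15)},
]

def due_for_renewal(records, within_days=30, today=date(2024, 2, 15)):
    """Return the names of staff whose certificates expire within the window.

    A fixed 'today' is used here so the example is deterministic.
    """
    cutoff = today + timedelta(days=within_days)
    return [r["name"] for r in records if r["hygiene_cert_expires"] <= cutoff]

print(due_for_renewal(staff))  # only A. Example falls inside the 30-day window
```

In practice, the access control platform would supply these records from its own database; the point is simply that once the systems are integrated, compliance reporting becomes a routine query.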
In this and many other scenarios, the hardware and software elements of an access control system need to be working effectively 24/7/365. The weak link will most likely be the server upon which the various software applications are operating. Unfortunately, even a well-designed and maintained system is still vulnerable to downtime as server manufacturers cannot provide a 100% guarantee that there will not be a component failure at some point. Furthermore, we should never forget the potential impact of a determined cyber attack upon the associated software applications.
Knowing your options
Data back-ups and restores
Having back-up, data replication and failover procedures in place is the most basic approach to server availability. This will help to speed the restoration of an application and assist in preserving data following a server failure. However, if back-ups only occur daily, significant amounts of data may be lost. At best, this approach delivers approximately 99% availability.
That sounds pretty good, but consider that it equates to an average of 87.6 hours of downtime per year, or more than an hour and a half of unplanned downtime per week.
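The downtime figures for each availability tier follow from simple arithmetic. A minimal sketch of the calculation (illustrative Python, assuming an 8,760-hour year):

```python
# Downtime per year implied by a given availability percentage.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year at the given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    hours = downtime_hours(pct)
    print(f"{pct}% uptime -> {hours:.2f} hours/year ({hours * 60:.1f} minutes)")
```

Each extra "nine" of availability cuts the annual downtime by a factor of ten, from 87.6 hours at 99% down to just over five minutes at 99.999%.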
High availability
High availability includes both hardware-based and software-based approaches to reducing downtime. High availability clusters combine two or more servers running an identical configuration, using software to keep application data synchronised across all of them. When one server fails, another in the cluster takes over, ideally with little or no disruption.
However, high availability clusters can be complex to deploy and manage, and you will need to license software on all cluster servers, thereby increasing costs.
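The takeover behaviour at the heart of a cluster can be sketched as a heartbeat check: a standby node promotes itself once the active node has been silent for too long. The class and timeout below are illustrative assumptions, not any vendor's API:

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before failover (illustrative)

class StandbyNode:
    """Sketch of a standby cluster member watching the active server."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False

    def on_heartbeat(self):
        """Called each time the active server's heartbeat arrives."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        """Promote this node if the active server has gone silent."""
        if not self.active and time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active = True  # take over running the application
        return self.active
```

Note that in this model nothing happens until a failure has already been detected, which is precisely the limitation the article goes on to describe.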
High availability software, on the other hand, is designed to detect evolving problems proactively and prevent downtime. It uses predictive analytics to automatically identify, report on and handle faults before they cause an outage. The continuous monitoring that this software offers is an advantage over the cluster approach, which only responds after a failure has occurred.
Moreover, as a software-based solution, it runs on low-cost commodity hardware.
High availability generally provides from 99.9% to 99.99% uptime. On average, this means from 53 minutes (at 99.99%) to 8.8 hours (at 99.9%) of downtime per year, which is significantly better than basic back-up strategies.
Continuous availability
Continuous availability solutions are able to deliver 99.999% uptime. This is the equivalent of just over five minutes of downtime per year.
Supported by specialist continuous availability software, two servers are linked and continuously synchronised via a virtualisation platform that pairs protected virtual machines to create a single operating environment. If one physical machine fails, the application or software platform continues to run on the other without interruption. In-progress alarms and access control events, as well as data in memory and cache, are preserved.
Easy decision to make
Continuous availability means that no single point of failure can prevent a security software platform from running. Unlike high availability, back-up and clustering solutions, there's no failover or reboot required, and therefore minimal downtime.
In a business environment where non-compliance can have serious consequences, adding a continuous availability solution to support an existing or new access control system would seem to be one of the easiest decisions to make.
Duncan Cooke is Business Development Manager (UK and Europe) for Stratus Technologies