Why uptime and performance are key to cloud security
Imagine you just bought a brand new car featuring all the latest bells and whistles. You're itching to take advantage of speed camera detection and try out the heated seat backs and armrests.
Unfortunately, when you try to turn your car on, nothing happens. If the vehicle doesn't work, what's the point of having it and all its exciting features? Ideally, you want something that provides the latest bells and whistles and performs as promised whenever you want.
This is equally true for cloud security solutions.
Over the past several weeks, the importance of uptime and performance for cloud security solutions has been on full display. During that time, my company received calls from other SASE vendors' customers wrestling with downtime in their existing security solutions.
Unfortunately, one competitor's product recently went down for 14 hours, while another suffered six outages in 15 days. Such events not only expose organisations to increased risk but can also disrupt normal operations and even grind business continuity to a halt.
Typically, such service outages find their origins in the underlying infrastructure on which a vendor's products are built. Most SASE providers have opted to create and maintain their own networks of private data centers to power their solutions.
However, this approach essentially amounts to an attempt to match the level of service offered by public cloud providers whose entire business is built around delivering it.
As we observe these outages, a common question arises: how has just one company maintained industry-leading uptime of 99.99% since 2014?
The company's platform is built in, and delivered through, the public cloud, meaning it can focus on driving innovation in its security technologies rather than managing a fleet of data centers.
The cloud already has virtually infinite redundancy, storage and compute power, so why try to reinvent it? True cloud security should be delivered from the cloud itself.
Polyscale architecture
There is a broad range of cloud security services on the market with varying functionality. Some operate inline for real-time security. Others operate out-of-band for visibility and control. In either case, the most crucial buying criteria are service uptime and performance.
Some cloud security services may be sold as network services with a fixed capacity priced at an annual fee per Gbps. Such pricing is suitable for network security services such as firewalls or secure web gateway proxies.
Other cloud security services, such as email security, data loss prevention (DLP) or cloud access security broker (CASB), may be sold at an annual fee per user. When there is a mismatch between the technology stack and the business model, uptime and performance are compromised.
Legacy security products are designed for single-tenant use at fixed throughput loads, for example, a 1 Gbps firewall or secure web gateway proxy.
When these products are offered as cloud services, vendors simply deploy the same devices in a data center and charge customers based on throughput. Pricing and architecture are aligned, and if a customer overloads the network, congestion naturally ensues.
The customer may elect to purchase additional capacity, and other customers are not affected. However, when the legacy architecture is used for services such as email security, DLP or CASB, uptime and performance suffer. Such services are licensed on a per-user basis, and the customer expects consistent performance and uptime regardless of the time of day, user mobility or usage trends.
For example, a customer with 10,000 users expects the same performance and uptime even if half of those users fly to a remote offsite meeting. In practice, such a temporary migration of users would overload the fixed-capacity data center serving that location, bringing it down for all of those users and potentially for every other customer that depends on it.
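To put rough numbers on that scenario, here is a quick back-of-the-envelope sketch in Python. The capacity figures are invented purely for illustration; they are not drawn from any particular vendor or customer.

```python
# Hypothetical illustration of the fixed-capacity failure mode described above.
# All capacity figures are invented for the example; they are not taken from any vendor.

POP_CAPACITY = 2_000        # concurrent users the remote data center was sized for
NORMAL_LOAD = 1_200         # users it typically serves
OFFSITE_ATTENDEES = 5_000   # half of the 10,000-user customer travels to the offsite

total_load = NORMAL_LOAD + OFFSITE_ATTENDEES
load_ratio = total_load / POP_CAPACITY

print(f"Load: {total_load} users ({load_ratio:.0%} of fixed capacity)")
# Load: 6200 users (310% of fixed capacity)
# With nothing elastic to absorb the spike, the data center degrades or fails
# for these users and for every other customer routed through it.
```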
Security services licensed on a per-user basis, such as email security, DLP and CASB, require a broad range of technology components: proxies, scanning nodes, Hadoop clusters, mail servers, databases, search indexes and so forth. Furthermore, such services must scan multiple applications and protocols simultaneously.
In a polyscale architecture, each component is stateless, multi-tenant and able to handle any application. When the load on a component rises, for instance exceeding 50% over a five-minute interval, the component clones itself. For example, when a large customer holds an offsite, the remote data center automatically grows to match that customer's load profile.
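As a rough illustration of what "clones itself" means in practice, here is a minimal Python sketch of such a clone-on-load rule. The 50% threshold and five-minute window come from the example above; the Component class and the autoscale helper are assumptions made up for this sketch, not any vendor's actual implementation.

```python
# Hypothetical sketch of a clone-on-load rule for a stateless, multi-tenant component.
# The Component interface and autoscale helper are illustrative assumptions only.
from dataclasses import dataclass, field
from statistics import mean

SCALE_THRESHOLD = 0.50   # clone when average load exceeds 50%
WINDOW_SAMPLES = 5       # e.g. five one-minute load samples

@dataclass
class Component:
    name: str
    load_window: list = field(default_factory=list)

    def record(self, load: float) -> None:
        """Append the latest load sample and keep only the most recent window."""
        self.load_window = (self.load_window + [load])[-WINDOW_SAMPLES:]

    def should_clone(self) -> bool:
        """Clone once the rolling average stays above the threshold for a full window."""
        return (len(self.load_window) == WINDOW_SAMPLES
                and mean(self.load_window) > SCALE_THRESHOLD)

def autoscale(component: Component, fleet: list) -> None:
    """Add an identical stateless clone so new work spreads across the fleet."""
    if component.should_clone():
        fleet.append(Component(name=f"{component.name}-clone-{len(fleet)}"))

# Example: an offsite drives up load on the remote data center's scanning nodes.
fleet = [Component("scanner-0")]
for sample in (0.62, 0.71, 0.68, 0.74, 0.80):   # five minutes above the threshold
    fleet[0].record(sample)
    autoscale(fleet[0], fleet)

print([c.name for c in fleet])   # ['scanner-0', 'scanner-0-clone-1']
```

Because every component is stateless and multi-tenant, a clone can immediately absorb work for any customer and any application, which is what lets capacity follow the load rather than the other way around.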
In contrast, vendors with legacy network architectures have struggled to deliver performance and uptime.