Think big. It’s a guiding principle in business and, in the Internet-age of cloud computing and globalization, it’s also a good piece of advice for technology systems architects and C-suite directors overseeing the development of their IT stacks.
Among the core goals of any IT department is the need to create a clear channel that enables applications, data repositories, IT services and all manner of internal system mechanics and functions to scale. But scalability itself doesn’t come in a bottle as some packaged, pre-defined IT solution that any organization can simply purchase, so what does it really mean and how do we achieve it?
Scalability itself is a term applied to an application, an IT system process, an algorithmic model, a portion of data (or entire database), a technology component (such as an Application Programming Interface (API), a system agent or an orchestrator function of some kind), a software engine (such as an AI machine, or a big data analytics service), or in fact a productized piece of technology not necessarily defined as an application or service.
Scale, why worry?
Most of us (users) have gained some appreciation of scale issues when trying to use a website for some service or requirement that has ‘gone viral’ and crashed. Whether it be ordering Covid-19 tests, trying to get concert tickets or attempting to access our home gas supplier’s website during the current energy crisis, we’ve all seen systems go down.
Sometimes (in the above example) it’s a question of web services, sometimes it’s a question of the cloud backbone’s ability to connect users… and sometimes it’s something more infrastructural, such as a database overload, or an Operating System (OS) mismatch if an application has been rescaled over to a new system, but not thoroughly stress tested.
“Most people think of scalability as the ability for a system to increase capacity to meet sudden increases in demand. However, scalability is about much more than this one aspect,” explained Nelson Petracek, global CTO at enterprise data platform company Tibco.
“From a technical standpoint, scalability involves having systems (hardware or software) operate at the ‘right’ size for current or expected contexts, without pause… and without the need for sudden refactoring or rearchitecting activity, which can introduce additional risks or complexity. Scalable systems can meet the demands being asked of them, at any point in time… and they can do so in a transparent fashion. But as important as the technical elements are when it comes to scalability, so are the non-technical elements, including people and processes. Both sides must co-exist to achieve maximum value in the enterprise,” added Tibco’s Petracek.
As we scale up our applications, data and infrastructure, our approach to security will have to scale with them. As an IT infrastructure grows to support more applications and transactions, the security tooling and support around it should grow too. This raises some tough questions: can we protect our traditional on-premises technology as well as any cloud-based solutions, and can we cover every aspect of potentially different cloud models and vendors?
“Being scalable means being secure about whether or not your cyber security strategy is flexible and can cope with change,” said Paul Baird, chief technical security officer for the UK, at cloud-based IT, security and compliance solutions company Qualys. “This means looking at several areas – for instance, can your developers make use of pre-built and approved security capabilities within the organization? Rather than developing new methods each time, providing an authentication framework that R&D can plug into, can actually help get secure software out faster. This can help them remove common security issues in their code, such as broken access control measures, code injection attempts or insecure designs.”
Additionally, says Baird, scanning applications for security issues before they are deployed into production sounds simple, but doing this in a DevOps world requires huge speed and scale. Ensuring secure code is deployed into these hyper-scaling environments right from the start saves time and money and removes the risk of potential downtime.
“Having a cyber security strategy that is aligned to the business helps predict these potential challenges. Staying away from siloed tooling will help keep the view of your organization’s cybersecurity risk posture clear,” added Qualys’ Baird.
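The ‘plug into an authentication framework’ idea Baird describes could look something like the sketch below: one audited, security-team-owned check that every product team reuses instead of rolling its own access control. All of the names here (verify_token, require_auth, the token store) are invented for illustration; they are not a real framework or Qualys product.

```python
# Hypothetical sketch of a shared, pre-approved authentication capability.
# Everything here is illustrative, not a real security framework.

import functools

# Stand-in for a central identity service owned by the security team.
APPROVED_TOKENS = {"token-123": "alice"}

def verify_token(token):
    """One audited implementation of the token check, reused everywhere."""
    return APPROVED_TOKENS.get(token)

def require_auth(handler):
    """Decorator each team plugs into instead of writing its own check."""
    @functools.wraps(handler)
    def wrapper(request):
        user = verify_token(request.get("token"))
        if user is None:
            return {"status": 401, "body": "unauthorized"}
        return handler({**request, "user": user})
    return wrapper

@require_auth
def get_report(request):
    # Business logic never sees unauthenticated traffic.
    return {"status": 200, "body": f"report for {request['user']}"}

print(get_report({"token": "token-123"}))   # authorized request succeeds
print(get_report({"token": "bad-token"}))   # broken access attempt is rejected
```

The point of the pattern is that the common failure modes Baird lists, such as broken access control, are fixed once in the shared capability rather than rediscovered in every codebase.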
In purist software programming terms, we can say that scalability is a measure of the ability of an application, network, algorithm, protocol or wider software system to move upwards, serving a wider remit of functions and more users, and to take advantage of its enlarged status and form. Let’s also remember that in mathematics, scalability mostly refers to closure under scalar multiplication.
Scalability can of course move in the other direction and move downwards. This is important in the context of the ‘only pay for what you need’ world of cloud computing. But even so, we generally talk about upward scalability. In the world of storage, open source software platform company SUSE reminds us that there is both vertical (scale-up) scalability and horizontal (scale-out) scalability.
“Vertical (scale-up) scalability increases the capacity of hardware or software by adding resources to a physical system, such as adding processing power to a server to make it faster. For scale-up storage, this means adding more devices, such as disk drives, to an existing system when more capacity is required,” notes the company, on its definition pages.
As additional clarification, SUSE also explains that horizontal (scale-out) scalability connects multiple items in order to work as a single logical unit. For scale-out storage, this means adding devices in connected arrays or clusters. Each cluster can have many nodes (devices), and nodes can be separated geographically.
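The scale-up versus scale-out distinction SUSE draws can be sketched as a toy capacity model: vertical scaling makes an existing node bigger, while horizontal scaling adds more nodes to the cluster. The numbers and class names below are invented purely for illustration.

```python
# Toy model contrasting vertical (scale-up) and horizontal (scale-out)
# scalability. All figures are illustrative, not benchmarks.

from dataclasses import dataclass, field

@dataclass
class Node:
    cpu_cores: int = 8  # capacity of a single machine

@dataclass
class Cluster:
    nodes: list = field(default_factory=lambda: [Node()])

    def scale_up(self, extra_cores: int) -> None:
        """Vertical scaling: add resources to an existing physical node."""
        self.nodes[0].cpu_cores += extra_cores

    def scale_out(self, new_nodes: int) -> None:
        """Horizontal scaling: add more nodes that act as one logical unit."""
        self.nodes.extend(Node() for _ in range(new_nodes))

    @property
    def total_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

cluster = Cluster()
cluster.scale_up(8)    # one bigger box: the first node now has 16 cores
cluster.scale_out(3)   # three more 8-core nodes join the cluster
print(cluster.total_cores)  # 16 + 3 * 8 = 40
```

In practice the trade-off is that scale-up hits a hardware ceiling on a single box, while scale-out adds coordination overhead across nodes, which is why storage clusters of the kind SUSE describes favour the horizontal approach.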
Scalability in a real-time world
With so many of our software systems now required to work in real-time and support users with near-instantaneous interactivity, we should also think about scalability in the context of software services that need to be highly available, often under extreme environmental conditions.
“Scalability is intimately related to high availability (HA), by which I mean availability in the sense of system uptime so that users can get access to the applications and services they need,” argues Eric Raab, CTO of KX, a company known for its real-time data analytics and decision intelligence technologies.
“Organizations are increasingly demanding that their IT systems remain available under all conditions, including network outages, vendor failures, natural disasters – and sudden, increased load. By architecturally approaching scalability from a high-availability perspective, a system can be designed to ‘scale through’ failure modes and deliver reliable performance under variable load. This approach, which I call the ‘HA scalability model,’ provides a consistent design pattern that satisfies stringent corporate performance objectives,” added Raab.
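One very rough way to picture Raab’s idea of ‘scaling through’ failure modes is a router that spreads requests across replicas and silently fails over when one is down. This is a minimal sketch under invented assumptions (replica names, a simulated outage), not an implementation of KX’s technology or of Raab’s HA scalability model itself.

```python
# Minimal sketch of "scaling through" a failure: route across replicas,
# skipping any that are down, so users see uptime rather than an outage.
# Replica names and the failure simulation are invented for illustration.

import itertools

class Replica:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def handle(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

class FailoverRouter:
    """Round-robin over replicas, failing over past unhealthy ones."""
    def __init__(self, replicas):
        self.replicas = replicas
        self._cycle = itertools.cycle(replicas)

    def route(self, request):
        for _ in range(len(self.replicas)):
            replica = next(self._cycle)
            try:
                return replica.handle(request)
            except ConnectionError:
                continue  # try the next replica; the caller never notices
        raise RuntimeError("all replicas down")

# One region is out, yet the request is still served.
router = FailoverRouter([Replica("eu-1", healthy=False), Replica("us-1")])
print(router.route("GET /orders"))  # us-1 served GET /orders
```

The same mechanism that absorbs a failed replica also absorbs a load spike: adding replicas to the pool increases capacity without changing the design, which is the consistency of pattern the quote is pointing at.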
It is in the greater-than-linear growth zone that typifies systems under pressure that scalability becomes essential: software systems must be able to handle rapid growth without running into bottlenecks.
“When system bottlenecks occur, organizations are typically driven towards various re-architecting procedures, which of course can have far-reaching implications if experienced at critical moments operating mission-critical applications and services,” said Rob Tribe, vice president of systems engineering for EMEA region at Nutanix.
“Approaching cloud computing with scalability in this context at front of mind should drive enterprises to look for platforms and tools that are capable of offering hybrid cloud in the most usable, digestible, manageable and essentially scalable way possible. Characteristics like simpler packaging, metering and pricing models, when combined with automation to provide self-healing and disaster recovery (DR) features can be key here; these are the ingredients to enable a business to respond to unpredictable workload expansions and frequent technology updates more competently,” added Nutanix’s Tribe.
IT & business scalability
All said and done so far, then, surely cloud is the answer, i.e. it enables us to work at what many people call webscale level, the suggestion being that the network we tap into via cloud services is as potentially expansive as the web itself. But shouldn’t that notion dovetail with business strategy as well?
The answer is obviously yes.
“The industry has pretty much cracked scaling the IT stack. But the next question is: how do you scale your people and specifically your developers? That’s the scalability challenge most organisations are facing. No code and low code technologies are starting to provide answers, helping developers become more productive and empowering more people in IT to build apps. Given the global shortage of developers, the solution to scaling developer resource can’t just be more coders. Instead, organisations should look to enable techies that can’t code to build solutions that solve business problems,” said Mat Rule, CEO of Toca, a company known for its no-code enterprise development platform, which aims to accelerate development outcomes and free developers to focus on mission-critical work.
A scalability-centric IT stack operating model can only stem from and be governed by a scalability-centric enterprise operating model where all corporate assets (technology, physical buildings, people, raw materials and business culture) exist as a ready-to-scale element of the total commercial proposition that a business seeks to build.
“We often think about scale around access (more users) or transactions, but a massive point of scale is data. As opposed to the other aspects of scale, data is typically not ephemeral. That means that as more users engage, transactions are created, more data is created. That growth in data can come with a cost – lack of flexibility, increased attack surface area, dark data that exists within silos. In the cloud era, data systems need to be flexible in order to scale, allowing for seamless moving in a multi-cloud world, dynamic rebalancing based on workload shifts, and doing so with the principles of zero trust security applied to the data access itself,” said Matt Waxman, chief solutions officer at Cohesity.
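The ‘dynamic rebalancing’ Waxman mentions can be illustrated with a deliberately naive shard-placement function: when nodes are added, data spreads across the larger pool. This modulo scheme is a teaching sketch only; real systems typically use consistent hashing to limit how much data actually moves, and every name and number here is invented.

```python
# Naive sketch of rebalancing data shards as capacity is added.
# Uses simple modulo placement for clarity; production systems favour
# consistent hashing so that scaling out moves far less data.

def place_shards(shard_ids, node_count):
    """Map each shard to a node; placement shifts when nodes are added."""
    placement = {}
    for shard in shard_ids:
        placement.setdefault(shard % node_count, []).append(shard)
    return placement

shards = list(range(8))
before = place_shards(shards, 2)  # two nodes hold four shards each
after = place_shards(shards, 4)   # scale out: the same data spreads over four
print(before)  # {0: [0, 2, 4, 6], 1: [1, 3, 5, 7]}
print(after)   # {0: [0, 4], 1: [1, 5], 2: [2, 6], 3: [3, 7]}
```

The sketch also shows why data is the hard part of scale: unlike stateless request traffic, those shards persist and must physically move (securely) whenever the topology changes.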
Scalability’s next mission will be sustainable scalability, a term and an issue that have not been widely discussed thus far. If we are scaling technology, then our responsibility should now arguably be focused on how much growth we can create from reusable technology resources as we champion the circular economy – and that goes for reused virtualized software resources as well as physical hardware components.
Scale is good, but green, clean, carbon-neutral scale is so much better; let’s clean up before we grow up.