Existing infrastructure was designed to cope with relatively predictable and slow-growing demand. But today, customers, partners, and employees are demanding new services, delivered faster, on an ever-greater diversity of devices. The Internet-of-Things ratchets workloads up another notch.
Data is often too large to store and analyze effectively with traditional techniques, and much of it is unstructured. Data that's collected but never used can even carry a negative ROI.
Unsurprisingly, this mismatch can lead to problems. The applications needed to meet a new level of customer expectations take too long to bring to market. (What do you mean you don’t have an app for that yet!?) It’s hard to manage the proliferation of virtual machines and storage.
Scaling up may no longer work. You just can’t scale up to be big enough. Not for systems and not for storage. Instead, scaling out using software on volume hardware is becoming a necessity. In other words, IT needs to fundamentally change its approach to infrastructure.
That’s where cloud comes in. When public cloud providers got going, they set a new benchmark for internal IT departments. Spinning up a public cloud instance in minutes made taking multiple weeks to get the same resource from IT… Well, unattractive is one way to put it. These expectations are further fueled by the experiences we all have as consumers. We expect applications to be attractive and interactive, accessible from all of our devices anytime and anywhere, and with new features and capabilities added on Internet time, not enterprise software time.
A wide range of organizations use public clouds. But most also want to maintain systems under their direct control. Doing so may give them greater visibility and control. Certain workloads and data storage may be cheaper on-premises. The ability to customize and co-locate can also simplify integration with existing applications and data stores. And concerns about compliance and governance, especially for mission-critical production applications or those that touch sensitive customer data, always need to be taken into account.
For these reasons and others, hybrid approaches are becoming the norm. A 2017 Forrester study commissioned by Red Hat, "Hybrid cloud: An obvious reality or a conservative strategy?", states: "Firms must evaluate the goals of migration efforts in the larger context of their digital strategy and examine the best environment for every individual workload. Not every application is suited for movement to the public cloud, nor should they all remain private. An informed migration strategy takes these considerations into account."
The industry’s thinking around hybrid clouds has also become more nuanced. Hybrid originally focused most on rapid, even real-time, shifts of workloads from a private cloud to a public cloud (aka “cloudbursting”). Today’s hybrid encompasses workload and data portability or diversity more broadly. This, in turn, leads us to topics like containers, management, and the best way to deploy private clouds within a broader set of IT infrastructure and services.
Private cloud implementations often take place alongside IT optimization projects such as creating standard operating environments (SOEs), tuning and modernizing existing virtualization footprints, and improving management and integration across heterogeneous infrastructures. They take place within the context of new application architecture, development process, and integration efforts. However, for the purpose of this series of posts, I'm going to concentrate on three primary aspects of private cloud, mostly through the lens of operations.
I'll discuss each of these three areas in future posts. Together, these technologies provide the basis for a private cloud that improves an organization's ability to quickly meet the requirements of both internal and external users.