Visualising the value of reduced complexity in infrastructure
In IT, change is a constant. That change, however, is not without structure. Every five to ten years we see another paradigm shift, a new technology that changes the landscape in our datacenters significantly, adding features that were initially missing or solving problems that the previous revolution created. In the seventies, IBM's large channel-attached storage boxes set the scene for the first storage arrays (thank you IBM!). Subsequently, x86 computing in combination with the first commercially viable Fibre Channel storage made virtualization as we know it today a possibility. In this overview I'm sketching the road to the next paradigm shift, the one we find ourselves in today, whilst trying not to lose sight of the constants, which we find mostly on the business side of things.
Where we came from
IT history from a business perspective is a bit longer and more complex than the example below shows. But as a reference for roughly 80% of all workloads in the datacenter over the last two decades or so, the following picture is representative.
What I mean to visualize here is that any given company has a vision and corresponding goals it wants to reach. Usually these goals are not IT related; it's the services designed to reach these goals that require IT support, in the form of one or more applications. Often IT is part of the 'how is it done' and 'getting it done' pillars of the business model pyramid.
Let’s have a look by using an example: we’ll make up a company, of a made-up size, specialized in transporting goods. This transport company will need a planning service to make sure that each truck is filled as efficiently as possible for each route. This way the maximum use can be made of each driver and truck with the fewest miles and the least time required (which also reduces maintenance, wear and tear, petrol, etc.).
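As an aside, the core of such a planning service is essentially a bin-packing problem: fit a set of shipments into as few trucks as possible. A minimal sketch of one common heuristic, first-fit decreasing, could look like this (all names, weights and capacities here are made up for the fictional transport company):

```python
def plan_loads(shipments, truck_capacity):
    """Assign shipment weights to trucks using the first-fit
    decreasing heuristic: try heaviest shipments first, placing
    each in the first truck with room, opening a new truck if none fits."""
    trucks = []  # each truck is a list of shipment weights
    for weight in sorted(shipments, reverse=True):
        for truck in trucks:
            if sum(truck) + weight <= truck_capacity:
                truck.append(weight)
                break
        else:
            trucks.append([weight])  # no truck had room: dispatch another
    return trucks

# Hypothetical shipments (in tonnes) and a 10-tonne truck:
loads = plan_loads([4, 8, 1, 4, 2, 1], truck_capacity=10)
print(len(loads))  # trucks needed for this set of shipments
```

A real planning service would of course also weigh routes, drivers and time windows, but the principle of squeezing the most out of each truck is the same.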
The application (in blue) that does the planning is operated by a person with knowledge of the planning service. Here lies the closest link between IT and business, at the application level (it's a bit of a generalization, but bear with me).
What follows is a multi-layered stack of products (in orange), each with its own interface, licenses and required training. These products then need to be integrated with each other to ensure they form a consistent platform that is always available for the application. In this example the operational supporting pillar, Operational Management, represents services like backups and monitoring. Altogether, these are the essential components of the supporting stack for an application that underpins a core business service, like the planning software. Normally I might advise a second stack in a remote datacenter as well, but that is another story, for another time and for another fictional transport company.
Note that this orange stack does not add any direct value to the transport company in this example. That, and the fact that five people are needed to 'keep the lights on' while only one supports the application, is, I believe, the basis for the feeling lots of C-level execs and managers have that IT is a cost center. This needs to change; fortunately, human ingenuity has added some new technologies to the mix.
Where we are now
Over the last few years IT has taken some important cues from the consumer market. Complexity is more and more frowned upon; services need to be spun up when the business needs them, not when IT can deliver them. To facilitate these requirements, new areas of tech have been added to the stack.
With the addition of automation, all those mundane daily tasks that IT admins do can be repeated exactly the same way every time, reducing errors in the operation and thus increasing overall quality. Orchestration takes this one step further: we can make things happen based on system events, or across multiple parts of the stack. We can build parts of, or even an entire, business service with the click of a button. It does, however, add some overhead: IT teams specialized in this type of work need to be added, or existing admins trained. Whichever way you slice it, adding this type of value takes extra time, extra training and therefore extra manpower.
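To make the automation point concrete, here is a deliberately simple, hypothetical sketch of the kind of mundane daily check worth automating. Instead of an admin eyeballing filesystems by hand, a script performs the exact same check identically on every run (the paths and threshold are illustrative assumptions, not anyone's production values):

```python
import shutil

def check_disk_usage(paths, threshold=0.8):
    """Return the paths whose disk usage exceeds the given
    fraction of total capacity. Running this from a scheduler
    means the check is done the same way, every time."""
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)
        if usage.used / usage.total > threshold:
            alerts.append(path)
    return alerts

# e.g. run daily from cron or an orchestration engine:
print(check_disk_usage(["/"]))
```

Orchestration is then the layer that reacts to the outcome, for instance triggering a cleanup job or a ticket when a path shows up in the alert list, rather than waiting for a human to notice.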
Then the business also wants to add analytics over different datasets. Adding the ability to provide self-service might give us some extra headroom, but requires setup, structuring and maintenance. Ideally, we'd like to have some burst capacity to external cloud service providers, such as public cloud, as well. But that again opens a whole different can of worms. Is the increased value we can leverage from all this good stuff worth the investment? Should we not just add another stack of what we already know? While certainly an option, I don't believe that is necessary; there are better options.
The thing is, infrastructure over the last 20 years has evolved from individual x86 servers that require an OS and an application into complicated 3-tier stacks that need lots of extra software and services surrounding them to keep providing a consistent service for applications to land on. That complexity has spawned an entire industry to provide these services, even though they don't provide any direct value to end users. It was a necessary evil that some of us had to deal with, and others could benefit from.
If there is anything that public cloud vendors have shown customers, it is that the technology itself is not important; it's the service it provides that is either directly valuable or not. If you run applications on a hosted cloud, do you care what hardware it runs on? Which hypervisor? Of course not: you pay for the service, turn on the features you need and go about your business (which is certainly not IT in our example of the fictional transport company!).
Why can’t we have public cloud functionality on premises, I hear you ask? Well, it’s not that easy. Most of us, like the transport company in the example, have what many people call ‘legacy apps’: big monolithic buckets of code that need to be installed somewhere and reached by a client. The problem is that these ‘legacy apps’ are not legacy at all; I’d rather call them ‘classic applications’. In my experience 80% of all companies rely on them for their core services! It is true that a lot of new apps are being developed in a ‘cloud native’ way, but for most companies it will take many years before their core apps are ready to move fully to a public cloud. That is, provided it can be done securely and for a decent price at that point. Some of our apps may be ready now, but quite a few are not.
Most customers need both: on-premises and public cloud, and classic, cloud-ready and cloud-native applications alike. But how do we balance the effort it takes to keep the lights on with regard to our storage, servers, network and lord knows what else, if we also want to continue to innovate?
It would be good to have the orange stack somewhat simplified. This is where HCI in general, and Enterprise Cloud in particular, come in: they reduce the complexities of the necessary evil that is infrastructure, effectively turning it into a utility like power or water, lowering its TCO and making sure we have a platform that supports our business needs, both now and in the future.
Building a software layer like that on top of the infrastructure is something that public cloud providers do, and it's a big job. That's the reason only the likes of Amazon, Google, Microsoft and Facebook, to name a few, can do it themselves.
But landing all your applications in the public cloud might be expensive, and not as secure as you'd like, and that is assuming you can get a good user experience running your 'classic' applications in a public cloud at all. You can choose an 'on premises' version of a public cloud, but then you're immediately locked into that vendor.
The best way is to do it the other way around: to have your own commercial off-the-shelf product that supports both classic and cloud-native applications, wraps around the hardware you like, supports the hypervisor you like and supports the public cloud that you like. A cloud of choices. Ideally this product would have a slew of extra services integrated that you could turn on easily: file, block and object services, security services, disaster recovery, capacity management, automation, orchestration and container services. Because we all want a lower TCO with fewer operational issues and increased speed of innovation, whilst being able to shift focus to running the business instead of IT, enhancing business outcomes.
Thank you for reading. Enjoy reading some of the other blogs for more technical information, and if you have any questions, let us know. We’re here to help.