The words “digital transformation” are on the lips of every person in technology and tech media, as well as many business leaders, from company CIOs and CTOs to business line managers to writers in news publications and tech blogs.
At its core, a digital transformation is the enablement of technologies and workplaces tuned to today’s digital economy. The beating heart of this digital economy is the API, now joined by emerging technologies such as the Internet of Things (IoT) and FinTech technologies such as Blockchain.
Today the transformation of processes, IT services, database schemas and storage is proceeding at an exponential rate, with Cloud, AI and Big Data currently taking center stage as new ways of working in the enterprise.
The glue binding the majority of developments in the technology world is the adoption, proliferation and acceptance of Open Source technologies. Community projects such as Hadoop, Apache Spark, MongoDB, Ubuntu and Hyperledger are some of the names that come up freely in any Open Source discussion.
The questions are: where do you run these workloads? How do you run them? And what form should they take?
Most major companies have mainframe systems at the core of their IT estate, so the real question is whether to run new workloads there or on other platforms.
Any building architect will tell you that before tearing down the walls of an old house or making any significant structural changes, you should always consult the original architectural plans. In the same way, any systems architect should look very closely at what a mainframe system is doing now before considering running workloads elsewhere.
However, it is imperative to understand, at a very high level, the difference between mainframe and midrange server technology:
- Mainframe systems were originally designed to scale vertically rather than horizontally; today they scale in both dimensions
- Input/output in mainframe systems is designed to move processing away from the central processors, with very fast I/O built into the core of the system at the hardware layer
- A centralized architecture is a key feature that allows mainframe systems to manage huge workloads extremely efficiently, catering for 100% utilization without any degradation of performance
- Resilience is built into every key component of a mainframe, with redundancy at the core
- At a transactional level, no other system comes anywhere near the volume of data a mainframe can process.
An argument against the mainframe is to decouple software systems onto commodity hardware or cloud platforms, but this tends to create server and cost sprawl, particularly if an important goal is to match the mainframe’s performance, security, transaction throughput, reliability, maintainability and flexibility.
But as we move further into the world of IoT, with databases and Big Data systems ingesting vast amounts of social media and transactional data, how are we scoping the growth and security of these systems? We are not, if we simply keep adding to existing IT infrastructure. We need to be able to scale access and throughput to manage, integrate and optimize the hottest commodity we have: data.
Data is becoming a currency in its own right, but we need to secure this new currency the way we secure traditional monetary systems today. And perhaps the best way to leverage this valuable asset is via APIs that allow enterprises to take advantage of the mainframe investments already made: enter LinuxONE, an Open Source-ready mainframe that helps create a familiar Open Source tooling stack on steroids.
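To make the API idea concrete, here is a minimal sketch of what consuming a mainframe-backed service through a REST API layer might look like. The host name, endpoint path, credentials and response fields are purely hypothetical assumptions for illustration; the point is the pattern of fronting existing core systems with modern APIs rather than any specific product interface.

```python
# Minimal sketch: consuming a mainframe-hosted service via a hypothetical REST API gateway.
import requests

API_BASE = "https://api.example-bank.com"   # hypothetical API gateway in front of the mainframe
API_KEY = "replace-with-your-key"           # credentials issued by the gateway, not the mainframe itself

def get_account_balance(account_id: str) -> float:
    """Fetch an account balance from a (hypothetical) mainframe-backed endpoint."""
    response = requests.get(
        f"{API_BASE}/accounts/{account_id}/balance",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=5,  # fail fast at the API layer; the back end does the heavy lifting
    )
    response.raise_for_status()
    return response.json()["balance"]

if __name__ == "__main__":
    print(get_account_balance("12345678"))
```

The consuming application never needs to know that the system of record sits on a mainframe; the API layer is where the existing investment is exposed to new digital channels.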
The argument here is that a digital transformation is more than just empty words and data thrown onto cloud servers; it is a state of mind, and an architecture that should encompass current and future systems in pursuit of overall business goals.
At the heart of this goal is the end-user consumer, something that every systems architect should be very mindful of; however, downtime and security are quite often understated when creating the initial framework for key infrastructure projects. These key elements must be baked into every project and sit at the very core of future technology initiatives, something that the Open Source-ready LinuxONE infrastructure delivers extremely well.