Why Cloud Will Not Kill the Mainframe

By: Megan Oster

More than a century ago, Mark Twain, upon reading a report of his own death that had been prematurely published, allegedly remarked that reports of his death were greatly exaggerated. The same might be said today of the mainframe.

As the popularity of cloud computing has increased, some “techies” have proclaimed that cloud will kill the mainframe within a few years. Some have even declared it dead already. But they are dead wrong.

According to Gartner, 92 of the 100 largest banks in the world use IBM z Systems mainframes. Gartner also reported that IBM recently attracted more than 1,300 software companies to develop their products for the mainframe. That matters, because software partners have an enormous impact on a platform’s viability.

While cloud offers many benefits, the mainframe is equally appealing, albeit for different reasons. And companies recognize this. Today, a notable 80 percent of the world’s corporate data is still managed on mainframes. That figure makes sense when you consider that the platform has always been a technological workhorse.

The mainframe’s performance remains impressive decades after its first appearance. For example, the Customer Information Control System (CICS) application server on the IBM mainframe processes 1.1 million transactions per second – roughly 18 times the volume of Google searches, which run at about 60,000 per second. Surprised? It is actually not that shocking, considering the evolution of the platform.

The mainframe has survived since its creation in 1964 in large part because rather than resisting technological advances, it has continually reinvented itself to embrace them. As a result, it is relevant not only for cloud computing, but for mobile computing and today’s most popular application server packages as well. Given the platform’s steadfast survival amid a deluge of competing technologies, cloud is a minor blip on the radar.

However, this does not mean the mainframe’s evolution has been smooth sailing. The most daunting challenge in recent years may be the sheer volume of data requests coming in from devices including PCs, tablets and smartphones. While the mainframe is well equipped to process vast amounts of data efficiently, those demands can be expensive: the more CPU cycles a mainframe consumes, the higher the cost climbs.

The complexity of the data that platforms must process today has also presented challenges. Complex applications typically involve continuous user interaction, and ensuring their performance is difficult. In a society where end users expect immediate response times, a faltering application can have serious business consequences.

One solution to this issue is developing a holistic view of the mainframe: users need to tend to all of the systems that feed into the mainframe to ease stress on the environment as a whole. IT must be vigilant in monitoring the system for inefficiencies while also adjusting for increasing data loads. When IT staff understand how to mitigate demands on the system, they can take the appropriate measures to resolve performance issues quickly, postpone costly hardware upgrades and accelerate time to market for new applications.

Ultimately, mainframes were built to last. They are flexible, secure and virtually bulletproof in terms of reliability. These characteristics are the primary reason they have remained relevant amid the rise of popular, capable competitors such as the cloud. Like Mark Twain after his premature obituary, the mainframe is still alive and kicking.

