By Frank DeGiglio, Chief Architect for Cloud, IBM Systems Group
In the March 1991 issue of InfoWorld, Stewart Alsop wrote, "I predict that the last mainframe will be unplugged on March 15, 1996." Now, 25 years later, people are predicting that the cloud will replace the mainframe. These naysayers are just as wrong. Cloud does not mean the end of the mainframe, but the dawn of a new era of mainframe superiority.
Of course, when most people think of cloud, they picture Linux instances on x86 machines, but cloud is much more than that. Once people look at cloud as something more than Linux infrastructure served on x86 machines, they can see that the direction cloud is heading actually favors the mainframe.
Today the main business focus for cloud is on providing services that let businesses quickly create new user-centric applications, applications that take advantage of leading-edge technologies while still delivering traditional business functionality. Because of this trend, application programmers leverage Application Programming Interfaces (APIs) that allow developers to loosely couple different systems together.
These APIs are implemented as services on multiple platforms and are invoked using Representational State Transfer (REST). This web-based style is simple enough that people with minimal web programming skills can take advantage of it. It also allows a programmer to take advantage of a capability without having to match languages, hardware, or operating systems.
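As a sketch of how little the caller needs to know, the snippet below builds a REST request in Python. The host name, URL path, and account number are illustrative assumptions, not details from the article; the point is that invoking the service requires only ordinary HTTP, with no knowledge of the implementing platform.

```python
# Build (but do not send) a GET request for a hypothetical
# account-balance REST service. The endpoint below is an assumption
# for illustration; any HTTP client on any platform could call it.
import urllib.request

def build_balance_request(account_id: str) -> urllib.request.Request:
    """Request JSON from a hypothetical REST endpoint; the caller never
    sees whether the service behind it is COBOL, Java, or anything else."""
    url = f"https://host.example.com/api/accounts/{account_id}/balance"
    return urllib.request.Request(url, headers={"Accept": "application/json"})

req = build_balance_request("12345")
print(req.full_url)              # the only contract is a URL and a verb
print(req.get_header("Accept"))  # and an agreed data format
```

The caller's code would be identical whether the service ran on z/OS or on a Linux VM; that is the loose coupling the article describes.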
Combine this with XML and JSON, standard formats for moving data that hide the differences between systems, and you have a solution that allows programmers to take advantage of a system's capability without having to know anything about the system itself.
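To make the data side concrete, here is a minimal Python sketch (the field names are illustrative assumptions): a JSON payload is plain text, so the consumer parses it the same way no matter what system produced it.

```python
import json

# A response body as it might arrive over the wire; the caller cannot
# tell whether a z/OS service or a Linux one produced this text.
response_body = '{"accountId": "12345", "balance": 1043.27, "currency": "USD"}'

record = json.loads(response_body)    # parse platform-neutral text
print(record["balance"])              # prints 1043.27

reply = json.dumps({"status": "ok"})  # serialize a reply the same way
print(reply)
```

Because both sides agree only on the text format, neither needs to know the other's language, hardware, or operating system.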
Until now, cloud has been somewhat problematic for the mainframe. Everyone has been looking for everything to be the same, so our differences were a liability. As we move to APIs, our differences become a strength. A person calling an API doesn't care how it is implemented, as long as the service answers quickly, is always available and secure, and can be provisioned rapidly.
These things play to the strengths of our systems. The service can reside as a task in an address space and take advantage of z/OS capabilities like WLM and Sysplex. Services can exist in "legacy" languages like COBOL, Assembler, and Java because the consumer of the services never touches the code. In fact, the consumer may never know he or she is connecting to a mainframe.
This is not only good for the mainframe; it's valuable to the company. The company has business assets running on the mainframe. Rather than rewriting those assets on another platform, which takes time, introduces risk, and costs significant money, a business can leverage existing code and repurpose it as a service that can be consumed across the enterprise. Additionally, these assets can become a new source of revenue: as a company sells them, it becomes a service provider, turning IT from a cost center into a profit center.
The mainframe is perfect for the cloud applications of the future. It allows a business to take advantage of the assets it has been developing for the last 50 years. This is obviously an advantage for a company, so why hasn't it taken off? Why haven't companies embraced this technology? The key inhibitor to adoption is culture, not technology.
The distributed community does not accept the mainframe as leading edge. They can't believe that the mainframe can provide fast, efficient services. To be successful, mainframers have to create services that distributed programmers can consume easily. Businesses that have built mainframe services consumable by distributed programs have not only provided new capabilities, they have generated new interest in the mainframe. Companies whose distributed and mainframe programmers collaborate are finding new ways of solving problems by blending mainframe and distributed capabilities. The discussions are not about how one platform can do everything; they are about figuring out which platform best solves a particular problem.
The mainframe community needs to become more flexible to appeal to distributed people. Mainframers are viewed as people who constantly say no, as rigid and unyielding. Many times these perceptions are rooted in mainframers' adherence to processes that have always been focused on ensuring the high quality of the services they provide. While there is value in such a stance, the time has come to reexamine how we provide services to our consumers and to figure out how to move with the speed and agility the business demands. This is not to say mainframers should abandon their devotion to the traditional values of the mainframe and its processes; rather, they should temper them with the kinds of processes that enable distributed systems to meet their clients' needs quickly.
The mainframe has the technology to become a powerhouse in the cloud. We need to work together across the enterprise to tap that power. By changing the culture, and by showing both distributed and mainframe people that they can learn from each other's successes, we restore the prominence of the mainframe, centralized IT, and the power of business developers. The business needs to make the mainframe the business cloud service provider of the future. Cloud is not the end of the mainframe. It is the foundation of the clouds of the future.
To learn more, watch Frank’s full presentation, “The Mainframe Strikes Back! Reclaiming IT!” from this winter’s SHARE in San Antonio. You can find that presentation and others from recent SHARE events in the SHARE Live! content library.