As SHARE turns 65 and the mainframe turns 56, it’s significant that Moore’s Law has been with us for 55 of those years, since 1965. Over the last five decades, the world of computing has come to rely on ever-denser, ever-faster technology to absorb the steadily growing amount of functionality packed into a given capacity and response-time budget. We also came to take for granted that every aspect of computing hardware followed this “law,” commonly taken to mean that the capacity and performance of computers would double every 18 to 24 months.
Of course, in its original form, Moore’s Law was much simpler. As Wikipedia tells us, Gordon Moore, who went on to co-found Intel, first observed that “the complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly, over the short term, this rate can be expected to continue, if not increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.”
But, as memory and storage capacity and CPU speed also seemed to follow this trajectory, they ended up lumped in with the assumed implications of this observation. Today, half a century later, we’ve seen, and come to take for granted, this regular doubling, which meant roughly a thousand-fold increase in capacity every decade. For example, consumer disk drive storage increased from kilobytes in the 1980s and megabytes in the 1990s to gigabytes in the aughts and terabytes now.
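The compounding behind that thousand-fold figure can be sketched in a few lines. This is a minimal illustration, not anything from the article itself; the function name is ours. Note that the thousand-fold-per-decade figure matches Moore’s original factor-of-two-per-year observation, while the popular 18-to-24-month restatement compounds more slowly.

```python
# Illustrative arithmetic: how a regular doubling compounds over a decade.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Total capacity multiplier after `years`, doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Moore's original observation: a factor of two per year.
print(round(growth_factor(10, 1.0)))  # 1024 -- the "thousand-fold per decade"

# The popular 18-to-24-month restatement compounds more slowly.
print(round(growth_factor(10, 1.5)))  # ~102
print(round(growth_factor(10, 2.0)))  # 32
```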
Two things in the world of computing did not typically follow this “law”, even from the beginning: network bandwidth (which has continued to increase at its own rate, as described by Nielsen’s Law and Edholm’s Law), and software efficiency (which may seem designed to soak up all the capacity gains of these other advances).
Well, according to various industry visionaries like Nvidia CEO Jensen Huang, Moore’s Law is now dead. That’s a problem for platforms that have relied on the law to compensate for inefficient software — rather ironic, considering Bill Gates’s famous observation that “the first rule of any technology used in business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.”
Indeed, the world of processor speeds seems to be stranded at just over 5 GHz, which was achieved well over a decade ago. And, while the growth curves for memory/storage and bandwidth haven’t flattened out just yet, there is no reason to assume they won’t.
What keeps this from being all doom and gloom, however, is that the solution to these problems actually predates the short-lived laws whose demise now calls for it: responsible, frugal computing, a legacy proven over decades on the IBM Z mainframe.
Yeah, legacy: “It works.” Right from the beginning, pioneers such as Rear Admiral Grace Hopper, Dr. Fred Brooks, and Gene Amdahl realized that computing limitations might be mitigated over time, but never sufficiently to permit wasteful design and implementation. This idea filtered into the culture of those who build and maintain COBOL and other “legacy” languages and systems.
Of course, that’s not to say that we can ignore the limits of capacity growth. Just the opposite: the traditional, responsible frugality that was essential in the early days, when there genuinely weren’t enough resources to go around, remains the best strategy for the return of such lean times. Efficient, quality code beats bloatware, and now that unlimited capacity expansion is no longer available to obscure that fact, more than half a century of responsible design is about to come home to roost in quality systems.
In other words, it’s a great time to be a mainframer!