Big data and big iron.
With IBM’s investments in both — its Vivisimo acquisition, for example, brought Hadoop to the mainframe, making it easier to analyze petabytes of data — the two are increasingly interdependent.
And industry-wide spending and investment in big data, and in the infrastructure to support it, are only expected to increase.
In January, Forrester predicted that the smart computing software market, propelled by data analytics and business intelligence applications, will generate $41 billion in spending this year and $48 billion in 2014. Gartner predicted that spending on data center systems will rise from $141 billion in 2012 to $154 billion in 2014.
Even further evidence: In December, billionaire hedge fund manager Paul Singer, of Elliott Management fame, made an unsolicited $2.3 billion bid for mainframe provider Compuware. The bet: That revenues can be eked out of the mainframe business for the foreseeable future (no doubt driven by the demands of big data and cloud computing).
What all this interest and investment points to is an oft-heard refrain, particularly at the beginning of any major “movement” in IT and business: A skills shortage.
Data scientists have emerged as a new breed of data analyst, uniquely equipped through education, experience and outlook to wrangle big data. These are computer scientists and physicists, statisticians and mathematicians who not only can write code and develop new technical tools, but who have the curiosity and out-of-the-box thinking to ask the right questions of the data, and to communicate the answers effectively to executives. They are, indeed, a rare breed.
And mainframe engineers? This too is a rare breed, if only for the fact that the majority of today’s mainframers are nearing retirement age. In a recent USA Today article, Compuware CEO Bob Paul estimated that as many as 40 percent of the world’s mainframe programmers would be retiring in the near future. He pointed out that it’s no small feat to interest young Millennials in learning a technology many of them have never even heard of: “It is not as sexy as developing new mobile apps,” he said. “But if you want a secure and highly valued career, this is a great place to go.”
There is, clearly, a long and short view with regard to talent shortages. The long view goes something like this: a trend emerges, skills shortages are identified and organizations begin to react by implementing training programs. These programs are largely undertaken by those organizations that have deep resources and deep pockets: colleges, universities and large enterprises (which often fund programs at colleges and universities).
The long view has one distinct drawback: Time. There are courses of study being taught today to train both data scientists and mainframe technologists. But a student entering in the fall of 2013 won’t graduate with a bachelor’s degree until 2017, or with a master’s until 2015. And even then, there’s no guarantee that they will be able to make an immediate impact in today’s IT environment. Business — and technology — evolves in nanoseconds by comparison.
For now, that leaves companies with short-term prospects. And the enduring question: What to do?
In his Datacenter Dynamics blog, Steve Totman, data integration business unit executive at Syncsort, advises organizations to bring additional skills into their data warehouse/ETL teams, enabling them to build collaborative data science teams.
“Organizations need people who are not just technically proficient, but are also visionaries – people who will be able to ask the right questions of the data to come up with analytical insights that really add business value. In this respect the scientific community could offer many of the necessary skills. Scientists already deal with vast amounts of data and, importantly, are coming up with the right questions to query against that information.
“It’s equally important to give the IT team the tools to simply and efficiently move data into Hadoop, (e.g. mainframe sources which can be particularly tricky), cope with the steady stream of complex data from disparate data sources including relational, non-relational, cloud-based and SaaS, and emergent, less structured data types – speeding the time and reducing the resources required to collect and query against it.”
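To make the “tricky” part of mainframe sources concrete, here is a minimal Python sketch of one common hurdle: mainframe files are typically EBCDIC-encoded, fixed-width records described by COBOL copybooks, not the delimited UTF-8 text Hadoop tools expect. The record layout and field names below are hypothetical, chosen only for illustration; real jobs work from actual copybooks and often include packed-decimal fields that plain character decoding cannot handle.

```python
import codecs

# Hypothetical 20-byte record layout:
#   CUST-ID PIC X(6), NAME PIC X(10), AMOUNT PIC 9(4)
# Simulate a mainframe record by encoding to EBCDIC (code page 037).
record = "AB1234JOHN SMITH0042".encode("cp037")

def decode_record(raw: bytes) -> dict:
    """Decode one fixed-width EBCDIC record into UTF-8-friendly fields."""
    text = codecs.decode(raw, "cp037")
    return {
        "cust_id": text[0:6],
        "name": text[6:16].rstrip(),   # strip fixed-width padding
        "amount": int(text[16:20]),    # zoned numeric, unsigned
    }

row = decode_record(record)
# A delimited line like this can then be landed in HDFS and queried with Hive.
print("\t".join([row["cust_id"], row["name"], str(row["amount"])]))
```

This is only the simplest case; commercial ETL tools of the kind Totman describes exist precisely because variable-length records, COMP-3 packed decimals and redefined fields make hand-rolled decoding fragile.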
Totman also suggests that rather than outsourcing big data projects, businesses bring talent in house — and develop in-house talent, a sentiment mirrored by Vinnie Mirchandani. In his Enterprise Irregulars blog, Mirchandani offers this advice for talent sourcing, particularly skills in the so-called SMAC stack — social, mobile, analytics and cloud:
Most outsourcers do not have the SMAC talent themselves. When it comes to analytics, sure they can show they have had some (SAP) HANA training, but how many of their staff have even rudimentary pattern recognition skills? (Editor’s note: HANA is an in-memory database technology designed to be SAP’s solution to Big Data challenges at the lower end of the Big Data scale.) What really makes them cloud qualified? We need more disciplined sourcing – buy qualified teams, not the brand name. Even better, buy smaller qualified teams and more IP and automation to do the job. If companies feel they need to outsource the newer skillsets, they should at least be looking at bringing some of the more mature skillsets in-house.
For those organizations taking the longer view, IBM is making serious investments in mainframe skills development, an effort it began nearly a decade ago as it fretted over whether retiring mainframe engineers would be replaced. According to a New York Times article, more than 1,000 schools in 67 countries participate in IBM’s academic initiative for mainframe education. SHARE, a long-time IBM partner, is also taking the lead, having featured more than 500 technical sessions on a wide range of mainframe topics at SHARE in San Francisco this past February 3-8.
The best bet for the long term: Find those students and attract them, quickly.