New research reveals just how much, and how, banks and financial market organizations are using Big Data.
Despite, or perhaps because of, the fact that financial services organizations are highly regulated, with specific mandates on how much data to keep and how long to store it (a lot, for a long time), they turn out to be among the early beneficiaries of Big Data.
A recent report, Analytics: the real world use of Big Data in financial services, by the IBM Institute for Business Value, in conjunction with the Saïd Business School at the University of Oxford, found that Big Data is "especially promising and differentiating" for financial services organizations. The report boils its findings down to an advantage over marketplace peers:
With no physical products to manufacture, data – the source of information – is one of arguably their most important assets. The business of banking and financial management is rife with transactions, conducting hundreds of millions daily, each adding another row to the industry’s immense and growing ocean of data. So the question for many of these firms remains how to harvest and leverage this information to gain a competitive advantage?
IBM found that 71 percent of banking and financial market firms report gaining a competitive advantage from the use of information, which, in the study's parlance, includes Big Data. That's a massive increase – 97 percent – from just two years ago, when only 36 percent of respondents reported a competitive advantage.
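The 97 percent figure is the relative growth between the two survey waves (36 percent rising to 71 percent), which a quick calculation confirms:

```python
# Share of banking/financial market firms reporting a competitive
# advantage from information, per the IBM/Oxford survey figures above.
earlier = 36.0  # percent of respondents, two years prior
latest = 71.0   # percent of respondents, current survey

# Relative (not absolute) increase between the two waves.
relative_increase = (latest - earlier) / earlier * 100
print(f"Relative increase: {relative_increase:.0f}%")  # prints "Relative increase: 97%"
```

The 35-point absolute jump works out to roughly a 97 percent relative increase, which is the number the report cites.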
Among the mainframe’s biggest users, financial organizations are gaining a significant competitive advantage with Big Data. They are, in other words, the beneficiaries of the mainframe’s biggest selling points: the ability to handle massive amounts of data from a variety of sources, lightning-fast transaction processing, and a reliable and secure platform that translates to high systems availability.
What’s interesting, and highlighted in the recent research by IBM, as well as in a separate report by NewVantage Partners, is exactly how much – and how – these financial services organizations are using Big Data.
Financial services and healthcare companies in the US are among the leaders in Big Data adoption—with financial services particularly ahead in terms of usage—and their data strategies are indicators of trends across the verticals. In a July 2013 survey from data and analytics consulting company NewVantage Partners, 68% of US executives from these industries expected to spend more than $1 million on Big Data initiatives this year, while 19% said they would spend more than $10 million. And by 2016, half of execs expected to spend more than $10 million.
NewVantage also found that financial data holds a special place in the hearts and minds of survey respondents. According to the report, 75 percent said that financial data is of most interest to them. As for how they would like to organize and invest in that data, 70 percent of respondents said they would like to accelerate analytical processes and make analytics more sophisticated, while 69 percent said they would like to more effectively integrate existing sources of data.
The IBM report provides even more specific survey findings about how financial services firms are applying Big Data analytics. When asked to name their top objectives for Big Data, financial services organizations responded, in order of importance: customer-centric outcomes (55%), risk/financial management (23%), and new business models (15%).
IBM also asked respondents currently managing Big Data projects to describe the state of their infrastructure. Only slightly more than half (53 percent) reported integrated information, though 87 percent said they have the infrastructure required to manage the ever-growing volume of data. Respondents reported having the following infrastructure components in place:
- 63% report security and governance
- 64% report scripting and development tools
- 47% report complex event processing
- 47% report workload optimization and scheduling
- 25% report analytics accelerators
- 31% report stream computing
The report found more than four out of five banking and financial markets respondents with active Big Data efforts are analyzing transaction and log data. This is the data that has, in many cases, been collected for years, but never analyzed.
The mainframe’s presence within the financial services industry is incredibly widespread. A recent SHARE President’s Corner post notes that the world’s top 100 banks run on System z, and mainframes process roughly 30 billion business transactions per day, making the mainframe the preferred backend system for a large majority of financial services organizations.
Which leads to the question: why not take advantage of the processing power already residing on those mainframes to perform this type of large-scale analysis? Currently, many banks and financial institutions take the data out of the mainframe and bring it into a distributed computing environment for analysis. They are beginning to find that to be an expensive, time-consuming, and unnecessary step – which could lead to leaving the data where it is, and moving analytics workloads back to the mainframe to make sense of it.
However they go about it, expect banks and financial institutions to continue to lead the way in gaining new insights through analysis of the data they have been collecting. And then to use those insights to deliver new offerings that more closely match their customers’ behavior. The potential is already there. Now they just need to tap into it.
As an example of those retention mandates: the Sarbanes-Oxley Act of 2002 in the United States requires publicly traded companies, accountants, attorneys, and even firms that intend to go public to retain electronic business records for five years and financial data for seven years after an audit. Other countries have their own data-retention regulations.