“The banking sector, especially when focusing on aspects that involve core banking functions, has largely been behind in the adoption of modern computing and data technologies.”
In a January 2016 research study by the Filene Research Institute (The State of Data Technology in Credit Unions: The Sink‑or‑Swim Crossroad Ahead), Professor Jignesh M. Patel argues that credit unions are falling behind when it comes to using data to make strategic decisions.
The study was based on an online survey, followed by in-depth interviews with a subset of survey respondents. While the study as a whole is full of valuable insights, the interview results were particularly meaningful for credit unions that want to do a better job of harnessing their data.
One finding that stands out is titled “Data Analysis Is Largely Manual.” This section of the report makes several important observations.
Rotten to the (Old) Core
Legacy core processors are restricting innovation at many credit unions. The anticipated pain of core conversion often causes organizations to stick with obsolete technology for far too long. A big part of this is the perception that preserving historical data from the existing core for ongoing analysis can only be accomplished at great cost. The alternative is to relegate the old data to difficult-to-use archives that even the most skilled technicians will struggle to analyze and integrate with reporting from the new core.
Best of Breed Silos
Few credit unions rely entirely on their core processor to support all operational functions. Best-of-breed products are often acquired to fit the particular needs of the organization. Yet achieving a 360-degree view of members is a challenge in these circumstances. Credit unions devote an extraordinary amount of time to manually integrating data across these silos, frequently using error-prone tools like Excel.
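To make the silo problem concrete, here is a minimal sketch of the integration work the study describes, done programmatically instead of by hand in Excel. The two “silos,” the field names, and the member IDs are all illustrative, not taken from any real system.

```python
# Hypothetical silo 1: an export from the core processor, keyed by member ID.
core_accounts = [
    {"member_id": "1001", "product": "checking", "balance": 2500.00},
    {"member_id": "1002", "product": "savings", "balance": 800.00},
]

# Hypothetical silo 2: a best-of-breed loan system, keyed by the same ID.
loan_system = [
    {"member_id": "1001", "product": "auto_loan", "balance": 14200.00},
]

def member_view(member_id):
    """Collect every product a member holds across both silos --
    a tiny stand-in for the 360-degree view the article describes."""
    matches = [r for r in core_accounts + loan_system
               if r["member_id"] == member_id]
    return {"member_id": member_id,
            "products": [r["product"] for r in matches]}

view = member_view("1001")  # member 1001 holds a checking account and an auto loan
```

Even this toy version shows why the manual approach breaks down: every new silo adds another source that must be matched on a shared key, and any inconsistency in that key silently fragments the member view.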
Send Out the Data
One way credit unions try to solve internal data deficiencies is to ship their data to outside vendors for analysis and augmentation. This can be a smart move for organizations with limited internal resources. However, a weak data infrastructure makes data preparation costly since these vendors usually require uniform files composed of data that may reside in multiple silos in various formats. After all this work to integrate and format siloed data, the resulting information is point-in-time and therefore less relevant with each passing day until the next expensive batch of files is sent off to the vendor.
Consider the Lowly Transaction
The study found “surprisingly minimal use of transactional data for analytics”. Having data available at the transaction level is the gold standard because it allows the greatest possible analytical flexibility. Yet managing data at the transactional level is a very difficult task without the right tools. Excel and other less sophisticated products simply cannot handle the typically large volume of transactional data a credit union generates each day. As a result, analysts are forced to rely on summarized data to cut down on the size of their files, which sacrifices valuable precision and severely limits the analytical options available.
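A short sketch of why transaction-level data is the gold standard: once the detail is retained, the same feed can be rolled up along any attribute after the fact. The transaction rows and their fields below are invented for illustration; a pre-summarized monthly total could answer only the one question it was summarized for.

```python
from collections import defaultdict

# Hypothetical daily transaction feed: one row per transaction.
transactions = [
    {"date": "2016-03-01", "member": "1001", "category": "grocery", "amount": 54.20},
    {"date": "2016-03-01", "member": "1002", "category": "fuel",    "amount": 31.00},
    {"date": "2016-03-02", "member": "1001", "category": "fuel",    "amount": 28.75},
]

def summarize(rows, key):
    """Roll transactions up by any attribute -- possible only because
    the transaction-level detail was kept, not discarded up front."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row["amount"]
    return dict(totals)

by_category = summarize(transactions, "category")  # spend per category
by_member = summarize(transactions, "member")      # spend per member
by_date = summarize(transactions, "date")          # spend per day
```

The point is the flexibility: one dataset, three different summaries, and any future question (by merchant, by channel, by hour) needs no new extract.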
Upping the Game
Once a credit union understands the not-so-hidden costs of a weak data management infrastructure, it must take steps to build a foundation for leveraging the analytical opportunities locked up in its own systems.
- Capture core data in a more analytics-friendly format – Core data can be extracted and stored in a database that is tuned for reporting. This is the first part of a three-stage arrangement in which all data potentially relevant for reporting and analysis is captured at the transaction level. This first stage can also contain as much reliable archival core data as the credit union decides to include.
- Break down the silos with “industrial-strength” data integration – The second stage of the three-stage arrangement is the heart of the standard data model. This model is populated from the first-stage data and is “agnostic” to the original source of the data. In this way, subject matter areas like lending and membership are stored in an industry-standard format that can draw on data from multiple silos. This makes analytics such as a 360-degree member view possible, because all data about which products a member owns (or doesn’t own) is available in a single repository.
- Make the data “business-friendly” – While the standard data model rationalizes data across subject matter areas, it is often too complex for the average business user to build reports or perform analyses against. Relying on skilled technicians for all reporting needs can be an expensive and inefficient proposition. As a result, a third stage called a “semantic layer” presents frequently used data in common business terminology, with familiar summarization levels and time series.
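The three stages above can be compressed into a minimal sketch using SQLite. Every table, view, and column name here is illustrative, not a prescribed schema; the point is only how each stage builds on the one before it.

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Stage 1: capture raw, transaction-level data from the source system.
db.execute("CREATE TABLE stage1_core_txn (member_id TEXT, txn_date TEXT, amount REAL)")
db.execute("INSERT INTO stage1_core_txn VALUES ('1001', '2016-03-01', 54.20)")
db.execute("INSERT INTO stage1_core_txn VALUES ('1001', '2016-03-02', 28.75)")

# Stage 2: a source-agnostic standard data model, populated from stage 1.
# In a real arrangement this would integrate multiple silos.
db.execute("""CREATE VIEW model_member_activity AS
              SELECT member_id, txn_date, amount FROM stage1_core_txn""")

# Stage 3: a semantic layer presenting a familiar business summarization
# (monthly spend per member), ready for self-service reporting.
db.execute("""CREATE VIEW semantic_monthly_spend AS
              SELECT member_id,
                     substr(txn_date, 1, 7) AS month,
                     SUM(amount) AS total_spend
              FROM model_member_activity
              GROUP BY member_id, month""")

row = db.execute("SELECT member_id, month, total_spend "
                 "FROM semantic_monthly_spend").fetchone()
```

A business user querying `semantic_monthly_spend` never needs to know how the underlying transactions were captured or integrated, which is exactly the division of labor the three stages are meant to provide.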
There are multiple benefits to adopting the three-stage arrangement. Overall, a credit union’s analytical options increase.
- The cost of preparing data is greatly reduced for outsourcing solutions and for using innovative in-house applications. The standard data model with uniformly formatted data integrated from multiple sources makes possible the use of many new analytical tools.
- The availability of data at the transactional level that is updated daily provides a quantum leap in analytical opportunities. Analysts are free to explore cause and effect relationships down to the smallest detail. At the same time, the ability to summarize data in many different ways becomes possible.
- The semantic layer allows self-service options for business users to be an important component of the analytics ecosystem.
- Core conversion risk is greatly reduced. Once established, the first stage becomes an “insurance policy” against the high cost of carrying legacy-system data through a core conversion.
Overall, credit unions need to up their data management game in order to better serve their members and compete successfully in the increasingly innovative and fast-paced financial services world.