Why Truth in Data Sets Your Bank Free
“Information is the resolution of uncertainty.”
– Claude Shannon
There’s a common misperception that bankers are risk averse. In fact, the opposite is generally true. Banking is all about risk. Effectively managing the risk of various bank functions, from lending to liquidity management, is how banks make money.
What bankers hate is uncertainty.
Risk is measurable, predictable, and can be accounted for in any number of ways, such as the rate a bank offers for a loan, or the duration of a wholesale funding purchase. But uncertainty is unknowable, incalculable, and stems from not having enough information, or not having faith in the accuracy of that information.
Uncertainty comes in many varieties in banking. Some of this uncertainty is unavoidable, like economic uncertainty. After all, it’s virtually impossible for anyone to guess how the economy is going to perform from one year to the next, although plenty of people offer up opinions (some better than others).
However, there are other types of uncertainty that can limit a bank’s performance, but that can also be remedied with the right tools and techniques. And that’s where the value of establishing a Single Source of Truth for bank data comes in.
The Cost of Information Gaps
Consider the following basic example of two potential borrowers:
At first blush these borrowers look very similar. They both have strong FICO scores and solid monthly incomes. Neither appears to be a significant risk, given the available data, and both would qualify for a prime fixed 30-year mortgage in the current lending environment.
However, one of the borrowers has corroborating data that gives greater insight into their likelihood to repay. The other borrower has minimal data, which creates uncertainty about whether they would be a good risk or not.
Now, very few lending officers worth their salt would be willing to finalize a loan before getting the full and complete data picture from both borrowers. And yet, at a higher level, banks are often forced to make important decisions with significant gaps, or potential inaccuracies, in the information they use to make those decisions.
The fact is, a lot of the information that a bank needs is already available somewhere within the bank. While some types of business struggle to collect the data that they need to inform decisions, banks are often awash in data, and a variety of analyses can be run to predict anything from likelihood to repay, to the probability of fraud. The biggest challenge is getting to that data, connecting it with other data, and ensuring its accuracy.
As the business of banking gets faster and faster, and with more customers relying on online interfaces to open accounts, apply for loans, or access other bank services, it’s even more vital that banks have a strategy for bringing data together to make decisions that can keep up with the pace of business.
How Data Impacts Bank Decision-Making
Data is talked about so much that the word has been worked into a formless, shapeless concept without edges or meaning. So, let’s get away from the generic concept of big data, data analytics, etc. and talk about something that is clear: Decisions.
Banking is all about decisions. Some decisions are enormous, made at the highest level of bank management, such as whether to acquire another bank or close branches.
Some decisions are mid-sized decisions, such as whether to approve a specific loan with characteristics that are a little outside the norm.
Some decisions are very small, but may add up over time, such as whether a customer service representative should waive fees for long-time customers who call to complain.
The aggregate outcome of these decisions is the difference between being a top-performing bank, and one that struggles to generate return.
Making the right decisions generally comes down to two things: the ability to collect and analyze the necessary information, and the speed at which that information can be accessed.
Speed vs. Accuracy
There’s a debate about whether it’s better to make a wrong decision quickly or a right decision slowly. One argument says that a fast decision allows you to get out ahead of an opportunity, and if you’re in the wrong place, you have time to adjust. The other argument says that if you wait to get enough information you can assess the options effectively and take a single action that is more likely to be right.
I won’t weigh in on which of these positions is more correct. There appear to be virtues in both, and I can remember times in my own experience when one or the other of these approaches yielded positive results.
However, few would argue with the point that the best possible option would be if one could act both quickly and accurately (hey, that’s cheating!).
And that’s the true promise of data in banking. Data technology, when effectively applied, can dramatically shorten the time it takes to turn the data a bank collects into information that is comprehensive, complete, accurate, and applied in a way that establishes a true picture of the issue and the likely outcomes.
In a data-driven decision-making process that leverages a Single Source of Truth for its data, many of the aspects of traditional decision-making become faster, more informed, and more connected to the people and processes that put decisions into action.
TRADITIONAL VS. DATA-DRIVEN DECISIONING PROCESS
Siloed by Channel/Department
With information confined to specific departments or functions, such as lending or deposit management, decisions are often made in isolation without significant awareness about the actions and decisions that are being made in other areas of the bank.
Connected across departments
With all departments working from the same source of data, and that source ingesting data from each department, departments are able to adjust their actions based upon the actions of others within the bank. Loan officers, for example, might be able to adjust offers based on awareness of the changing cost of funds.
Periodic, backwards-looking reporting
Because of the time it takes to collect, unify, and analyze data, most reporting is done periodically, and often presents a backwards-looking view of bank performance. Even when predictive models are applied to identify future possible outcomes, the source data is frequently weeks, if not months, out-of-date.
Real-time analysis
With a connected, integrated source of truth, analysis can be conducted on information that is perpetually up-to-date. Predictive analytics can adjust automatically to shifting factors within the bank, such as recent changes in interest rates or credit quality, to provide a statistically accurate picture of future outcomes.
Static, pre-compiled reports
In a traditional decision-making process, leaders are presented with reports that have been compiled by an analyst and formatted into a PDF or printed document, with views and visualizations that are estimated to provide the most relevant perspective.
Interactive dashboards
By making data accessible through an analytics dashboard, leaders get a current view of the entire bank system. With well-built data visualization, leaders can actively interact with the analytics themselves, adjusting assumptions and changing views to uncover new insights.
Possibility of data error
In traditional data analysis, data often goes through multiple stages of manual processing before it is finally presented to decision-makers. During this process, it’s not uncommon for errors to be made. Even the most precise analyst can slip up, particularly if the work is being delivered on a deadline. Even worse, the mistake can be buried beneath layers of further analysis, so it may never be discovered, leading to flawed assumptions.
Limited data error possibilities
When data processing is automated using advanced data technologies that ensure accuracy, comprehensiveness, and consistency, the opportunities for manual error are much more limited. Additionally, because the processing is conducted according to an established set of rules, it’s always possible to return to the original source to check the accuracy and identify the exact point where an error may have been made.
Non-integrated with workflows
Even after a decision has been made, there’s often a lengthy process to implement that decision across the bank’s various departments and stakeholders. By the time a decision has been fully operationalized, the environment may have changed and opportunities may have been missed.
Integrated with workflows
When decisions can be integrated into the workflows using the bank’s various business systems and software, it’s possible to identify an opportunity and adjust processes, strategies, and even employee communications almost immediately to get out ahead of the market, while limiting the inevitable breakdowns in communication that often happen during implementation.
How Unified Data Creates Bank-Wide Opportunities
Once a bank can turn its data into a single, unified resource, there’s a huge amount of value to be gained at every level, from senior management decision-making to department-level opportunities.
Analysis can be done across the entire operational structure to identify areas of inefficiency, enabling operational leads to better apply staffing resources, technology upgrades or even branch investment.
Having the complete picture of customers and sales activities can support improved prospecting and lead management, enabling sales leaders to better identify the right product offerings for prospects while limiting wasted time and resources.
Having a unified picture of a customer’s experience with the bank, across all channels, enables customer service departments to proactively identify issues and resolve problems before they become crises that might lead to customer run-off.
By modeling and segmenting existing and prospective audiences, marketing can better target its messages and offers across optimal channels, to improve performance and ROI.
By improving the accuracy, timeliness, and automation in data collection and analysis, the bank can improve accuracy of its reporting, while significantly reducing the effort needed to maintain its compliance reporting.
By unifying data from both sides of the balance sheet in real-time, the bank can make more accurate predictions about net income and take proactive action on its funding decisions to optimize return.
Obstacles to Establishing a Single Source of Truth
Engineering a single source of truth for your bank’s data, while conceptually straightforward, can be difficult in practice without understanding the obstacles that are endemic to most organizations and that can torpedo an effort from the outset.
Legacy systems that resist integration
Many bank software systems are fairly rudimentary, even archaic, in their technological infrastructure. There’s usually a good reason for this, starting with security. However, this also means that these legacy systems aren’t built for easy integration with other systems, and a full integration process may be difficult, or even impossible.
The key to working with legacy systems often lies in smart integration architecture that limits the number of total integrations necessary, which then feed into flexible database structures.
Difficulty in “fusing” multiple records into single entities
One of the key aspects of creating a single source of truth in data is associating all the data about the same entity (either an individual or a business) with that one entity. Fusing records across multiple data sources generally requires finding natural unique identifiers (e.g. social security numbers, tax ID numbers, etc.). However, when multiple data sources are combined, identifiers that persist across the entire system are scarce.
Some data unification systems address this with advanced statistical techniques, including machine-learning algorithms, that combine multiple factors in the data to calculate complex unique identifiers and ensure accurate data relationships across the full data system.
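As an illustrative sketch (not a production entity-resolution engine), the snippet below scores the similarity of two records with a weighted combination of per-field string similarity, then applies a threshold to decide whether they describe the same person. The fields, weights, and threshold are assumptions that would be tuned against labeled match/non-match pairs.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a: dict, rec_b: dict, weights: dict) -> float:
    """Weighted average of per-field similarities across the given fields."""
    total = sum(weights.values())
    return sum(
        w * similarity(str(rec_a.get(f, "")), str(rec_b.get(f, "")))
        for f, w in weights.items()
    ) / total

# Illustrative records from two systems that share no natural key.
loan_system = {"name": "Jon A. Smith", "dob": "1980-04-12", "zip": "30301"}
deposit_system = {"name": "Jonathan Smith", "dob": "1980-04-12", "zip": "30301"}
other_customer = {"name": "Jane Doe", "dob": "1975-09-01", "zip": "90210"}

WEIGHTS = {"name": 0.5, "dob": 0.3, "zip": 0.2}  # illustrative weights
THRESHOLD = 0.85  # would be tuned against labeled data

print(match_score(loan_system, deposit_system, WEIGHTS) >= THRESHOLD)  # likely the same person
print(match_score(loan_system, other_customer, WEIGHTS) >= THRESHOLD)  # clearly different people
```

In practice the matching factors would be richer (addresses, phone numbers, account histories) and the model would be statistical rather than a fixed weighted average, but the fusing decision takes the same shape: combine multiple weak signals into one confidence score.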
Rogue databases and workarounds
Go through any departmental file system, and you’re likely to find at least a few Excel files, or even an Access database set up by an ambitious member of your staff, which has now become a core data source for some essential function within that department. These rogue databases are extremely common as departments try to figure out work-arounds when the core technology isn’t flexible enough to support a new initiative. However, these homemade solutions can stand in the way of a fully-integrated data environment for the bank.
Rather than attempt to eliminate these rogue databases, take a close look at the function each one supports and consider how that function could be achieved through existing or new software. There are a number of low-cost options out there that can be fully integrated into a flexible data architecture.
Inconsistency in data collection
The best data system doesn’t work if the data that’s put into it isn’t accurate, complete, or well maintained. Garbage in, garbage out, is how a database architect once described it to me (although they used a different word than garbage). Frequently, people collecting data don’t realize the importance of that data to other functions in the bank, and they may only focus on the data points that are most relevant to their particular task.
Consider how data governance can be built into systems to create positive feedback loops where people become aware of the impact of their data collection, see the direct results, and are rewarded for effective data collection. On a limited basis, it’s also possible to institute a punitive system for consistently poor data collection, although it’s a good idea to apply this only when absolutely necessary since it can breed resentment about the process of data collection.
A Process for Getting to Truth
Creating a system that can help your bank fully utilize the data it is collecting can be tricky, but it’s not impossible, and may even be easier than you think.
It all comes down to an essential process oriented towards bringing data together and putting it to use, centered around these five steps.
Step 1. Connect Data Sources
Each system that generates and consumes data should be connected to the others, ideally through a central hub, which reduces the total number of connections and can serve as a translator, making each source intelligible to the others.
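The arithmetic behind the hub’s advantage is easy to check: connecting n systems point-to-point requires n(n−1)/2 integrations, while routing everything through one hub requires only n. A quick sketch (the system counts are illustrative):

```python
def point_to_point(n: int) -> int:
    """Integrations needed when every system connects directly to every other."""
    return n * (n - 1) // 2

def hub_and_spoke(n: int) -> int:
    """Integrations needed when every system connects only to a central hub."""
    return n

# e.g. a bank with 10 systems (core, LOS, CRM, card processor, ...):
# 45 point-to-point links vs. 10 hub links.
for n in (5, 10, 20):
    print(n, point_to_point(n), hub_and_spoke(n))
```

The gap widens quadratically as systems are added, which is why the hub also pays off as a maintenance strategy: each new system means one new integration to build and test, not one per existing system.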
Step 2. Unify Data
Once data sources are connected, and the data is flowing into a central database, data needs to be unified in name, format, and structure.
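As a minimal sketch of what “unified in name, format, and structure” can mean in practice, the snippet below maps two hypothetical source schemas onto one canonical schema and normalizes dates to ISO 8601. All system names, field names, and date formats here are illustrative assumptions, not references to any particular platform.

```python
from datetime import datetime

# Illustrative per-source mappings from raw field names to one canonical schema.
FIELD_MAP = {
    "core_banking": {"CUST_NM": "name", "OPEN_DT": "opened", "BAL": "balance"},
    "loan_origination": {"borrowerName": "name", "appDate": "opened", "amount": "balance"},
}
DATE_FORMATS = {"core_banking": "%m/%d/%Y", "loan_origination": "%Y-%m-%d"}

def unify(record: dict, source: str) -> dict:
    """Rename a raw record's fields to the canonical schema and normalize formats."""
    mapping = FIELD_MAP[source]
    out = {canon: record[raw] for raw, canon in mapping.items() if raw in record}
    if "opened" in out:  # normalize all dates to ISO 8601
        out["opened"] = datetime.strptime(out["opened"], DATE_FORMATS[source]).date().isoformat()
    if "balance" in out:  # normalize currency strings to numbers
        out["balance"] = float(out["balance"])
    return out

print(unify({"CUST_NM": "Ana Lopez", "OPEN_DT": "04/12/2019", "BAL": "1500.00"}, "core_banking"))
print(unify({"borrowerName": "Ana Lopez", "appDate": "2019-04-12", "amount": "250000"}, "loan_origination"))
```

Once every source passes through a mapping like this, downstream analysis can treat “name,” “opened,” and “balance” as meaning the same thing everywhere, regardless of which system the record came from.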
Step 3. Fuse Data
Using natural identifiers when available, or calculated joins as necessary, data is brought together so that individual records are fused across all sources. This creates a “Golden Data Set” that can be easily understood and utilized in models and analytics.
Step 4. Establish Trust
Through tools that show the provenance, pedigree, and corroborating evidence for each data point, the system gives users a basis for verifying accuracy and for quickly identifying and rectifying errors.
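One lightweight way to think about provenance is to carry each data point’s originating source and transformation history alongside its value, so any figure can be traced back to where it came from. The sketch below is an illustrative assumption about how that might look, not a description of any specific lineage tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataPoint:
    value: object
    source: str                                  # originating system
    loaded_at: str                               # when it entered the warehouse
    lineage: list = field(default_factory=list)  # transformation steps applied

def transform(dp: DataPoint, new_value, step: str) -> DataPoint:
    """Produce a new DataPoint while recording the step that created it."""
    return DataPoint(new_value, dp.source, dp.loaded_at, dp.lineage + [step])

raw = DataPoint("  $1,500.00 ", "core_banking",
                datetime.now(timezone.utc).isoformat())
clean = transform(raw, 1500.00, "strip_currency_formatting")

print(clean.value)    # 1500.0
print(clean.source)   # core_banking
print(clean.lineage)  # ['strip_currency_formatting']
```

Because each transformation returns a new object rather than mutating the original, the raw value survives untouched, which is exactly what lets a user walk an error back to the original source and pinpoint the step where it was introduced.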
Step 5. Utilize Data
The system can then be applied to analytics, governance systems, and even automations to support workflows, decision-making, and even direct bank action.
For a more in-depth exploration of how to create a Single Source of Truth, read our Exploration, Five Steps to Building a Data Management System that Makes Truth Sustainable.
Preparing for Tomorrow
Working with data is a bit like a magic trick. There’s a lot of work that goes on ahead of time, and behind the scenes, in order to make something seem extraordinary and effortless. However, when it all comes together the results are truly transformative. Data, properly applied, can improve efficiency, increase customer retention, and lay the foundation for short- and long-term growth.
There’s no telling how the world will be changing in the next decade. After all, the iPhone, which changed the way the world accesses the Internet—and increasingly how the world banks—was only launched around 10 years ago. It’s impossible to future-proof your bank entirely. However, you can feel confident that data will be at the center of any change, and if you focus on systems that are flexible, transparent, and built on open standards, you should be able to respond effectively to the world as it evolves.
Cover photo courtesy Ian Dooley.