It’s a Timing Thing … Analyzing Credit Data

By Carolyn Nobles

A friend once defined proactive as “being reactive sooner rather than later.” In truth, action always comes down to a matter of timing.
Looking for answers in a bank’s vast data repositories follows the same principle. Today most credit risk personnel go after data either proactively or reactively, and what drives the approach is timing.
In other words, do you need an immediate answer, as in a crisis situation? (The bottom has dropped out and I need to know my concentration now!) Or are you acting in anticipation of things to come? (What happens if the bottom drops out?) As you pull together the necessary data to create reports that fulfill a particular request for information, knowing what is driving the request will help you identify the right data for the right situation. That, in turn, will ultimately deliver stronger, more accurate results with which to manage risk effectively.
With that said, the data for credit risk can be compiled using two completely different approaches, which we’ll call “available data” and “required data.” Available data assumes the data is already on hand to answer those reactive-type questions. Required data involves much more proactive, analytical thinking to first define the anticipated needs and then determine how to get the data required to produce predictions and trends.

Available Data
For the last several years, there has been a heightened focus in the credit risk management arena on available data, spurred on by Basel II. The reason? To take quantitative methods that other industries had been using for years and apply them to the banking industry. In turn, this created a sudden rush for analytic methods, processes and tools within the banking environment. Quants were assembled, technology took form and the air filled with smoke from the massive amounts of numbers being crunched. Then the realization hit: “We have tons of data available, but it’s not meeting our business needs.” Why? Because of the lack of data continuity and accuracy. That’s when financial institutions recognized the significance and seriousness of their data issues and began taking steps to correct them.
Taking on the challenge of cleaning up their existing data was no easy task, considering the technology challenges alone: the lack of integration between systems within the typical bank, the silos of disparate data this created, and the complexity of the data warehouses needed to consolidate the data. And then there were the inconsistencies in how one lending department gathered and maintained information versus another, each suiting its own needs. Indeed, data itself was, and still can be, an elusive thing. For example, most banks have run into some or all of these data-related challenges:
Data can be represented in two completely different forms. A customer number in one system may be stored differently from the same customer number in another system. Or the identical type of collateral may carry different codes, depending upon the system in which it is stored.
Data can be wrong. NAICS codes are a good example here. A code may be valid in that the NAICS classification actually exists, but if the wrong code is assigned, the problem becomes one of data integrity. If concentrations are grouped by NAICS code, incorrect codes will skew your numbers. And bad data can be worse than no data; at least with no data you know where you stand. (A brief sketch following these examples illustrates this kind of screening.)
Data can be misrepresented. Certainly data can change over time. The current purpose of a field’s content may not be what it was originally designed to capture, which can wreak havoc when looking at data over a period of time. A field may have initially captured a code representing a specific product type; when the product was discontinued, the same field became the source for a new product type. Now the same field contains two kinds of data with no relevance to each other, and a seemingly harmless change has limited the ability to extract meaningful data unless you know when things changed.
Different systems may store completely different information in the same field. Commercial lending might require specific data to be captured in a particular field, while retail needs other information stored there. Even worse, a field that may be left blank in one system may be unacceptable to leave blank in the other. These issues become extremely problematic when data from multiple systems is combined and then used to assess credit risk across the enterprise, because the combined view delivers incorrect results.
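To make these pitfalls concrete, the following minimal sketch in Python uses entirely hypothetical system names, field names, codes and balances (none come from the article or any particular bank system). It shows the kind of normalization and validation step a bank might run before reporting concentrations: customer numbers and collateral codes from two source systems are mapped to a common vocabulary, invalid NAICS codes are flagged, and only validated records feed the totals.

# Hypothetical sketch: reconcile codes across systems, flag bad NAICS codes,
# and only then roll up exposure by industry.

VALID_NAICS = {"236116", "531110", "722511"}       # stand-in for a full NAICS reference list
COLLATERAL_CROSSWALK = {                           # same collateral type, different codes per system
    ("commercial", "RE1"): "CRE",
    ("retail", "R-RE"): "CRE",
}

loans = [                                          # invented records from two source systems
    {"system": "commercial", "customer": "0001234", "naics": "236116",
     "collateral": "RE1", "balance": 2_500_000},
    {"system": "retail", "customer": "1234", "naics": "999999",
     "collateral": "R-RE", "balance": 400_000},
]

def normalize(loan):
    """Map system-specific values to a common vocabulary and flag questionable data."""
    clean = dict(loan)
    clean["customer"] = loan["customer"].lstrip("0")              # reconcile customer-number formats
    clean["collateral"] = COLLATERAL_CROSSWALK.get((loan["system"], loan["collateral"]), "UNKNOWN")
    clean["naics_valid"] = loan["naics"] in VALID_NAICS           # a bad code would skew concentrations
    return clean

def concentration_by_naics(records):
    """Aggregate balances by NAICS, using only records that passed validation."""
    totals = {}
    for r in records:
        if r["naics_valid"]:
            totals[r["naics"]] = totals.get(r["naics"], 0) + r["balance"]
    return totals

cleaned = [normalize(l) for l in loans]
exceptions = [c for c in cleaned if not c["naics_valid"] or c["collateral"] == "UNKNOWN"]
print(concentration_by_naics(cleaned))                            # report only trustworthy data
print(f"{len(exceptions)} record(s) routed back for correction")

The point is not the specific rules but the sequence: reconcile and validate first, then aggregate. Otherwise the same customer can be counted twice and a mistyped industry code can distort the concentration picture.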

Despite these challenges, many banks have made great strides in working collaboratively with IT and across their various lending departments to clean up their data, putting policies and procedures in place to provide a more consistent way to capture, store and maintain information from department to department. These banks are now much better positioned for improved data availability and data integrity in order to find answers to their most pressing credit risk questions. 

Required Data
Clearly, there has been much attention focused on improving data quality and integrity, and rightfully so. Banks are making great progress in this area. However, it is by no means the endpoint for effectively managing credit risk. As banks become more effective at using their newfound clean, available data to get immediate answers, they must continue exploring how to expand their data resources in new and creative ways to deliver new results: results that will provide the foundation for becoming more predictive in the ongoing quest to reduce credit risk.
Instead of solely relying on existing data to address the hottest issue of the day, efforts must now start to focus on defining what data is “required” for these new forward-looking views/models. This means that banks have to first define what types of results they are looking for up front and then locate the data to deliver those results. In other words, they must identify their objective and then draw from multiple data sources to reach that objective.
For example, suppose you could take information about the performance of your commercial real estate portfolio, combine it with financial spreading information plus data from one of the subscription services that deliver economic performance indicators for a particular industry, and then overlay that with how other commercial real estate loans within your portfolio have performed over the last four quarters. You would then have information to help determine what new business to book in that sector. It would also provide insight into the loans you’ve already booked and even help you determine whether that is a line of business you want to continue to grow, freeze or reduce.
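As a rough illustration only, the Python sketch below (using the pandas library, with hypothetical figures, column names and a toy decision rule, since the article does not prescribe any particular schema) joins internal portfolio performance with an external sector index by quarter and derives a simple grow/hold versus reduce/freeze signal.

import pandas as pd

# Internal CRE performance for the last four quarters (invented values);
# avg_debt_service_coverage would come from financial spreading data.
portfolio = pd.DataFrame({
    "quarter": ["2008Q1", "2008Q2", "2008Q3", "2008Q4"],
    "delinquency_rate": [0.011, 0.014, 0.019, 0.026],
    "avg_debt_service_coverage": [1.45, 1.38, 1.27, 1.18],
})

# Economic indicator for the commercial real estate sector, as might come from a
# subscription service (again, invented numbers purely for illustration).
economy = pd.DataFrame({
    "quarter": ["2008Q1", "2008Q2", "2008Q3", "2008Q4"],
    "cre_index_change": [0.002, -0.004, -0.012, -0.021],
})

view = portfolio.merge(economy, on="quarter")                 # overlay internal and external data
view["delinquency_trend"] = view["delinquency_rate"].diff()   # quarter-over-quarter change

def sector_signal(row):
    """Toy rule: flag the sector when internal and external indicators deteriorate together."""
    worsening_inside = pd.notna(row["delinquency_trend"]) and row["delinquency_trend"] > 0
    worsening_outside = row["cre_index_change"] < 0
    return "reduce/freeze" if worsening_inside and worsening_outside else "grow/hold"

view["signal"] = view.apply(sector_signal, axis=1)
print(view[["quarter", "delinquency_rate", "cre_index_change", "signal"]])

In practice the indicators and thresholds would come from the bank’s own risk appetite and its chosen data subscriptions; the value lies in assembling the overlay itself.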
Some of the banks making headway in this area include a number of DiCom clients. One bank, a $17 billion institution, recently created what it calls its “Strategic Information Group.” Residing within the credit risk management department, the team initially began as a way to support the activities of the bank’s commercial lending team. Today, however, its role has expanded to providing information upward to the bank’s executive management team and board, information that has become vital in helping them manage credit risk across the enterprise more effectively.
To do this, they blended the foundational information they had, which was built using data from the commercial lending department, and augmented it with information from other lending departments. They also incorporated tools such as RiskCalc™, Baker Hill and Moody’s alongside DiCom’s portfolio management and analysis tools to rapidly manipulate and analyze the data. Additionally, the bank introduced data from one of its subscription services to provide up-to-the-minute economic indices, including commercial real estate trends. They are now building the foundation necessary to create their own set of key risk indicators, which will, in turn, enable them to become much more predictive in managing credit risk across the entire bank.
As this client illustrates, effectively combining both types of data, available and required, creates the more robust, information-rich environment necessary to meet the demands of today’s credit risk management initiatives.
The credit risk management industry is currently undergoing a dramatic evolution. Banks have painfully realized that without a foundation built on strong data integrity across their entire lending organization, they face serious impediments to becoming the more predictive, more quantitatively driven organizations that Basel II demands.
It will therefore be the banks that have dedicated the time, effort and resources to ensuring their data is sound that are poised to strengthen and improve their predictive capabilities. These are the banks that will be able to identify the key risk indicators for their organization and then bring in fresh data from various sources, positioning themselves to proactively and expertly predict, and thereby avert, a credit crisis within their organization.

Carolyn Nobles is the chief executive officer of DiCom Software, a leading provider of credit risk management technology solutions for financial institutions nationwide. The company’s product suite, DiCom Credit Quality Solution (CQS), helps banks efficiently analyze, review and manage their loan portfolios while minimizing risk. DiCom’s solutions are the preferred choice of today’s best credit risk personnel at banks across the U.S. For more information, visit www.dicomsoftware.com.


Posted on Friday, October 03, 2008