Quality Data Means Quality Decisions

Whenever any of us walks into our family doctor’s office for treatment or a routine checkup, one thing we expect is that the historical medical data the doctor holds on us is complete, accurate and appropriate. If you previously suffered multiple fractures in a road accident, your family doctor should already know about it. If you are allergic to aspirin, that too should form part of your medical history. Such background data is crucial in ensuring that whatever drug prescriptions or lifestyle recommendations the doctor eventually delivers lead to better overall well-being.

This principle is no different in the business environment. Every day, business executives all over the world are confronted with crucial decisions. For such decisions to positively impact the business, they must be based on accurate, complete and relevant data. This is precisely why Solvency II and several other financial services regulations require insurers to provide evidence that the regulatory reports they submit are based on data of impeccable quality.

The Role of Data Extraction

If the quality of the data used is to be assured, one of the most important quality control points is data extraction. In order to generate reports, relevant data must be retrieved from internal and external systems and fed into the risk data warehouse. In most cases, data extraction is in fact two processes in one: extraction and conversion. The conversion is necessary to ensure the data is in a format ready for upload into the data warehouse.
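As a rough illustration, the sketch below shows what a combined extract-and-convert step might look like, assuming a SQL source system and a flat CSV staging format expected by the warehouse loader. The table, column and file names are purely illustrative and not a prescribed design.

```python
# Minimal sketch of an extract-and-convert step, assuming a SQL source
# system and a CSV staging format expected by the risk data warehouse.
# All table, column and connection names here are illustrative only.
import csv
import sqlite3
from datetime import date

def extract_and_convert(source_db_path: str, staging_csv: str) -> int:
    """Pull policy records from the source system and write them
    in the flat format the warehouse loader expects."""
    conn = sqlite3.connect(source_db_path)
    rows = conn.execute(
        "SELECT policy_id, product_code, premium, currency, start_date "
        "FROM policies"
    ).fetchall()
    conn.close()

    with open(staging_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(
            ["policy_id", "product_code", "premium", "currency",
             "start_date", "extract_date"]
        )
        for policy_id, product_code, premium, currency, start_date in rows:
            # Conversion step: normalise product codes, fix the premium
            # format and stamp each record with the extract date.
            writer.writerow(
                [policy_id, product_code.strip().upper(), f"{premium:.2f}",
                 currency, start_date, date.today().isoformat()]
            )
    return len(rows)
```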

If insurers are to have any certainty about the quality of data that ultimately goes into the data warehouse and is eventually used to generate risk reports, controls around the data extraction and transformation process must ensure that information is not altered or lost. This would be relatively straightforward if the extracted data were a complete copy of the source data. In practice, however, save for a few exceptions, only a small proportion of extracted data will be a complete copy of the original.
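One common way to implement such a control is to reconcile simple totals, such as record counts and a control total over a key monetary field, between the source extract and what was actually loaded. The sketch below illustrates the idea; the field names and the choice of premium as the control total are assumptions, not a prescribed method.

```python
# Illustrative reconciliation control: compare record counts and a premium
# control total between the source extract and the loaded data, so that
# records altered or dropped in transit are flagged. Field names are assumed.
from decimal import Decimal
from typing import Iterable, Mapping

def reconcile(source_rows: Iterable[Mapping], loaded_rows: Iterable[Mapping]) -> list[str]:
    """Return a list of reconciliation failures (an empty list means pass)."""
    src = list(source_rows)
    tgt = list(loaded_rows)
    issues = []

    # Completeness check: every extracted record should have been loaded.
    if len(src) != len(tgt):
        issues.append(f"record count mismatch: source={len(src)} loaded={len(tgt)}")

    # Accuracy check: the premium total should survive the transfer unchanged.
    src_total = sum(Decimal(str(r["premium"])) for r in src)
    tgt_total = sum(Decimal(str(r["premium"])) for r in tgt)
    if src_total != tgt_total:
        issues.append(f"premium total mismatch: source={src_total} loaded={tgt_total}")

    return issues
```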

Most data extracted for risk management and regulatory reporting will be a subset of the original record. For Solvency II, the original data may, for instance, be filtered for specific products or insurance policies. What ought to be included or excluded must, of course, be determined very early during the installation and/or configuration of the risk data warehouse in preparation for Solvency II.

Steps for Quality Data Extraction

Given the ever-changing business environment, especially for the larger insurers who will be most affected by this new regulation, the rules for data extraction must be flexible enough to accommodate future change. Hierarchies, workflows and business systems evolve over time, and too rigid an extraction process risks falling behind an evolving organization and inadvertently delivering incomplete data.

Developing the data extraction process for Solvency II reporting should therefore involve the following steps:

a) Identify the total population of the source data.

b) Ensure that information is properly segmented so that the relevant source data can be retrieved easily, e.g. by policy, product type or location.

c) Develop an inclusion list - which products and policy types must be included in the Solvency II reports?

d) Develop an exclusion list - Ordinarily, creating an exclusion list would simply mean discarding everything that does not appear on the inclusion list. However, the data team must make double sure by sifting through all excluded record types. Solvency II standards on data completeness require that no relevant and available data be excluded without proper justification. Reviewing the exclusion list is also important for the broader risk management objectives of the insurance firm. Certain fields may not be directly relevant for Solvency II reporting, yet they may be vital in achieving internal actuarial and risk management goals.

e) Based on the inclusion and exclusion lists, develop a control alert for use during data extraction that signals the occurrence of data falling in neither list (a minimal sketch of such a check follows this list). This helps automatically capture changes to system data (e.g. the roll-out of a new product) that may have an impact on Solvency II reports.

f) Regardless of the above controls, develop a plan for regular review to ensure that data extraction remains consistent with current business processes.
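To make steps (c) through (e) concrete, the following sketch shows one way inclusion and exclusion lists could drive an automated control alert. The product codes, data structures and logging-based alert are placeholders; in practice both lists would live in configuration so they can be revised as the business changes, which also supports the regular review in step (f).

```python
# Sketch of the inclusion/exclusion check from steps (c)-(e). The product
# codes and the alerting mechanism (a log warning) are placeholders; a real
# implementation would read both lists from configuration.
import logging

INCLUDE = {"MOTOR", "PROPERTY", "LIFE_TERM"}    # in scope for Solvency II reports
EXCLUDE = {"LEGACY_CLOSED", "INTERNAL_TEST"}    # excluded with documented justification

def classify(records):
    """Split records into included/excluded and alert on anything unknown."""
    included, excluded, unknown = [], [], []
    for rec in records:
        code = rec["product_code"]
        if code in INCLUDE:
            included.append(rec)
        elif code in EXCLUDE:
            excluded.append(rec)
        else:
            # Control alert: a product type in neither list usually means a
            # new product was rolled out and the extraction rules need review.
            unknown.append(rec)
            logging.warning("Unclassified product type '%s' in policy %s",
                            code, rec.get("policy_id"))
    return included, excluded, unknown
```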

Author's Bio: 

Graz Sweden AB provides financial services players with the most cost-effective way to access, manage, and analyze their data. Using the flexible data management platform HINC, Graz’s data warehouse infrastructure helps manage tens of thousands of investment portfolios for several institutions including 9 insurance companies, 120 banks and the largest fund manager in Scandinavia. For more information, visit www.graz.se