Information architecture for BCBS 239 and Solvency II

The new regulatory requirements of BCBS 239 and Solvency II sound more like a data governance framework than a good old-fashioned regulatory hassle. But what should the information architecture for their programmatic and sustainable implementation look like?

Authors: Peter Hora (Semanta) & Petr Bednařík (Deloitte CE)

 

Let's get more specific

 

In our previous blog we described how the new requirements of BCBS 239 and Solvency II sound more like a data governance framework than a good old-fashioned regulatory hassle. Today we'd like to get more specific and describe how the information architecture for their programmatic and sustainable implementation might look.

The BCBS 239 regulations are aimed at “systemically important banks”. Consequently, the primary driver for implementing them will come from the various groups' headquarters - the part of the group that is "too big to fail". The sheer immensity of the task of documenting the whole data delivery chain of financial and risk data, from a local source system to a local data warehouse and then on to a group warehouse and group regulatory reports, is mind-blowing. It seems unfeasible when looked at from the current sorry position of data governance initiatives. In the case of insurance companies and Solvency II the situation is typically even more complex, due to the large number of key data processing and calculation tasks that happen outside of standard IT systems (e.g. hundreds of complex and interconnected Excel workbooks, SAS scripts, actuarial models, etc.) and the need for expert judgement and manual inputs.

HQ decision makers have a bias towards the divide-and-rule approach: let’s set the rules each country has to follow. The only certain common technology is MS Office, so let’s use it to document the data flows, data models and transformations. It sounds very pragmatic, and I’m sure the goal of regulatory compliance can be achieved this way, but I would challenge this approach for the following reasons:

  • What's the added value of this approach to everyday business in local banks?

    • People in the bank's network and branches are being burdened with additional paperwork that doesn't help them in their everyday tasks. The quality of the documentation filed will reflect this.
  • How are you going to make sure the documentation remains up-to-date and sustainable?

    • A one-time initiative can produce it, but on the very day it’s approved and published it will already be outdated, because at least one data element in the data delivery chain will have changed, and no one will bother (or even know it’s necessary) to update an Office document over at HQ because of that change.
  • Even now the data/risk/reporting/actuarial teams aren't coping with the amount of data processing and calculations they have to do within one capital calculation run – adding more work (manual documentation for the purposes of data quality) is not feasible. The implementation of data quality should not add work but make practical life easier.

We recommend that the information architecture should instead be built based on the following principles:

  • Data models are loaded from the modelling tools used to build the information systems, or reverse engineered from the systems themselves.

    • This way the descriptive model of the system is sure to stay in sync with reality (one way to do this is sketched after this list).
  • Business definitions, rules and transformation logic are maintained in an online system linked to the data models.

    • If every definition is linked to a specific data element, it is no longer a definition sitting in isolation.
  • Information about people is loaded from the identity management system (Active Directory, LDAP…).

    • This results in data stewardship based on verifiable, up-to-date information about stewards and users. When a steward moves on, the data he/she maintained is not lost or forgotten; stewardship is easily passed on to his/her replacement.
  • There is a clear link between the data processing / calculation process and the data used and produced in each step, and there is a system to support running the process (e.g. a workflow system which controls the flow of data preparation and the calculations).
  • Demand, change and release management is linked to the information assets described above.

    • Linking any request or ticket to the information asset concerned helps transparency and visibility - plus the updating of a definition becomes an integral part of the development process (a sketch of this linkage also follows the list).
  • Last but not least, the data quality rules and their implementation in a data quality engine need to be an integral part of the information architecture, with results linked back to the information assets as easy-to-understand data-quality labels.
  • Let each country within the network implement the information architecture in a way that not only meets the actual regulatory requirements but can also be used for its own specific local business needs (e.g. focusing on the quality of customer data).

    • The level of detail of the documentation and the maturity of metadata management will differ greatly among countries and cultures. Letting each local data-governance team find its own way to introduce governance will provide the group with the required information from the bottom up.
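
To make the principles about automated models, linked definitions and identity-based stewardship more concrete, here is a minimal sketch of what that could look like in practice: the physical data model is harvested straight from a database instead of being typed into a document, stewards are looked up in Active Directory/LDAP, and business definitions are attached to concrete data elements. The connection strings, schema names, accounts and glossary structure below are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: harvest the data model from the database, look up stewards
# in the identity store, and link definitions to concrete data elements.
# All names, hosts and accounts are illustrative placeholders.

from sqlalchemy import create_engine, inspect
from ldap3 import ALL, Connection, Server

# 1. Reverse-engineer the physical model of the local warehouse,
#    so the documented model cannot drift away from reality.
engine = create_engine("postgresql://metadata_reader@local-dwh/risk_mart")
inspector = inspect(engine)

data_model = {
    table: [
        {"column": col["name"], "type": str(col["type"])}
        for col in inspector.get_columns(table, schema="finrisk")
    ]
    for table in inspector.get_table_names(schema="finrisk")
}

# 2. Stewardship comes from the identity store, so it always reflects
#    who actually holds the role today.
directory = Connection(
    Server("ldap://corp.example.com", get_info=ALL),
    user="CN=svc_metadata,OU=Service,DC=corp,DC=example,DC=com",
    password="***",
    auto_bind=True,
)

def steward_of(account_name: str):
    directory.search(
        "DC=corp,DC=example,DC=com",
        f"(sAMAccountName={account_name})",
        attributes=["displayName", "mail", "department"],
    )
    return directory.entries[0] if directory.entries else None

# 3. A business definition is only useful when it is linked to a concrete
#    data element and a living steward, not sitting in isolation.
glossary = {
    ("finrisk", "exposure", "ead_amount"): {
        "definition": "Exposure at default in the reporting currency",
        "steward": steward_of("jnovak"),
    },
}
```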
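
The change-management link can be supported in an equally lightweight way. The sketch below assumes a JIRA instance reachable over its standard REST API; the URL, service account, issue key and labelling convention are illustrative assumptions, not a prescription.

```python
# Minimal sketch: tag a change ticket with the information asset it touches,
# so the asset's documentation page can list every change that affected it.
# The JIRA URL, credentials and issue key are placeholders.

import requests

JIRA_URL = "https://jira.example.com"
JIRA_AUTH = ("svc_metadata", "***")  # service account placeholder

def link_ticket_to_asset(issue_key: str, asset_id: str) -> None:
    """Add a label that points the ticket at a documented data element."""
    label = "asset-" + asset_id.replace(".", "-")  # labels must not contain spaces
    response = requests.put(
        f"{JIRA_URL}/rest/api/2/issue/{issue_key}",
        json={"update": {"labels": [{"add": label}]}},
        auth=JIRA_AUTH,
        timeout=10,
    )
    response.raise_for_status()

# Example: the ticket that changes the EAD calculation logic gets linked
# to the documented data element it affects.
link_ticket_to_asset("DG-123", "finrisk.exposure.ead_amount")
```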

 

The key to building a successful and sustainable information architecture that meets the regulatory requirements is to make sure that the documentation

  • is linked to data models that are automatically updated and maintained.
  • is linked to the calculation process so that it is clear which data is used, in which step it is produced, and by whom.
  • is linked to change management processes via the system that's being used to actually drive the change (e.g. an issue tracker like JIRA).
  • is linked to data quality rules, checks and statuses (a small sketch of such data-quality labels follows this list).
  • can provide information relevant to everyday business tasks so users can see its value and are motivated to both use it and update it.
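
As a small illustration of the data-quality point above, the sketch below shows quality rules defined next to the documented data elements, evaluated on a data extract, and translated into labels anyone can read. The rules, thresholds and asset names are made up for the example.

```python
# Minimal sketch of data-quality labels: rules live next to the documented
# data elements, a small engine evaluates them on each load, and the result
# comes back as an easy-to-understand traffic light.

import pandas as pd

# Rules attached to documented data elements: (name, check, required score).
RULES = {
    "finrisk.exposure.ead_amount": [
        ("completeness", lambda s: s.notna().mean(), 0.99),
        ("non_negative", lambda s: (s.dropna() >= 0).mean(), 1.00),
    ],
}

def to_label(score: float, threshold: float) -> str:
    """Translate a raw score into a green/amber/red label."""
    if score >= threshold:
        return "green"
    return "amber" if score >= threshold - 0.05 else "red"

def run_checks(frame: pd.DataFrame, table: str) -> list:
    results = []
    for element, rules in RULES.items():
        _, element_table, column = element.split(".")
        if element_table != table or column not in frame.columns:
            continue
        for rule_name, check, threshold in rules:
            score = float(check(frame[column]))
            results.append({
                "element": element,
                "rule": rule_name,
                "score": round(score, 4),
                "label": to_label(score, threshold),
            })
    return results

# Example run against a made-up extract of the exposure table.
exposures = pd.DataFrame({"ead_amount": [100.0, 250.5, None, -3.0]})
for result in run_checks(exposures, "exposure"):
    print(result)
```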
 

Petr Bednařík – a manager in Deloitte Central Europe, focusing on data quality and data analytics in FSI. He has led several implementations of data quality management systems, including a system for the Solvency II internal model application of one of the two largest insurers in the Czech Republic.