
Why the funds industry needs centralised data hubs

New regulations requiring asset managers to report extra data on collective investment schemes could be costly, unless regulators create centralised data hubs, argues PwC.

Following the global financial crisis of 2008, the International Organisation of Securities Commissions (IOSCO) initiated a series of hedge fund-focused surveys to better understand and manage systemic risk in the asset management industry.

Its goal was to promote a global approach to systemic risk management and oversight by encouraging regulators to collect and share information. But it proved difficult to ensure that questions, terms and fund metrics were interpreted consistently across participants and jurisdictions, which cast doubt on the quality, accuracy and comparability of the data that managers provided.

In 2016, IOSCO asked regulators to prioritise fund data collection for open-ended regulated collective investment schemes (CIS), alternative investments and separately managed accounts. But asset managers may find meeting the regulators’ enhanced CIS data-collection requirements particularly expensive.

Firstly, the sheer number of funds, assets, transactions and investors relating to CISs represents a big increase in the anticipated volume of fund data. Secondly, many CISs are distributed across multiple jurisdictions, which adds to their compliance burdens.

Finally, enhanced fund reporting will add to the difficulties managers face as they seek to reduce CIS costs while enhancing “value-for-money” for investors.

DATA CENTRALISATION

However, there is a potential silver lining for the asset management industry: the concentration of fund data. 

Most fund data relating to regulated CISs is held by relatively few service providers and intermediaries, such as transfer agents, registrars, brokers, securities houses and fund administrators. Reputable fund administrators are particularly pivotal for data collection. As “fund accountants”, they often have a global reach, and their business depends on consistent and accurate accounting and reporting of fund data (which in turn requires them to employ sophisticated IT infrastructure). Many leading administrators are also subject to quality standards, independent controls reporting and regulation in certain jurisdictions.

Obtaining direct data feeds from administrators and other key sources would likely require establishing centralised data repositories, or hubs. This would theoretically be beneficial for several reasons. 

Firstly, data inconsistencies can be resolved via fewer parties, who would ideally have comparable levels of sophistication and IT infrastructure. Secondly, this model would be inherently scalable: once data from a service provider is processed and standardised, its entire client roster could be tapped with relatively little additional technical and administrative investment, as the sketch below illustrates.
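
To illustrate the scalability point, here is a minimal, hypothetical sketch in Python: once a single mapping from one administrator’s export format onto a common regulatory schema is agreed, it can be reused for every fund on that administrator’s books. The field names, schema and figures below are illustrative assumptions, not any regulator’s actual reporting format.

```python
from dataclasses import dataclass

# Hypothetical common schema for a regulatory fund-data record.
@dataclass
class FundRecord:
    fund_id: str
    nav_usd: float          # net asset value, converted to US dollars
    leverage_ratio: float   # gross exposure divided by NAV
    jurisdiction: str

def normalise_admin_feed(raw: dict) -> FundRecord:
    """Map one administrator's (invented) export format onto the common schema.

    A real mapping would be agreed with each administrator once, then
    reused across every CIS that administrator services.
    """
    return FundRecord(
        fund_id=raw["FundCode"],
        nav_usd=float(raw["NAV"]) * float(raw.get("FxToUsd", 1.0)),
        leverage_ratio=float(raw["GrossExposure"]) / float(raw["NAV"]),
        jurisdiction=raw["Domicile"],
    )

# One mapping, applied across the administrator's whole client roster.
raw_feed = [
    {"FundCode": "LUX-0001", "NAV": "250000000", "GrossExposure": "300000000",
     "FxToUsd": "1.08", "Domicile": "LU"},
    {"FundCode": "HK-0042", "NAV": "80000000", "GrossExposure": "96000000",
     "FxToUsd": "0.128", "Domicile": "HK"},
]
records = [normalise_admin_feed(row) for row in raw_feed]
for record in records:
    print(record)
```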

Lastly, adopting a centralised data model based on direct, high-quality data feeds could lay the groundwork for ‘big data’ assets. These may range from simple repositories to sophisticated ‘data lakes’ that combine structured and unstructured data.

Developing such assets would also align with the regulators’ push to develop dedicated statistics and data analytics resources that can analyse multi-sourced, time-sensitive data in a consistent and unbiased manner. Advances in automation and artificial intelligence could also be incorporated, eventually offering levels of supervision and risk monitoring that are unimaginable today.

CHIEF CHALLENGES

However, a centralised data model would also present its own set of challenges. Firstly, it would require substantial upfront investment to identify and coordinate data collection from managers, administrators and other intermediaries, as well as to ensure consistent, timely and reliable data feeds. Secondly, questions would arise over the ownership, sovereignty, privacy and, especially, security of the centralised data asset, particularly because it contains underlying investor information. Thirdly, many jurisdictions impose legal restrictions on sharing data across borders. Lastly, this model would still have to rely on traditional forms and questionnaires for some time.

But while some information will always require qualitative input, a centralised model could automate data collation and analysis, alleviating the cost and administrative burden. Upgrades and modifications could also be added as data needs evolve or technology advances.
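
As a purely illustrative sketch of what automated collation and analysis over such a hub might look like, the Python snippet below aggregates standardised fund records by jurisdiction and flags those above a leverage threshold. The records, fields and threshold are assumptions made for the example, not actual regulatory data or limits.

```python
from collections import defaultdict

# Hypothetical standardised records as they might sit in a central hub.
records = [
    {"fund_id": "LUX-0001", "jurisdiction": "LU", "nav_usd": 270e6, "leverage_ratio": 1.20},
    {"fund_id": "HK-0042",  "jurisdiction": "HK", "nav_usd": 10.2e6, "leverage_ratio": 1.55},
    {"fund_id": "SG-0007",  "jurisdiction": "SG", "nav_usd": 55e6,   "leverage_ratio": 0.95},
]

LEVERAGE_FLAG = 1.5  # illustrative threshold, not an actual regulatory limit

# Collate: total net asset value per jurisdiction.
nav_by_jurisdiction = defaultdict(float)
for r in records:
    nav_by_jurisdiction[r["jurisdiction"]] += r["nav_usd"]

# Analyse: flag funds above the illustrative leverage threshold.
flagged = [r["fund_id"] for r in records if r["leverage_ratio"] > LEVERAGE_FLAG]

print(dict(nav_by_jurisdiction))
print("Flagged for review:", flagged)
```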

Shifting to a centralised data model would require changing mind-sets, large upfront investments and the ever-difficult task of building consensus among a wide range of participants, stakeholders, and regulators across multiple jurisdictions. But, conducted correctly, it could save asset managers time and money. 

© Haymarket Media Limited. All rights reserved.