What your peers are asking us about investment data management
Over the course of a month, we talk to a wide variety of organisations at different stages of their business and their work with investment data. They all share common challenges around getting more value out of their data, but they also have very specific questions that are near and dear to them.
Below are the top 7 questions that we have been asked in the last month.
Question 1 - For ESG, our biggest challenge just now is to get enough information, in particular related to climate accounts enabling us to measure the CO2 intensity in our investment portfolio. How do you work with your clients to solve this?
We support our clients in managing their ESG data and in their mission to reach net-zero emissions. Typically, we see organisations utilise scores and ratings from various agencies. More advanced organisations dig deeper into the underlying ESG data points to identify outliers from the initial score-based screening process.
For those organisations specifically focused on CO2 intensity, we can support them in sourcing and managing data specifically in this realm, structuring it into the existing investment data model on the platform.
For new unexplored data sets, our clients can leverage our Data Extension Framework, allowing them to rapidly include abstract data to explore various themes and impact investment theories.
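As an illustration of the kind of metric involved, portfolio CO2 intensity is commonly measured as a weighted average carbon intensity (WACI): the sum over holdings of portfolio weight times emissions per unit of revenue. The sketch below is a minimal, generic calculation, not the platform's implementation, and all holding figures are hypothetical.

```python
# Weighted Average Carbon Intensity (WACI):
#   WACI = sum_i( weight_i * emissions_i / revenue_i )
# weight_i     = holding's share of total portfolio value
# emissions_i  = Scope 1+2 emissions in tCO2e (hypothetical figures)
# revenue_i    = revenue in millions (hypothetical figures)

def waci(holdings):
    """holdings: list of dicts with 'value', 'emissions_tco2e', 'revenue_m'."""
    total_value = sum(h["value"] for h in holdings)
    return sum(
        (h["value"] / total_value) * (h["emissions_tco2e"] / h["revenue_m"])
        for h in holdings
    )

portfolio = [
    {"value": 600_000, "emissions_tco2e": 120.0, "revenue_m": 50.0},
    {"value": 400_000, "emissions_tco2e": 900.0, "revenue_m": 200.0},
]
print(round(waci(portfolio), 2))  # tCO2e per million of revenue -> 3.24
```

The same structure extends naturally to other intensity metrics by swapping the denominator (e.g. enterprise value instead of revenue).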
Question 2 - How does your solution fit into our current architecture?
Our initial focus is to be complementary to your current systems and platforms. As a fully hosted SaaS solution, we have a very light footprint and can be rapidly deployed. This allows our clients to start using the investment data management platform very quickly, aggregating and validating data and realising immediate benefits, while also gaining insight into the broader capabilities of the system.
As such, over the longer term we support our clients by providing optionality around scaling or helping them review existing platforms.
Question 3 - How much effort is involved in running your platform?
Without exception, all our clients have implemented a data steward role. This gives them greater control of the day-to-day management of the data as well as deeper insight and understanding if issues arise.
That said, due to the automated processes in data aggregation, validation and report generation, the work involved for this type of role on a day-to-day basis is usually no more than a few hours per week.
Given the amount of time previously spent managing data manually, the task is often taken on by an existing team member, who is reallocated into the data steward role.
Question 4 - How do you manage large data volumes, ensuring the environment is always performant?
We work with clients of all sizes, and the larger ones can generate millions of rows of data every month. Storage is not the challenge; the challenge is ensuring that the end user continues to have a good experience as the environment grows.
There are multiple ways we manage this for our clients.
First, we incorporate strategies on the platform front end that allow for faster performance. Since users can access essentially any data point via the platform, we strive to give them a great user experience, not only through a user-friendly GUI but also through consistently high performance.
Second, as the platform is hosted in the cloud, we are able to leverage computing power at scale. If our clients need more or less capacity, we can simply dial it up or down as needed.
Third, clients with very large data sets have the option to migrate some data into their own hosted data warehouse, or to move data across to the high-performance Microsoft Synapse Analytics engine.
Making the system experience excellent for the end user is a constant journey: it means combining the latest available technologies with the skills of our development team and the needs of our clients.
Question 5 - Are you utilising AI/Machine Learning technologies in your platform?
We strive to provide our clients with tools that make the more technical aspects of a data platform more intuitive. These technical aspects most often sit in the area of ETL.
As such, we have made great progress in deploying machine learning strategies to help our users automate the configuration of integration files.
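To make the idea concrete, automating integration-file configuration typically means suggesting a mapping from incoming file columns to the platform's data model. The sketch below is illustrative only: it uses stdlib fuzzy string matching as a simple stand-in for a trained model, and all field and column names are hypothetical.

```python
# Illustrative sketch: suggest a mapping from incoming file columns to
# target data-model fields. difflib stands in for an ML model here;
# TARGET_FIELDS and the sample column names are hypothetical.
from difflib import get_close_matches

TARGET_FIELDS = ["portfolio_id", "security_id", "trade_date", "quantity", "market_value"]

def suggest_mapping(source_columns):
    """Return {source column: suggested target field or None}."""
    mapping = {}
    for col in source_columns:
        normalised = col.lower().replace(" ", "_")
        hit = get_close_matches(normalised, TARGET_FIELDS, n=1, cutoff=0.6)
        mapping[col] = hit[0] if hit else None  # None -> flag for manual review
    return mapping

print(suggest_mapping(["Portfolio ID", "SecurityId", "TradeDate", "Qty", "Mkt Value"]))
```

Note that heavily abbreviated columns (such as "Qty") can fall below the similarity cutoff and come back as None; in practice those are exactly the cases a data steward would review and confirm by hand.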
Question 6 - How quickly can you respond to market/ client/ regulatory changes and requirements?
We have monthly software releases with new features, enhancements, and fixes. Typically, if our clients have an urgent or critical need, they will see the development done within a couple of months.
This is in addition to our clients’ usage of the Data Extension Framework, which allows them to expand on the data model instantly.
For planning purposes, we arrange quarterly sessions with our clients where we walk through our roadmap, and they can actively vote for the items that their business regards as high priority. The items on the roadmap are a mix of client requirements and ICS research into market and industry trends.
Question 7 - What would an implementation look like and how do I leverage the benefits of your previous implementations?
We have a standard implementation model which focusses on:
Data structures and classifications
Custodian / Portfolio / Accounting / Holding Exposure / Performance data integration configuration utilising our configuration library
Market/ESG data interfaces setup
Risk Analytics integration
Training & handover
We have a library of standard system integrations, rules and reports which is available for use across our client base.
In addition, we have strong relationships with all major custodians and data vendors to help smooth over any client variances or inconsistencies.
Typical timelines for this process are 3 - 6 months.
Using this approach, we establish a baseline configuration of key data sets upon which you can continue to build, while gaining immediate value from the structured and consolidated data sets.