| Name | Nordea Asset Management |
| Signatory type | Asset manager |
| Region of operation | Global |

COVERED IN THIS CASE STUDY

| Environmental objective | Mitigation and adaptation |
Nordea has a broad range of products with strong ESG overlays. Implementing the taxonomy is a very strong expectation, both organisationally and from our client base. We see a clear necessity to introduce objective criteria to differentiate genuinely sustainable investment products from those merely marketed as such. We also believe that proper implementation of the taxonomy will help direct capital into more sustainable business models. Indeed, a standard based on objective criteria is a necessity for the broad and maturing ESG field.
Any other aspects you would like to mention?
We drew on the experience of a team member who contributed towards a taxonomy analysis of the revenues of 70 Nordic companies when he was part of the Nordea Markets team.
Principles, criteria, thresholds
We looked for a suitable data provider solution because in our experience a manual implementation of the taxonomy largely relies on best estimates. This is particularly the case for Do No Significant Harm (DNSH) and social safeguards assessments.
We used a five-step process (part of the GS Sustain Taxonomy mapping tool) to assess compliance. We also based discussions with providers on this model (see graph). Currently, only Bloomberg covers steps 1-2 of this process, while ISS and MSCI are at an earlier stage of development.
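At a high level, a five-step alignment assessment of this kind can be sketched as a screening pipeline. The sketch below is a minimal illustration only: the step names and data fields are assumptions based on the EU taxonomy's general structure (eligibility, technical screening criteria, DNSH, social safeguards), not the actual implementation of the GS Sustain tool.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """One business activity of a company, with its revenue share (0-1).
    The boolean flags are hypothetical stand-ins for provider data."""
    name: str
    revenue_share: float
    eligible: bool            # step 1: activity covered by the taxonomy?
    meets_thresholds: bool    # step 2: technical screening criteria met?
    passes_dnsh: bool         # Do No Significant Harm checks
    passes_safeguards: bool   # minimum social safeguards

def eligible_revenue_share(activities: list[Activity]) -> float:
    """Revenue share from taxonomy-eligible activities (step 1 only),
    roughly what current provider data (steps 1-2) can support."""
    return sum(a.revenue_share for a in activities if a.eligible)

def aligned_revenue_share(activities: list[Activity]) -> float:
    """Revenue share from activities passing every screening step."""
    return sum(
        a.revenue_share
        for a in activities
        if a.eligible and a.meets_thresholds
        and a.passes_dnsh and a.passes_safeguards
    )
```

The point of separating the two functions is the one made in this case study: eligibility can be computed from data available today, while full alignment additionally needs DNSH and social-safeguards data that is largely missing.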
Do no significant harm assessment
Both DNSH and the social safeguards assessments mainly depend on data that is generally not yet available. It is possible to adopt best estimates, or perhaps resort to controversy measurement tools as a proxy. However, comparable and reliable company disclosed data is currently scarce.
Social safeguards assessment
Please see the description above.
We met with MSCI, ISS and Bloomberg regarding their solutions. We believe we are ahead of the providers in these discussions, as neither MSCI nor ISS has a concrete product or proof-of-concept to test at this point. Only Bloomberg offers a solution that is currently accessible on their terminal and covers the first two steps of the five-step taxonomy assessment process.
We tested the Bloomberg solution on a test dataset of 443 issuers from our investable universe; 26.1% of the universe was taxonomy eligible, with individual companies ranging from 0% to 100% eligible. This is slightly below the benchmark (e.g. MSCI ACWI) but in line with the taxonomy eligibility suggested by portfolio analysis of similar mock datasets.
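A universe-level eligibility figure of this kind can be produced by aggregating per-issuer eligible revenue shares. The snippet below is a hypothetical sketch with invented numbers; a real analysis would weight issuers by portfolio or index weight rather than equally.

```python
def universe_eligibility(issuer_eligible_pct: list[float]) -> float:
    """Equal-weighted average taxonomy-eligible revenue share across
    issuers. Each entry is one issuer's eligible share, 0-100."""
    if not issuer_eligible_pct:
        raise ValueError("empty universe")
    return sum(issuer_eligible_pct) / len(issuer_eligible_pct)

# Hypothetical mini-universe: issuers range from fully eligible to 0%
sample = [100.0, 45.0, 0.0, 12.5]
print(round(universe_eligibility(sample), 1))  # prints 39.4
```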
The initial discussions we held with the providers were positive. We have a good idea of what our requirements are and though it is difficult to progress at this point, we are ready to hit the ground running when the providers can deliver data to us. We will continue to track their progress and return to discussions after the summer. We have expressed interest with all providers to act as beta-testers for any functionalities they provide.
Exhibit 2: Where does the GS SUSTAIN Taxonomy mapping tool fit into the process?
Process for determining the alignment of a company’s activities with the EU taxonomy
Source: European Commission, Goldman Sachs Global Investment Research
Challenges and solutions
| # | Challenge | Solution |
| 1 | Data providers cover, at best, 1-2 steps of the five-step process needed to determine taxonomy alignment | Worked closely with data providers to clarify and specify our needs |
| 2 | Manual screening was time-intensive, which limits the scope of its application | We used manual screening on small company samples to understand the challenges and inform our discussions with providers |
| 3 | Data for steps 3-5 is not available or does not exist | We engaged with companies to encourage disclosure |
Data providers play a critical role in facilitating implementation. However, not all of them plan to cover all five steps of the assessment process. We believe the investor community should encourage them to cover all the steps and provide detailed feedback on the challenges faced during implementation or manual sample screening. We also need to be transparent about where data is not available (steps 3-5) and the assumptions on which we should base our best estimates, if we choose that route.