By Ravit Dotan PhD, independent AI Ethics Consultant & Researcher, and Daram Pandian, Associate, Private Equity & Venture Capital, PRI


The artificial intelligence (AI) industry is booming. A 2021 PwC survey of 1,000 companies found that only 5% do not use AI – down from 47% one year earlier.

This trend has also manifested itself in venture capital, with investors directing some US$75bn towards AI in 2020, according to the OECD. Eight years earlier, the figure stood at US$3bn.

Responsible investment in venture capital is growing, and when it comes to investments in AI, the case for considering ESG factors and potential real-world outcomes is particularly strong.

Like many emerging and fast-moving technologies, AI systems present significant ESG risks and opportunities, not only for the companies developing, selling or using them, but also for the people, society and environment they interact with.

Venture capital GPs can help establish sustainable structures, processes, and values at portfolio companies before their practices become ingrained and difficult to change as these companies scale.

We recently led a workshop on this topic for the PRI’s Venture Capital Network members – and discuss the key themes explored below.

What are the main risks associated with AI systems?

Recent examples of significant ESG issues associated with AI systems include:

Failing to consider these and other AI ethics issues can create material risks for GPs related to reputation, compliance and value creation, although these risks vary depending, for example, on whether an investee is creating the AI system or simply integrating it into its operations.

Determining the materiality of AI ethics issues is something that venture capital GPs are grappling with, according to the workshop discussion.

For example, the risks associated with a company using an AI system to optimise a production process would differ from those that might arise where consumer or personal data is collected.

Problems can also arise if an AI system is used for unintended purposes – such as facial recognition technology being misused by governmental authorities (Medium).

Evaluating AI ethics risks

GPs can take several approaches to evaluating the AI ethics of a potential venture capital holding:

  • By application type: assigning broad risk levels based on legislation such as the EU Artificial Intelligence Act (see also Existing and proposed AI laws, below), which divides AI applications into risk categories, from unacceptable to low risk.
  • Third-party evaluation: using a third-party service with technical, ethical, and legal expertise (e.g. AI ethics auditors, ESG service providers that specialise in AI) to assess an AI system’s risks in detail, especially in later-stage start-ups that have mature products.
  • The start-up’s AI responsibility: assessing how a start-up uses AI ethics in its own workflow and product development – start-ups that develop and deploy AI responsibly are more likely to detect and fix AI ethics problems.

This can be done during screening and due diligence – for example, through conversations about AI ethics with start-ups or using third parties to evaluate the technology in question.
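As a loose illustration of the first approach, a deal team could keep a simple lookup of application types and the broad risk tier each one falls into, flagging anything outside the list for manual review. The sketch below is purely illustrative: the application labels and tier assignments are hypothetical placeholders inspired by the EU Artificial Intelligence Act's risk categories, not the Act's actual legal classifications, and any real mapping would need legal and technical review.

```python
# Illustrative sketch only: a simplified screening aid that tags a prospective
# investment's AI application with a broad risk tier. The labels and tier
# assignments below are hypothetical placeholders, not the EU AI Act's legal text.

RISK_TIERS = {
    "social_scoring": "unacceptable",
    "facial_recognition_public_spaces": "unacceptable",
    "recruitment_screening": "high",
    "credit_scoring": "high",
    "chatbot_customer_service": "limited",
    "spam_filtering": "minimal",
}


def screen_application(application: str) -> str:
    """Return the illustrative risk tier for an AI application,
    defaulting to manual review when the application is not listed."""
    return RISK_TIERS.get(application, "needs manual review")


if __name__ == "__main__":
    for app in ["recruitment_screening", "spam_filtering", "generative_art"]:
        print(f"{app}: {screen_application(app)}")
```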

GPs can include AI ethics in their investment memos and scorecards or include a clause or side agreement in a term sheet to ensure expectations are set out clearly. Depending on the scope of influence GPs have, they can also push for AI ethics metrics and reporting to be on the investee company board’s agenda.

In our workshop, participants emphasised the importance of providing education and training on AI ethics to portfolio companies’ founders and GPs’ deal teams, through seminars or other resources. 

Existing and proposed AI laws

AI-specific regulation has already passed in various jurisdictions, for example:

  • China recently passed a law that regulates AI algorithmic recommendation services.
  • The US State of Maine restricts the use of facial recognition by government authorities.
  • New York City prohibits employers from using AI tools for recruiting, hiring, or promotion unless the tools have undergone a bias audit.

Other AI-specific laws are on the horizon. For example:

  • The EU’s Artificial Intelligence Act is the most prominent effort to regulate AI, and it is expected to pass into law. It divides AI applications into risk categories and defines rules for each one.
  • The most prominent federal regulation effort in the US is the Algorithmic Accountability Act, which would require companies to conduct impact assessments on the automated systems they sell and use.

Readers wanting more information on AI regulation efforts can refer to this resource list.

A nascent topic with growing relevance

Anecdotal conversations with venture capital GPs indicate that their approaches vary – some have developed structured processes with specific questions and risk areas to assess, while others are aware of AI ethics as a topic but may not apply these considerations systematically.

The focus on AI ethics is most prevalent among venture capital GPs that target the technology sector or, within it, AI specifically. But that is likely to change, given the growth of AI systems across sectors and industries beyond technology, and the fact that several jurisdictions have passed, or are developing, laws to regulate the development and deployment of AI systems.

Our workshop highlighted another potential area of tension: a prospective investment may present AI ethics risks that are not financially material but could lead to negative outcomes. Consider, for example, a social media company whose product is driven by algorithms that could foster user addiction and harm mental health. Some GPs may feel they cannot take AI ethics into account under these circumstances because of their perceptions of fiduciary duty.[1]

One way GPs could address this is to make clear to prospective and existing LPs, particularly when fundraising, the extent to which they will consider AI ethics when deciding whether to invest.

Having such conversations with LPs would not be out of place. Asset owners increasingly expect their investment managers to consider ESG factors and want to understand the positive and negative real-world outcomes that their capital is contributing towards.

Indeed, client demand is one of the main drivers of responsible investment in the venture capital industry.

This – alongside the clear rationale to assess the ESG risks and opportunities that many investees present, particularly in emerging sectors such as AI – will continue to shape the development of more formal, standardised practices.

The PRI is supporting this development in several ways, including bringing together signatories to discuss relevant due diligence topics. We have also produced case studies that highlight emerging best practice among investment managers and asset owners, and published a paper, Starting up: Responsible investment in venture capital, to assess the landscape to date.

Explore these resources


This blog is written by PRI staff members and guest contributors. Our goal is to contribute to the broader debate around topical issues and to help showcase some of our research and other work that we undertake in support of our signatories. Please note that although you can expect to find some posts here that broadly accord with the PRI’s official views, the blog authors write in their individual capacity and there is no “house view”. Nor do the views and opinions expressed on this blog constitute financial or other professional advice. If you have any questions, please contact us at [email protected].