Artificial Intelligence and Machine Learning in Financial Services: What do the Regulators say?
What have the regulators asked?
Artificial intelligence (AI) and machine learning (ML) are increasingly used in financial services, powered by growing data availability and computing power. Their use by market intermediaries and asset managers has been altering firms' business models. For example, many firms have been using AI and ML to support their advisory and support services, risk management, client identification and monitoring, selection of trading algorithms, and portfolio management. In the process, the firms' own risk profiles are likely to have been altered, and the aggregate risk environment around them is changing as well: are those external, industry-wide changes being appropriately assessed?
The use of these technologies may create significant efficiencies and benefits for firms and investors, including faster execution and lower-cost investment services. At the same time, their use may also create or amplify certain risks, which could potentially impair the efficiency of financial markets and harm clients.
IOSCO identified the use of AI and ML by market intermediaries and asset managers as a key priority. In April 2019, the IOSCO Board mandated the organization's Standing Committee 3 on "Regulation of Market Intermediaries" and Standing Committee 5 on "Investment Management" to examine current and potential best practices for public supervision of the implementation and effects of these technologies. The Committees were asked to propose guidance that member jurisdictions may consider adopting to address the business conduct risks.
Potential risks identified in the Consultation Report
IOSCO's announcements are followed by financial services compliance staff around the world. To identify how AI and ML are used and what the associated risks might be, IOSCO surveyed market intermediaries, held roundtable discussions with them, and conducted outreach to asset managers.
The following areas were highlighted in the Consultation Report released in June 2020, where potential risks and harms may arise in relation to the development, testing, and deployment of these technologies:
- Governance and oversight;
- Algorithm development, testing, and ongoing monitoring;
- Data quality and bias;
- Transparency and explainability;
- Outsourcing; and
- Ethical concerns.
Based on the responses received to the consultation, the final report provides guidance to assist IOSCO members in supervising market intermediaries and asset managers that use AI and ML.
The guidance consists of six measures that reflect expected standards of conduct by the private sector parties concerned. The word "guidance" is especially important here because each jurisdiction must adapt the global principles to its own market, legal, and regulatory circumstances. IOSCO members and firms should also consider the proportionality of any response when implementing the measures.
The use of AI and ML will likely increase as the technology advances, and the regulatory framework will plausibly need to evolve concurrently to address the associated emerging risks. IOSCO cautioned that the report, including its definitions and guidance, may be reviewed and updated in the future. In their adaptation of the global guidelines, national authorities are encouraged to go further as may be needed.
Measure 1: Regulators should consider requiring firms to have designated senior management responsible for the oversight of the development, testing, deployment, monitoring, and controls of AI and ML. This includes a documented internal governance framework with clear lines of accountability. An appropriately senior individual or group with the relevant skill set and knowledge should be charged with signing off on initial deployment and substantial updates of these technologies.
Measure 2: Regulators should require firms to test and monitor the algorithms they use, validating the results of the AI and ML techniques deployed on a continuous basis. Prior to deployment, testing should be conducted in an environment segregated from the live environment, to ensure that AI and ML:
(a) behave as expected in stressed and unstressed market conditions; and
(b) operate in a way that complies with regulatory obligations.
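As an illustration only, the pre-deployment testing that Measure 2 describes can be sketched as a segregated validation harness that runs a model against both normal and stressed historical scenarios before it touches live markets. The function names, the stand-in model, and the drawdown threshold below are hypothetical assumptions for the sketch, not anything IOSCO prescribes.

```python
# Hypothetical pre-deployment check in the spirit of Measure 2.
# All names and thresholds are illustrative, not regulatory requirements.

def max_drawdown(returns):
    """Largest peak-to-trough loss of cumulative returns."""
    total, peak, worst = 0.0, 0.0, 0.0
    for r in returns:
        total += r
        peak = max(peak, total)
        worst = min(worst, total - peak)
    return worst

def passes_pre_deployment_tests(model, normal_data, stressed_data,
                                drawdown_limit=-0.10):
    """Run the model in a segregated environment on both unstressed and
    stressed scenarios; fail if risk tolerance is breached in either."""
    for scenario in (normal_data, stressed_data):
        returns = [model(x) for x in scenario]
        if max_drawdown(returns) < drawdown_limit:
            return False  # behaves outside tolerance under this scenario
    return True

# Trivial stand-in "model" and scenario data for demonstration:
model = lambda x: x * 0.5
normal = [0.01, -0.005, 0.02]
stressed = [-0.30, -0.20, 0.10]
print(passes_pre_deployment_tests(model, normal, stressed))  # → False
```

A real validation suite would of course cover regulatory-compliance checks as well as risk behavior, per point (b) of the measure; the sketch shows only the stressed/unstressed structure.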
Measure 3: Regulators should require firms to have adequate skills, expertise and experience to develop, test, deploy, monitor, and oversee the controls over the AI and ML that the firm establishes. Compliance and risk management functions should be able to understand and challenge the algorithms that are produced and conduct due diligence on any third-party provider, including the level of knowledge, expertise, and experience that the provider has demonstrated specifically in the financial services context.
Measure 4: Regulators should require firms to understand their reliance on third-party IT providers of all kinds and manage these relationships actively. This would include monitoring their performance and conducting oversight. To ensure adequate accountability, firms should have a clear service level agreement and contract in place clarifying the scope and responsibility of the outsourced functions and their provider. Clear performance indicators should be stipulated, as well as the rights and remedies when performance is poor.
Measure 5: Regulators should consider what level of disclosure of the use of AI and ML is required by firms, including:
- requiring firms to disclose meaningful information to customers and clients on how their use of these technologies affects client outcomes.
- considering what type of information they may require from firms using AI and ML to ensure they can have appropriate and effective oversight of those firms.
Measure 6: Regulators should consider requiring firms to have appropriate controls in place to ensure that the data underlying the performance of AI and ML are of sufficient quality to prevent biases and sufficiently broad for a well-founded application of these tools.
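The data controls Measure 6 points to can be illustrated with a minimal quality gate run before training: checking that records are sufficiently complete and that no group in the data is so underrepresented that the model would be poorly founded for it. The field names and thresholds below are assumptions made for the sketch, not standards drawn from the report.

```python
# Illustrative data-quality gate in the spirit of Measure 6.
# Field names ("region") and thresholds are hypothetical examples.

def data_quality_report(records, group_field, max_missing=0.05,
                        min_group_share=0.10):
    """Return a list of issues found; an empty list means checks pass."""
    n = len(records)
    issues = []

    # Completeness: flag datasets with too many incomplete records.
    missing = sum(1 for r in records if None in r.values())
    if missing / n > max_missing:
        issues.append("too many incomplete records")

    # Breadth: flag groups too small for a well-founded application.
    groups = {}
    for r in records:
        g = r[group_field]
        groups[g] = groups.get(g, 0) + 1
    for g, count in sorted(groups.items()):
        if count / n < min_group_share:
            issues.append(f"group '{g}' underrepresented")

    return issues

records = [
    {"income": 50, "region": "A"},
    {"income": 60, "region": "A"},
    {"income": None, "region": "B"},
    {"income": 55, "region": "B"},
]
print(data_quality_report(records, "region"))
# → ['too many incomplete records']
```

Production controls would go much further (drift monitoring, proxy-variable checks, documented lineage), but even this simple gate shows how "sufficient quality" and "sufficiently broad" can be turned into testable conditions.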
In addition to these IOSCO Measures, G20/OECD Principles for Corporate Governance call on boards of directors to oversee enterprise risk management, and to understand, review, and sign off on those proposals as well as their ongoing internal monitoring.
Conclusions for WAIFC
IOSCO member supervisory agencies come from across the world. The financial services they oversee are as disparate as the WAIFC member centers. For this reason, IOSCO works at the level of principles, best practices, and recommendations; it cannot write a rule for identical transcription into law and expect similar enforcement from jurisdiction to jurisdiction.
There is another side to these transformational events: the authorities must implement RegTech to good effect in their own work. Supervisory agencies deploying comparable AI and ML technologies for their oversight must be just as aware of the risks and benefits as the private sector. This will lead to interesting conversations jurisdiction by jurisdiction.
G20/OECD Principles of Corporate Governance, available at: https://www.oecd.org/corporate/principles-corporate-governance/. The text is available in 13 languages.
 The use of artificial intelligence and machine learning by market intermediaries and asset managers, IOSCO Board Consultation Report, June 2020, available at: https://www.iosco.org/library/pubdocs/pdf/IOSCOPD658.pdf
 Board Priorities - IOSCO work program for 2019, March 25, 2019, available at: https://www.iosco.org/library/pubdocs/pdf/IOSCOPD625.pdf