Speech

Remarks at Program on Corporate Compliance and Enforcement Spring Conference 2024

New York City, NY

Professor Arlen, thank you for the introduction, and thank you to PCCE for inviting me to participate in today’s 10th anniversary program.

As is customary, my remarks this afternoon are in my official capacity as SEC Enforcement Director, and do not necessarily reflect the views of the Commission, the Commissioners, or other members of the staff.

One of the hallmarks of PCCE is that it brings together academics, regulators, and industry professionals, and allows us to have candid conversations about corporate misconduct and the ways in which we can all work together to improve compliance.

And that’s precisely what I’d like to do this afternoon. As we gather to mark ten years of dialogue, I think it’s important to both summarize what we’ve learned during this period and see how we can leverage those lessons to effect better compliance in the future.

If the last decade has shown us anything in this regard, it’s that many of the major financial scandals of that period have had something fundamental in common: when you combine charismatic leaders with strong investor interest, noncompliance, weak controls, or under-empowered gatekeepers, you create a perfect storm of risk and the potential for great investor harm.

Most recently, we’ve seen these storms ravage investors in the crypto markets with the spectacular collapses of FTX and Terraform, among others.

With respect to the former, it’s been referred to as one of the largest financial frauds in U.S. history, and its principal architect was recently sentenced to 25 years in federal prison.[1] Tellingly, John Ray, who was appointed FTX CEO as part of its bankruptcy proceedings, observed that “in his 40 years of legal and restructuring experience,” which included the Enron bankruptcy, he had never seen “such a complete failure of corporate controls and such a complete absence of trustworthy financial information as occurred here.”[2]

With respect to the latter, just over a week ago, the SEC prevailed on its fraud claims against Terraform and its founder Do Kwon.[3] A jury in the Southern District of New York found the defendants liable for misleading investors in multiple ways before the collapse of their stablecoin, along with the rest of their ecosystem, in 2022. That collapse wiped out $40 billion from the crypto markets, decimated countless investors, and precipitated a chain of bankruptcies and events that ultimately contributed to the Crypto Winter of 2022.[4]

Now, I recently spoke at length about our efforts to address this decade of noncompliance and investor harm in the crypto markets, and won’t go further into other examples of similar misconduct today. I’ll simply note they exist.[5]

But it’s not just in the crypto markets. We experienced perfect storms of risk during the SPAC boom with frauds like the one that the SEC alleges was perpetrated by Trevor Milton and the EV company Nikola.[6] And we’ve seen them in any number of recent alleged frauds by founders, including the collapse of Theranos and the prosecutions of its founder Elizabeth Holmes and former president Sunny Balwani.[7]

I am certain that those who have studied these and other recent financial frauds at greater length than my remarks allow will point to other contributing or distinguishing factors. But at minimum, one throughline is that elevated investor interest in rapidly developing technology or offerings often leads to elevated investor risk.

Put another way: any time you have individuals or corporations trying to capitalize on, and profit from, FOMO, or the “fear of missing out,” around a new technology or offering, it should raise red flags for regulators and compliance professionals alike. After all, effective enforcement and governance require anticipating risk areas.

So looking ahead, where do we see potential risk?

Every day, we see revolutionary technological advancements in artificial intelligence, or AI, that promise to transform nearly every aspect of our lives, including our financial decision making.

Every day, we see individuals, corporations, analysts, and others touting these developments.

Every day, we see investors looking to incorporate AI into their investment strategies, or using it to analyze and recommend investments.

Every day, we see companies attempting not only to develop AI capabilities, or harness them to improve their productivity and growth, but also to attract and retain investors.

And in the end, every day, we see investors making larger and larger investments in these issuers or financial products.[8]

Indeed, according to a recent survey, 61% of investors believe that faster adoption of AI is very, or extremely, important to a company’s value.[9] That jumps to 85% when including investors who believe it is moderately important.

As a result, it may be an understatement to say that there is not only immense investor interest, but also immense market interest, in AI.

While perhaps not quite yet a perfect storm, there’s certainly one brewing around AI. And it is incumbent on each of us to make sure it does not come to pass and that investors are not harmed by noncompliance with the securities laws when it comes to this new technology.

How then should we be preparing for, and addressing, these potential risks?

Our experience with ESG investing provides an instructive starting point. With ESG, we saw large swaths of investors become interested in firms and companies that incorporated ESG into their investment strategies and business plans.

That meant that companies had meaningful financial incentives to exaggerate, or make misleading statements about, their supposed ESG activities or products. After all, saying that you have an investment strategy or business plan that incorporates ESG considerations is pretty easy; creating one and sticking to it is another matter. And over the last several years, we have brought a number of enforcement actions addressing this type of misconduct.

For example, the SEC charged a Deutsche Bank subsidiary with making materially misleading statements about its supposed ESG-related investment products.[10] According to the SEC’s order in that case, the firm marketed itself as a leader in ESG and claimed that it adhered to specific policies for integrating ESG considerations into its investments. It even advertised that ESG was in its “DNA.” But as it turned out, the firm did not ensure that its investment professionals actually followed all aspects of the ESG investment processes that it marketed. In the end, it paid a $19 million civil penalty to settle those claims.

Not surprisingly, we are now seeing the same thing with AI. Just last month, the SEC announced settled charges against two registered investment advisers for making false and misleading statements about their purported use of AI.

One firm claimed it used AI to analyze client data and predict which companies were going to “make it big.”[11] The other claimed it provided “expert AI-driven forecasts.”[12] Both statements were false and misleading: neither firm had the AI capabilities it claimed.

Sound familiar?

I hope these actions put the investment industry on notice. If you are rushing to make claims about using AI in your investment processes to capitalize on growing investor interest, stop. Take a step back, and ask yourselves: do these representations accurately reflect what we are doing or are they simply aspirational?

If it’s the latter, your actions may constitute the type of “AI-washing” that violates the federal securities laws.

The bottom line: you must ensure that your representations regarding your use of AI are not materially false or misleading.

Now, I know that many of you here are in-house counsel and compliance personnel at public companies. AI-washing is not just an issue for investment firms; it should concern each of you as well.

There are any number of reasons that a public company may disclose AI-related information. It may be in the business of developing AI applications. It may use AI capabilities in its own operations to increase efficiency and value for shareholders. Or, it may discuss security risks or competitive risks from AI.

But irrespective of the context, if you’re speaking on AI, you too must ensure that you do so in a manner that is not materially false or misleading. This becomes ever more significant as AI-related disclosures by SEC registrants are increasing.[13]

So how do you help ensure that your companies and clients comply with securities law requirements as they relate to AI?

Here, I think we can draw from general principles of what I’ve called “proactive compliance.”[14] As I’ve said before, proactive compliance requires three things: education, engagement, and execution.

First, educate yourselves about emerging and heightened AI risk areas as they relate to your businesses. That means reading the AI-related enforcement actions I mentioned. It means reviewing any future enforcement actions that may follow in this space.

It also means reviewing speeches like Chair Gensler’s recent speech on AI, which highlighted multiple other ways in which a firm’s AI use may heighten risk or implicate the federal securities laws.[15] He specifically discussed the conflicts of interest raised by AI for advisers, the problems presented by AI hallucinations, and the threat that AI could pose to the stability of our markets.

And it means staying abreast of how potential AI-related issues are actually impacting companies in the real world. Take, for example, the recent reporting around an airline’s chatbot offering a customer incorrect information about its refund policy.[16]

Second, take what you’ve learned from our orders and public pronouncements, and your own research, and engage with personnel inside your company’s different business units to learn how AI intersects with their activities, strategies, risks, financial incentives, and so on.

Ask: what public statements are we making about our incorporation of AI into our business operations? Are they accurate, or are they aspirational? Does AI present a material risk to our business operations in some way?

Now is the time to engage.

And third, execute. Does your use of AI require updating policies and procedures and internal controls? If so, are those policies and procedures bespoke to your company? And here, let me be clear: it’s not enough to go to ChatGPT or a similar tool and ask it to produce an AI policy for you.

And then, have you taken the steps necessary to implement those policies and procedures? As we have seen time and again, adoption is only part of the battle; effective execution is equally important and that’s where many firms fall short.[17]

Let me finish by touching on a topic that comes up often in these conversations: individual liability. As I mentioned, a public company may find it necessary to disclose information regarding security risks from AI. In fact, by some measures, most public company disclosures about AI relate to security and competitive risks from AI.[18] Individual liability for disclosure failures related to AI as a security threat may therefore be of particular interest to you.

Here, I would look to our approach to cybersecurity disclosure failures generally: we look at what a person actually knew or should have known; what the person actually did or did not do; and how that measures up to the standards of our statutes, rules, and regulations. And as I’ve said before in the context of CCO and CISO liability, and I will say it again in the context of AI-related risk disclosures: folks who operate in good faith and take reasonable steps are unlikely to hear from us.[19]

***

In the end, while AI may transform many aspects of our lives, the SEC’s experience with other developing technologies and investment products illustrates not only the elevated investor risk posed by false or misleading claims about its use, but also the steps market participants can take to help ensure that their AI-related disclosures comply with the securities laws, and how we might approach liability in this area.

I look forward to working with you to further our shared goal of protecting investors, while harnessing the benefits of AI and avoiding the next perfect storm of risk.

Thank you again to PCCE for the invitation and congratulations on 10 years of these types of candid conversations.

I now look forward to discussing a few additional topics with Professor Arlen and with each of you.


[1] See Press Release, DOJ, “Statement of U.S. Attorney Damian Williams on the Conviction of Samuel Bankman-Fried” (Nov. 2, 2023) (“Sam Bankman-Fried perpetrated one of the biggest financial frauds in American history….”), available at https://www.justice.gov/usao-sdny/pr/statement-us-attorney-damian-williams-conviction-samuel-bankman-fried; Press Release, DOJ, “Samuel Bankman-Fried Sentenced to 25 Years for His Orchestration of Multiple Fraudulent Schemes” (March 28, 2024), available at https://www.justice.gov/opa/pr/samuel-bankman-fried-sentenced-25-years-his-orchestration-multiple-fraudulent-schemes. Three of his co-defendants have pleaded guilty and await sentencing. They, too, were charged by the SEC and, unlike Mr. Bankman-Fried, have admitted the allegations in their respective complaints. See SEC v. Caroline Ellison, No. 22-cv-10794-PKC (S.D.N.Y. Dec. 23, 2022), Dkt. No. 15 [Judgment and Consent]; SEC v. Zixiao (“Gary”) Wang, No. 22-cv-10794-PKC (S.D.N.Y. Dec. 23, 2022), Dkt. No. 16 [Judgment and Consent]; SEC v. Nishad Singh, No. 23-cv-1691-PKC (S.D.N.Y. March 3, 2023), Dkt. No. 7 [Judgment and Consent].

[2] In re: FTX Trading Ltd, No. 22-11068-JTD (Bankr. D. Del., Nov. 17, 2022), Dkt. No. 24 [Decl. of John J. Ray III], ¶¶ 4-5.

[3] See Statement, SEC, “Statement on Jury’s Verdict in Trial of Terraform Labs PTE Ltd. and Do Kwon” (April 5, 2024), available at https://www.sec.gov/news/statement/grewal-statement-040424; see also SEC v. Terraform Labs PTE Ltd., No. 23-cv-1346 (S.D.N.Y., February 16, 2023), Dkt. No. 1 [Compl.].

[4] See, e.g., Reuters, Crypto’s string of bankruptcies, (Jan. 20, 2023), available at www.reuters.com/business/finance/cryptos-string-bankruptcies-2023-01-20/.

[5] See, e.g., Gurbir S. Grewal, Dir., Div. of Enforcement, U.S. Sec. & Exch. Comm’n, Remarks at SEC Speaks 2024 (April 3, 2024), available at www.sec.gov/news/speech/grewal-remarks-nyc-bar-association-compliance-institute-102423.

[6] See Press Release, SEC, “Nikola Corporation to Pay $125 Million to Resolve Fraud Charges” (Dec. 21, 2021), available at https://www.sec.gov/news/press-release/2021-267; Press Release, SEC, “SEC Charges Founder of Nikola Corp. With Fraud” (July 29, 2021), available at https://www.sec.gov/news/press-release/2021-141; see also Press Release, SEC, “SEC Charges Digital World SPAC for Material Misrepresentations to Investors” (July 20, 2023), available at https://www.sec.gov/news/press-release/2023-135; Press Release, SEC, “SEC Charges SPAC, Sponsor, Merger Target, and CEOs for Misleading Disclosures Ahead of Proposed Business Combination” (July 13, 2021), available at https://www.sec.gov/news/press-release/2021-124.

[7] See Press Release, SEC, “Theranos, CEO Holmes, and Former President Balwani Charged with Massive Fraud” (March 14, 2018), available at https://www.sec.gov/news/press-release/2018-41; see also Press Release, SEC, “SEC Charges Former CEO of Medical Device Startup Stimwave with $41 Million Fraud” (Dec. 19, 2023), available at https://www.sec.gov/news/press-release/2023-255; Press Release, SEC, “SEC Charges Ozy Media and its CEO Carlos Watson with Widespread Scheme to Defraud Investors” (Feb. 23, 2023), available at https://www.sec.gov/news/press-release/2023-37.

[8] See, e.g., NVIDIA stock price and trade volume over time, available at https://www.nasdaq.com/market-activity/stocks/nvda/advanced-charting; Reuters, Retail investors flock to small-cap AI firms as Big Tech battles for share (Feb. 7, 2023), available at https://www.reuters.com/markets/us/retail-investors-flock-small-cap-ai-firms-big-tech-battles-share-2023-02-07/.

[9] See, e.g., PwC, “PwC’s Global Investor Survey 2023” (Nov. 15, 2023) (“Accelerated adoption of AI is seen as critical to the value equation, with 61% of investors saying faster adoption is very or extremely important. When responses indicating ‘moderately important’ are included, the proportion jumps to 85%.”), available at https://www.pwc.com/gx/en/issues/c-suite-insights/global-investor-survey.html.

[10] See In the Matter of DWS Investment Management Americas, Inc., Admin. Proc. File No. 3-21709 (Sept. 25, 2023) (settled order), available at www.sec.gov/files/litigation/admin/2023/ia-6432.pdf; see also In the Matter of Goldman Sachs Asset Management, L.P., Admin. Proc. File No. 3-21245 (Nov. 22, 2022) (settled order) (charging policies and procedures failures involving two mutual funds and a separately managed account strategy marketed as ESG investments), available at www.sec.gov/files/litigation/admin/2022/ia-6189.pdf.

[11] See In the Matter of Delphia (USA) Inc., Admin. Proc. File No. 3-21894 (March 18, 2024) (settled order), available at https://www.sec.gov/files/litigation/admin/2024/ia-6573.pdf.

[12] See In the Matter of Global Predictions, Inc., Admin. Proc. File No. 3-21895 (March 18, 2024) (settled order), available at https://www.sec.gov/files/litigation/admin/2024/ia-6574.pdf.

[13] See Bloomberg, AI Disclosures to SEC Jump as Agency Warns of Misleading Claims (Feb. 8, 2024) (“At least 203, or 41%, of the S&P 500 companies mentioned AI in their [2023] 10-K report, Bloomberg Law’s review found. That’s up from 35% in 2022 and 28% in 2021. A majority of the disclosures focused on the risks of the technology, while others focused on its benefit to their business.”), available at https://news.bloomberglaw.com/securities-law/ai-disclosures-to-sec-jump-as-agency-warns-of-misleading-claims.

[14] See, e.g., Gurbir S. Grewal, Dir., Div. of Enforcement, U.S. Sec. & Exch. Comm’n, Remarks at New York City Bar Association Compliance Institute (Oct. 23, 2023), available at www.sec.gov/news/speech/grewal-remarks-nyc-bar-association-compliance-institute-102423.

[15] See Gary Gensler, Chair, U.S. Sec. & Exch. Comm’n, “AI, Finance, Movies, and the Law,” Prepared Remarks before the Yale Law School (Feb. 13, 2024), available at https://www.sec.gov/news/speech/gensler-ai-021324.

[16] See Yahoo! Finance, AirCanada chatbot costs airline discount it wrongly offered customer (Feb. 19, 2024), available at https://finance.yahoo.com/news/aircanada-chatbot-costs-airline-discount-180555618.html?fr=sycsrp_catchall.

[17] See, e.g., In the Matter of BofA Securities, Inc., et al., Admin. Proc. File No. 3-21166 (Sept. 27, 2022) (settled order) ¶¶ 4-5, available at https://www.sec.gov/files/litigation/admin/2022/34-95921.pdf; In the Matter of Citigroup Global Markets Inc., Admin. Proc. File No. 3-21165 (Sept. 27, 2022) (settled order) ¶¶ 4-5, available at https://www.sec.gov/files/litigation/admin/2022/34-95920.pdf.

[18] See Bloomberg, AI Disclosures to SEC Jump as Agency Warns of Misleading Claims (Feb. 8, 2024) (“At least 203, or 41%, of the S&P 500 companies mentioned AI in their [2023] 10-K report, Bloomberg Law’s review found. That’s up from 35% in 2022 and 28% in 2021. A majority of the disclosures focused on the risks of the technology, while others focused on its benefit to their business.”), available at https://news.bloomberglaw.com/securities-law/ai-disclosures-to-sec-jump-as-agency-warns-of-misleading-claims.

[19] See, e.g., Gurbir S. Grewal, Dir., Div. of Enforcement, U.S. Sec. & Exch. Comm’n, Remarks at New York City Bar Association Compliance Institute (Oct. 23, 2023) (“As I have said, we have no interest in pursuing enforcement actions against compliance personnel who undertake their responsibilities in good faith and based on reasonable inquiry and analysis.”), available at www.sec.gov/news/speech/grewal-remarks-nyc-bar-association-compliance-institute-102423.
