Generative AI Meets SEC Regulation

What Investment Advisers Need to Know Now

Jensen Huang recently suggested that “AI is about to change everything.” Whatever changes AI may bring to investment management, the regulatory standards remain the same. Those familiar rules mean that AI’s new frontiers in financial advertising include pitfalls and traps for the unwary.

For nearly a century, the Investment Advisers Act has made it “unlawful for an investment adviser . . . to engage in any transaction, practice, or course of business that operates as a fraud or deceit upon any client or prospective client.” The recent expansion of the SEC Marketing Rule raised the bar significantly above traditional fraud and scienter standards. Advertisements must include sufficient context to prevent misleading the recipient, must be presented in a manner that is fair and balanced, and must have a reasonable basis that the adviser can produce to regulators upon demand. As with other securities laws, the SEC does not need to prove an intentional violation of the Marketing Rule. Rather, “a violation of § 206(2) of the Investment Advisers Act may rest on a finding of simple negligence.”

Risks Where AI Makes the Claim

Few applications have fulfilled Huang’s prediction more visibly than Generative AI. Gen AI is creating advertising copy for businesses across the economic landscape at an unprecedented pace. It can produce marketing content at a fraction of what it cost even five years ago, many times faster than a full in-house team. Last year, 71% of marketers used Gen AI weekly or more, and almost 20% used it daily. Adoption has accelerated this year. Unsurprisingly, Gen AI’s inroads into the investment management industry are keeping pace with most other sectors. However, with this great power comes great responsibility.

FINRA has explicitly cautioned registrants that “the securities laws . . . continue to apply when member firms use Gen AI or similar technologies in the course of their businesses.” FINRA emphasized that its own antifraud rules “apply whether member firms’ communications are generated by a human or technology tool.” In other words, registrants’ capacity to review AI-generated marketing must keep pace with Gen AI’s ability to produce content at scale. It is not a defense to say that Gen AI inadvertently produced false, misleading, or unsubstantiated claims in an advertisement.

SEC registrants should count on being held to the same standard. AI-created advertisements must satisfy the new requirements of the Marketing Rule discussed above. Use of Gen AI introduces new dimensions to the risk landscape of the Marketing Rule. To effectively navigate these risks, registrants need to understand how Gen AI works, and what Gen AI does.

To many, the term “artificial intelligence,” as applied to ChatGPT, Claude, Gemini, and others, is a misnomer. If “intelligence” is defined as systematic problem solving, AI is not intelligent. Instead, AI is a label applied to the use of Large Language Models (LLMs), computer programs designed to detect patterns in the use of language. LLMs anticipate the next words in a sequence based on enormous volumes of data on how similar sequences of words have been completed in the past.

On the surface, this output can look very much like the work of intelligence. A user prompting ChatGPT, “Draft a reply to my mom’s Happy Birthday e-mail” will get a response synthesized from millions of replies to Happy Birthday e-mails, which looks very much as if a computer program thought carefully about how to design a reply and then provided one. But as complexity, nuance, context, and the need for precision go up, the quality of AI output goes down. When someone prompts ChatGPT, “Design a one-page tear sheet for our investment fund,” there is no thinking taking place to ensure the response is not misleading, is fair and balanced, and has a reasonable basis that can be provided to regulators. Instead, an LLM is guessing what words might appear in your tear sheet based on millions of similar communications used to train it.
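To make the mechanics concrete, here is a deliberately toy Python sketch of next-word prediction: the “model” is nothing more than a frequency table of which word followed which in its training text, and “generation” simply emits the most common continuation. Real LLMs use neural networks trained on vastly larger corpora, but the core dynamic is the same. All names and data below are illustrative.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in training text.
# Real LLMs learn far richer statistics, but the core idea is the same --
# the model predicts a *likely* next word; it does not check facts.
training_text = (
    "our fund returned strong results our fund returned solid results "
    "our fund delivered strong results"
)

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 5) -> str:
    """Greedily emit the most common continuation of each word."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("our"))  # prints something like: our fund returned strong results our
```

Note that nothing in this process checks whether “strong results” is true. The model only knows the phrase is statistically likely, which is precisely why AI-drafted advertising cannot substantiate itself.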

These guesses might look passable on the surface, but there is a highly significant risk that an LLM might include inaccurate performance claims, mismatched risk disclosures, or a strategy description incompatible with the fund. Moreover, an LLM will have no understanding of what “fair and balanced” presentation of facts in an advertisement means, and no way of checking to make sure the tear sheet it produced has a reasonable basis that the user can provide to a regulator. These issues need to remain the focus of an attentive, dedicated reviewer. Putting an advertisement produced by Gen AI into service without this review would be like sending a client a portfolio report produced by Gmail Autocomplete after the first few keystrokes – hopefully unthinkable to even moderately cautious advisers.

An investment adviser was charged with Marketing Rule violations based on advertisements created by a third party, which it failed to review. The advertisements incorporated erroneous performance calculations and compared the adviser’s performance to the wrong benchmark. The adviser’s policies and procedures required that its Chief Compliance Officer review its advertisements, but that review didn’t occur. The parallel to Gen AI is notable. An adviser remains responsible for its marketing, regardless of who (or what) created it.

Not every Marketing Rule violation requires miscalculation of performance. Sometimes an errant overstatement can create enforcement risk. Another adviser stated in an advertisement that it “refuse[d] all conflicts of interest.” Despite this categorical claim, the adviser’s Form ADV admitted it was subject to certain conflicts of interest. Consequently, the adviser violated the Marketing Rule because it “lacked a reasonable basis for believing it would be able to substantiate . . . that it refuse[d] all conflicts of interest.” Cross-referencing marketing copy against other public-facing representations, including regulatory disclosures, is precisely the type of context that Gen AI is unlikely to incorporate when creating an advertisement.
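By way of illustration only, a compliance team might backstop human review with simple tooling that flags categorical language in AI-drafted copy for substantiation before publication. The phrase list and draft below are hypothetical, and no such screen replaces the reviewer; it merely routes the highest-risk claims to one.

```python
import re

# Hypothetical categorical phrases that commonly conflict with Form ADV
# disclosures and therefore warrant manual review before publication.
RED_FLAGS = [
    r"\ball conflicts of interest\b",
    r"\bguarantee[ds]?\b",
    r"\bno risk\b",
    r"\bfirst (?:regulated )?AI\b",
]

def flag_for_review(ad_copy: str) -> list[str]:
    """Return the categorical claims in a draft that need substantiation."""
    return [p for p in RED_FLAGS if re.search(p, ad_copy, re.IGNORECASE)]

draft = "We refuse all conflicts of interest and guarantee disciplined results."
for pattern in flag_for_review(draft):
    print(f"Needs reasonable-basis review: {pattern}")
```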

Risks Where AI Is the Claim

As AI reshapes industries across the US economy, investment managers are racing to announce how they’re leveraging this new technology to better serve clients. But what happens when they jump the gun? Claims that an adviser has integrated AI into its advisory business aren’t mere puffery. Like the ESG enforcement cases from recent years, these claims are objective statements that can be proven either true or false. False claims fall within the heart of the Marketing Rule.

Just last year, a major financial institution with an advisory business was fined $17.5 million for overstating the role of ESG criteria in its advisory services. The adviser asserted that between 70% and 94% of its AUM was subject to ESG criteria, a figure that included passive ETFs tracking non-ESG indices. In addition, the adviser could not produce any set of written policies specifying which assets it counted as subject to ESG criteria. Its ESG claims were therefore a “transaction, practice, or course of business which operates as a fraud or deceit upon any client or prospective client.” Thus, the lesson: where an advertisement claims a role for ESG or AI in advisory services, the claim must be true, and must not be misleading.

We are already seeing the first applications of this principle to marketing claims regarding AI. One adviser claimed “it was ‘the first investment adviser to convert personal data into a renewable source of investable capital . . . that will allow consumers to invest in the stock market using their personal data.’ [It] further stated that it ‘uses machine learning to analyze the collective data shared by its members to make intelligent investment decisions.’” Its website further claimed that the adviser “turns your data into an unfair investing advantage” and that it “put[s] collective data to work to make our artificial intelligence smarter so it can predict which companies and trends are about to make it big and invest in them before everyone else.” During an examination, the adviser admitted to the SEC that “it had not used any of its clients’ data and had not created an algorithm to use client data.” Consequently, the adviser violated the Marketing Rule’s prohibitions on including false statements in advertisements and on omitting information necessary to make advertisements not misleading.

This was not the only example. Another adviser was fined for claiming to incorporate “AI-driven forecasts” into its advisory services and for claiming to be the “first regulated AI financial advisor.” It was unable to substantiate these claims when the SEC demanded support. As a result, it was charged with violations of the Marketing Rule.

The SEC itself summarizes the expectations where AI is the claim.

Investment professionals “should say what they’re doing, and do what they’re saying. Investment advisers or broker dealers should not mislead the public by saying they are using an AI model when they’re not, nor say that they’re using an AI model in a particular way, but not do so.”

Risks Where AI Designs the Claim

Under the Policies and Procedures Rule, “[e]ach adviser should adopt policies and procedures that take into consideration the nature of that firm’s operations.” Advisers should “identify . . . compliance factors creating risk exposure . . . in light of the firm’s particular operations, then design policies and procedures that address those risks.” The required policies and procedures should “include elements . . . specific to their use of . . . digital tools . . . such as assessing whether algorithms were performing as intended.” This standard means that an adviser must adopt procedures sufficient to evaluate AI output for regulatory risk. The NIST AI Risk Management Framework (“RMF”) offers helpful guidance in this regard.

First, “validity and reliability for deployed AI systems are often assessed by ongoing testing or monitoring that confirms a system is performing as intended.” RMF, 3.1. “Accuracy measurements should be paired with clearly defined and realistic test sets that are representative of conditions of expected use.” Id. For example, where AI output is directly incorporated into trading decisions, an adviser should apply robust, thorough review procedures to ensure the AI functions as intended and produces accurate output. This testing might involve controlled evaluation using historical market data to validate system functionality.
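As a minimal sketch of what such controlled evaluation could look like, assume a hypothetical signal-generating model and a small labeled set of historical observations; the names, data, and tolerance threshold below are illustrative, not a regulatory standard.

```python
# Hypothetical validation harness: compare a model's signals against a
# held-out set of historical observations, and fail loudly if accuracy
# falls below the firm's documented tolerance. All names are illustrative.

HISTORICAL_TEST_SET = [
    # (features summarizing market conditions, historically correct signal)
    ({"momentum": 0.8, "volatility": 0.2}, "buy"),
    ({"momentum": -0.5, "volatility": 0.9}, "sell"),
    ({"momentum": 0.1, "volatility": 0.4}, "hold"),
]

def model_signal(features: dict) -> str:
    """Stand-in for the AI system under review."""
    if features["momentum"] > 0.5:
        return "buy"
    if features["momentum"] < -0.3:
        return "sell"
    return "hold"

def validate(tolerance: float = 0.9) -> None:
    hits = sum(
        model_signal(features) == outcome
        for features, outcome in HISTORICAL_TEST_SET
    )
    accuracy = hits / len(HISTORICAL_TEST_SET)
    print(f"accuracy on historical test set: {accuracy:.0%}")
    if accuracy < tolerance:
        raise RuntimeError("AI system not performing as intended; escalate.")

validate()
```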

Second, an adviser leveraging AI should work to confirm “intended purposes, potentially beneficial uses, context-specific laws, norms, and expectations, and prospective settings in which the AI system will be deployed are understood and documented.” RMF, Map 1.1. This includes documenting “assumptions and related limitations about AI system purposes, uses, and risks.” Id. For example, an adviser that trains an AI system to calculate fees, performance, or other data subject to retention under the Books and Records Rule may need to retain the prompts, documents, and other inputs used in the training. Advisers offering a client- or public-facing AI system should conduct regular, careful reviews to ensure it does not pose data security risks. Prompt injections, model inversion, data poisoning, and other forms of malicious use can cause an AI system to disclose confidential information, including investor personal information protected by law, or to produce inaccurate responses. More broadly, advisers designing an AI system for a limited use case should periodically review usage history to confirm it isn’t being used in unintended ways.
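A minimal sketch of one way to retain those inputs follows, assuming a hypothetical append-only log; the actual format and retention period should be designed with counsel against the Books and Records Rule.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_records.jsonl")  # hypothetical retention store

def record_ai_interaction(prompt: str, inputs: list[str], output: str) -> None:
    """Append a timestamped record of each AI interaction for retention."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "source_documents": inputs,  # e.g., filenames of fee schedules used
        "output": output,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_ai_interaction(
    prompt="Calculate Q3 management fees for Fund I",
    inputs=["fund_i_fee_schedule.pdf", "q3_aum_report.csv"],
    output="$412,500",
)
```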

Third, “harmful bias and other data quality issues can affect AI system trustworthiness.” RMF, Appendix B. AI tools are only as good as the inputs provided to them. An adviser should review processes and systems that provide input data to AI resources, to ensure the data provided is accurate and of sufficient quality to avoid errors in output. For example, an adviser using AI to calculate performance against a benchmark should regularly confirm that the tool is using current data for both advisory and benchmark performance, and is comparing apples to apples where fees, taxes, reinvestment of dividends, and other factors influencing performance outcomes are concerned.
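For illustration, a simple pre-publication check, with hypothetical metadata fields, might verify that the advisory and benchmark series cover the same period and are treated consistently before any comparison goes out:

```python
from datetime import date

# Hypothetical metadata accompanying the two return series an AI tool
# would compare. Names and fields are illustrative only.
fund_series = {
    "start": date(2024, 1, 1), "end": date(2024, 12, 31),
    "net_of_fees": True, "dividends_reinvested": True,
}
benchmark_series = {
    "start": date(2024, 1, 1), "end": date(2024, 12, 31),
    "net_of_fees": False, "dividends_reinvested": True,
}

def check_comparability(fund: dict, benchmark: dict) -> list[str]:
    """Flag mismatches that would make a performance comparison misleading."""
    issues = []
    if (fund["start"], fund["end"]) != (benchmark["start"], benchmark["end"]):
        issues.append("date ranges differ")
    if fund["net_of_fees"] != benchmark["net_of_fees"]:
        issues.append("fee treatment differs (net vs. gross)")
    if fund["dividends_reinvested"] != benchmark["dividends_reinvested"]:
        issues.append("dividend reinvestment treatment differs")
    return issues

for issue in check_comparability(fund_series, benchmark_series):
    print(f"Do not publish comparison: {issue}")
```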

To emphasize, advisers do not need to have a perfect understanding of their AI resources in order to establish adequate policies and procedures. Even the AI/ML industry itself continues to struggle with limitations on its understanding of the inner workings of AI. Nevertheless, effective compliance policies and procedures require thoughtful consideration of potential risks associated with AI use, and the design of commensurately scaled control systems to guard against those risks.

Towards New Frontiers and Familiar Obligations

While this article has discussed a few imminent risk areas, many others remain. Giving AI platforms, particularly third-party services, access to systems containing personal information may pose substantial risks under a growing patchwork of data privacy regulations across several jurisdictions. Automating HR and talent acquisition implicates considerable regulatory and civil litigation risks in an increasingly unpredictable legal landscape. And different jurisdictions are beginning to offer glimpses of regulatory frameworks governing the use of AI itself – frameworks that may not always be compatible with one another.

It remains to be seen whether AI will indeed “change everything,” but it promises disruption to many facets of the investment management industry. With everyone’s eyes on the future, regulated businesses should not lose sight of present regulatory obligations. Investment advisers adopting AI tools should treat them as they would any third-party service provider: verify outputs are accurate before relying on them, implement review procedures scaled to the volume AI can produce, and maintain evidence that AI outputs meet regulatory standards. The technology may be new, but the compliance obligations—and the consequences of getting them wrong—remain all too familiar.

Chris Browne is General Counsel and Chief Compliance Officer for Alumni Ventures, a global venture capital company. He has been counsel to major financial services organizations and large public companies. AI contributed significantly to the research for this article, but outputs were manually reviewed to find and correct hallucinations and analytical errors. This article is not advice for any reader.


