Introduction — AI Is No Longer a Hypothetical Risk
A year ago, most firms treated generative AI as an emerging technology issue. Advisors experimented with tools that could draft emails, summarize research, or help write marketing copy. Compliance teams responded the way organizations often do with new technology: they issued policies.
For example, firms circulated guidance such as:
- Don’t input confidential client data.
- Don’t rely on AI-generated investment advice.
- Use approved tools only.
At the time, that seemed sufficient. AI usage felt experimental — something that could be controlled through guidance and reminders.
That moment, however, has passed. Across wealth management, 91% of U.S. financial advisors now use generative AI in some way, leaving just 9% who do not use GenAI tools at all.
Generative AI tools are now embedded in everyday workflows across financial services. Advisors use them to draft client communications. Marketing teams rely on them to generate campaign copy. Research teams experiment with AI-generated summaries of market information. In many firms, these tools are already influencing how client-facing content is created.
In just one year, the share of advisors who say GenAI helps their practice has jumped from 64% to 85%, and 76% report immediate benefits from GenAI‑enabled tools such as note summarization and marketing assistance.
FINRA’s recent guidance signals a clear shift in posture. Artificial intelligence is no longer treated as a purely technological issue. Instead, regulators are framing it within existing supervisory obligations — the same expectations that govern marketing communications, recordkeeping, and advisor oversight.
The question for firms is no longer whether AI is being used. The question is how it is governed.
What FINRA’s AI Guidance Actually Signals
AI Is a Supervisory Issue, Not a Technology Issue
One of the most important signals in FINRA’s guidance is the regulator’s framing of artificial intelligence. Rather than introducing an entirely new regulatory framework, FINRA consistently places AI within the scope of existing rules.
The message is simple but important: the use of AI does not change a firm’s regulatory obligations. Supervision requirements, communications rules, and recordkeeping obligations still apply.
FINRA’s 2024 Regulatory Notice 24‑09 explicitly states that its rules are technology‑neutral and continue to apply when firms use GenAI or similar tools, just as they apply to any other technology or tool.
In practical terms, that means the technology does not shift responsibility away from the firm. If an advisor uses AI to draft a client email, the firm remains accountable for the accuracy and appropriateness of that communication. If marketing teams rely on AI to produce promotional content, those materials must still comply with regulatory standards governing fairness and transparency.
From the regulator’s perspective, AI is simply another tool that influences how advisors work. Like any other tool used in the business, it must operate within the firm’s supervisory framework.
Disclosure Alone Is Not Enough
Some firms initially assumed disclosure might address AI risk. If clients were informed that AI tools contributed to the creation of certain communications, perhaps that transparency would reduce regulatory concerns.
But regulators are not focused primarily on disclosure. They are focused on outcomes.
If AI-generated content is misleading, incomplete, or inaccurate, the fact that AI was involved does not change the firm’s responsibility. Advisors and broker-dealers remain accountable for the communications delivered to clients.
This is why AI oversight is quickly becoming a governance issue rather than a transparency exercise. Firms must ensure that the outputs generated by AI tools meet the same standards expected of any other client communication.
Documentation and Testing Expectations
Another signal emerging from regulatory discussions is the growing emphasis on documentation.
Regulators increasingly expect firms to demonstrate how they evaluate, monitor, and supervise AI tools. This includes documenting testing procedures, identifying where AI tools are used in the business, and maintaining records showing how AI-generated outputs are reviewed.
The expectation is not that firms eliminate the use of AI. Instead, regulators want firms to be able to explain how the technology operates within their compliance framework.
As AI adoption expands across financial services, this expectation will only grow stronger.
Why “AI Usage Policies” Are Not a Control System
Many firms begin their response to AI risk by drafting written policies. Policies are necessary, but they are rarely enough on their own.
The Policy–Behavior Gap
Technology adoption tends to outpace formal oversight. Advisors often experiment with new tools independently, particularly when those tools are available through personal accounts or public platforms.
This creates what compliance leaders sometimes describe as “shadow AI.” Employees use AI systems outside the firm’s approved environment, often with good intentions — trying to work more efficiently or respond to clients more quickly.
One recent survey found that 59% of U.S. employees use AI tools that have not been approved by their employers, and 75% of those users report sharing potentially sensitive data with those tools.
But once AI usage moves outside approved systems, visibility disappears. Compliance teams cannot review prompts, outputs, or decision-making processes. Supervisors may not even know when AI tools were used.
Policies alone cannot close that gap.
The same research shows that 23% of employers still have no AI policy at all, creating a direct path for uncontrolled shadow AI to grow inside regulated businesses.
The Output Risk Problem
Another challenge comes from the nature of generative AI itself. These systems are designed to produce persuasive language quickly, but they are not always reliable.
AI models can generate incorrect statements, omit important context, or present speculative information with unwarranted certainty. These issues, often referred to as hallucinations, are well-documented.
In advisory settings, AI use is already concentrated in areas such as predictive analytics, marketing copy, and meeting-note summaries. Far fewer advisors use it directly for personalized financial plans, a sign that firms remain cautious about embedding AI in suitability decisions.
In everyday settings, a flawed AI-generated paragraph might simply be inconvenient. In financial services, however, the consequences can be more serious.
A misleading marketing claim, an inaccurate market summary, or an unsupported performance figure could quickly become a regulatory issue if distributed to clients.
Supervisory Blind Spots
AI also introduces new supervisory blind spots. When communications are generated by AI tools, the process behind them may be difficult to reconstruct.
Compliance teams may struggle to determine how a message was created. What prompt produced the response? What edits were made before the communication was sent? Was the content reviewed by a supervisor?
Without systems that capture this context, firms may find it difficult to explain how client communications were produced during an examination.
The Shift from Permission to Governance
These challenges point toward a broader shift in how firms must approach AI oversight.
The early compliance response to AI focused on permission: which tools employees could use and which ones were prohibited. But as AI becomes embedded in daily workflows, permission alone is no longer enough.
Firms need governance.
Governance means defining how AI tools are introduced, monitored, and supervised across the organization. It requires visibility into where AI is used, who uses it, and how outputs are reviewed before reaching clients.
This shift mirrors changes already occurring in other areas of compliance. Just as communication supervision evolved from simple message storage to behavioral oversight, AI governance is moving from policy statements to operational control.
What AI Governance Looks Like in Practice
In practice, governance frameworks typically begin by identifying approved AI tools and limiting their use to systems that have been evaluated for security and reliability. Clear guidelines establish what types of information can be entered into these systems and how generated outputs must be reviewed.
Supervisory checkpoints are then built into workflows. AI-generated communications may require review before distribution, particularly when they involve marketing claims or client recommendations.
Equally important is the creation of audit trails. Firms must be able to demonstrate how AI-generated content was produced, reviewed, and approved.
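To make that audit-trail idea concrete, here is a minimal sketch of how a firm's technology team might model a pre-distribution checkpoint for AI-assisted communications. It is illustrative only: the `Communication` record, the `ReviewStatus` states, and the `release` gate are hypothetical names chosen for this example, not a prescribed implementation.

```python
"""Minimal sketch of a supervisory checkpoint for AI-assisted
communications. Illustrative only; field names and workflow states
are assumptions, not a regulatory specification."""

from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ReviewStatus(Enum):
    DRAFT = "draft"             # created, not yet submitted
    PENDING_REVIEW = "pending"  # awaiting supervisory review
    APPROVED = "approved"       # cleared for distribution
    REJECTED = "rejected"       # returned to the author


@dataclass
class Communication:
    author: str                 # registered representative
    body: str                   # final text as it would be sent
    ai_assisted: bool           # flagged at creation time
    ai_tool: str | None = None  # approved tool used, if any
    status: ReviewStatus = ReviewStatus.DRAFT
    audit_trail: list[dict] = field(default_factory=list)

    def log(self, event: str, actor: str) -> None:
        """Append an audit entry for each state change."""
        self.audit_trail.append({
            "event": event,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })


def release(comm: Communication) -> None:
    """Distribution gate: AI-assisted content must be approved first."""
    if comm.ai_assisted and comm.status is not ReviewStatus.APPROVED:
        raise PermissionError(
            "AI-assisted communication lacks supervisory approval")
    comm.log("released", actor="distribution-service")
    # ...hand off to the firm's archiving and distribution systems here
```

The point of the gate is not the code itself but the property it enforces: nothing flagged as AI-assisted reaches a client without a recorded approval.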
Platforms such as Patrina can support this governance model by ensuring that communications — including those drafted with AI assistance — are captured, supervised, and documented within a unified compliance environment.
The objective is not to eliminate the use of AI. The objective is to ensure that AI operates within a structure that preserves accountability.
Where AI Intersects with Existing Rules
One reason regulators emphasize governance is that AI intersects with several existing regulatory obligations.
Marketing Communications
AI tools are frequently used to draft marketing materials, social media posts, and promotional content. These materials must still comply with FINRA communications rules governing fairness, balance, and disclosure.
If AI-generated content exaggerates potential benefits or omits important risks, the firm remains responsible for the communication.
Surveys of large advisory firms show that roughly three‑quarters of advisors are already using generative AI in their daily business, with top use cases in marketing, analytics, and communication workflows that fall squarely under existing communications rules.
Books and Records
Recordkeeping requirements also become more complex when AI is involved.
If AI generates a client-facing communication, firms may need to preserve not only the final message but also evidence of its review and approval. Without proper documentation, firms may struggle to demonstrate compliance during regulatory examinations.
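As a rough illustration of what preserving more than the final message can mean, the snippet below sketches one possible retention record. The field set is an assumption made for this example; a firm's actual books-and-records schema would be driven by its own counsel and its archiving vendor's capture format.

```python
# Hypothetical retention record for an AI-assisted client communication.
# Field names are illustrative; they are not drawn from a FINRA schema.
retention_record = {
    "message_id": "msg-2026-000123",           # archive identifier
    "final_body": "…",                         # the message as delivered
    "ai_assisted": True,
    "ai_tool": "firm-approved-drafting-tool",  # hypothetical tool name
    "author": "rep-4821",
    "reviewer": "principal-107",
    "review_outcome": "approved",
    "review_notes": "Removed forward-looking performance language.",
    "reviewed_at": "2026-01-14T16:02:00Z",
    "sent_at": "2026-01-14T16:10:00Z",
}
```

Captured alongside the message itself, fields like these let a firm answer an examiner's "how was this produced, and who approved it?" with a record rather than a recollection.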
Supervision and Suitability
AI tools are also increasingly used to assist advisors with research and client communications. When those tools influence recommendations, supervisory responsibilities remain unchanged.
Firms must ensure that advisors understand the limitations of AI outputs and that recommendations made to clients remain grounded in appropriate suitability analysis.
What an Exam-Ready AI Framework Looks Like in 2026
Looking ahead, regulatory expectations around AI are likely to follow the same trajectory seen in other compliance areas.
Firms that manage AI risk effectively will treat governance as infrastructure rather than policy.
In these environments, AI usage is visible across the organization. Approved tools operate within controlled systems. Supervisory responsibilities are clearly assigned, and review processes are integrated into existing workflows.
At the same time, FINRA’s GenAI guidance emphasizes that firms should inventory higher‑risk AI use cases, evaluate GenAI tools before deployment, and ensure they can continue to comply with existing supervision, communications, and books‑and‑records requirements.
Testing protocols evaluate how AI systems perform, while documentation ensures that firms can explain how these technologies are used in practice.
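One lightweight way to operationalize that inventory-and-testing expectation is a structured register of AI use cases, each tagged with a risk tier and its most recent documented evaluation. The sketch below is a hypothetical example of such a register, not a format FINRA prescribes.

```python
# Hypothetical inventory of GenAI use cases, each with a risk tier and
# the date of its last documented evaluation. Illustrative only.
from datetime import date, timedelta

ai_use_case_inventory = [
    {"use_case": "meeting-note summarization",
     "tool": "approved-notes-tool", "risk_tier": "low",
     "last_evaluated": "2025-11-03", "reviewer": "compliance"},
    {"use_case": "marketing copy drafting",
     "tool": "approved-drafting-tool", "risk_tier": "medium",
     "last_evaluated": "2025-12-18", "reviewer": "compliance"},
    {"use_case": "research summaries for advisors",
     "tool": "approved-research-tool", "risk_tier": "high",
     "last_evaluated": "2026-01-09", "reviewer": "compliance + risk"},
]


def overdue(inventory: list[dict], max_age_days: int = 90) -> list[str]:
    """Flag use cases whose last evaluation is older than the cutoff."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [u["use_case"] for u in inventory
            if date.fromisoformat(u["last_evaluated"]) < cutoff]
```

A register like this is the kind of artifact an examiner can actually inspect: it shows what the firm uses, how risky it considers each use, and when it last checked.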
When regulators ask how AI-generated communications are supervised, firms can provide evidence rather than policy statements.
Achieving this level of readiness often requires integrating communication capture, supervisory review, and recordkeeping into a unified operational framework. Platforms such as Patrina help firms maintain that visibility by ensuring that client communications, including AI-assisted messages, are archived and supervised in accordance with regulatory expectations.
FINRA has made clear it will continue engaging with member firms on the use of GenAI and other emerging technologies, signaling that AI governance will remain a standing exam theme rather than a one‑off focus.
In this environment, governance becomes part of the firm’s infrastructure rather than an afterthought.
A Self-Assessment for Compliance Leaders
For compliance teams evaluating their current posture, several questions can help reveal where governance gaps may exist:
- Do you know which AI tools employees are currently using?
- Can you identify when AI was used to draft client-facing communications?
- Are AI-generated materials subject to supervisory review?
- Can you document how AI-generated content was tested or evaluated?
- Could you explain to regulators how your firm controls AI outputs?
These questions often reveal whether AI oversight exists primarily in policy documents — or within operational systems.
Conclusion — AI Is a Governance Problem
Artificial intelligence is rapidly becoming part of how financial professionals work. Advisors use it to draft communications, marketing teams rely on it for content generation, and research teams experiment with its analytical capabilities.
For regulators, the technology itself is not the central concern. The concern is control.
FINRA’s guidance makes clear that AI must exist within the same supervisory structures that govern all other aspects of the business. Firms remain responsible for the accuracy of communications, the integrity of marketing materials, and the oversight of advisor activity.
In Regulatory Notice 24‑09, FINRA reiterates that its rules and the federal securities laws apply to the use of GenAI just as they do to any other technology, and that firms should address model governance, data integrity, and accuracy when deploying AI tools.
Policies alone cannot deliver that assurance.
The firms that manage AI risk successfully will not be the ones with the strictest restrictions. They will be the ones that build governance directly into their operational architecture — where supervision, documentation, and recordkeeping work together to make oversight visible and defensible.
FAQs
Does FINRA allow firms to use generative AI?
Yes. FINRA does not prohibit AI usage. However, firms remain responsible for supervising the use of AI and ensuring compliance with all regulatory obligations.
What are the biggest compliance risks associated with AI?
The primary risks include inaccurate or misleading communications, lack of supervisory oversight, insufficient documentation, and recordkeeping gaps related to AI-generated content.
Do AI-generated communications need supervisory review?
Yes. Content generated by AI and distributed to clients or the public must comply with applicable communications and marketing rules, and firms should subject it to the same supervisory review applied to any other client-facing communication.
Do firms need to record AI prompts or outputs?
While regulations do not always explicitly require prompt capture, firms must maintain sufficient documentation to explain how communications were created, reviewed, and approved.
How can firms effectively manage AI governance?
Firms should define approved AI tools, implement supervisory review processes, document testing procedures, and ensure that AI-assisted communications are captured and archived in accordance with recordkeeping requirements.