
FINRA’s GenAI Pivot: From Usage Policies to Governance, Testing, and Supervision

Introduction — AI Is No Longer a Hypothetical Risk

A year ago, most firms treated generative AI as an emerging technology issue. Advisors experimented with tools that could draft emails, summarize research, or help write marketing copy. Compliance teams responded the way organizations often do with new technology: they issued policies.

For example, firms circulated guidance such as:

  • Don’t input confidential client data.
  • Don’t rely on AI-generated investment advice.
  • Use approved tools only.

At the time, that seemed sufficient. AI usage felt experimental — something that could be controlled through guidance and reminders.

That moment, however, has passed. Across wealth management, 91% of U.S. financial advisors now use generative AI in some way.

Generative AI tools are now embedded in everyday workflows across financial services. Advisors use them to draft client communications. Marketing teams rely on them to generate campaign copy. Research teams experiment with AI-generated summaries of market information. In many firms, these tools are already influencing how client-facing content is created.

In just one year, the share of advisors who say GenAI helps their practice has jumped from 64% to 85%, and 76% report immediate benefits from GenAI‑enabled tools such as note summarization and marketing assistance.

FINRA’s recent guidance signals a clear shift in posture. Artificial intelligence is no longer treated as a purely technological issue. Instead, regulators are framing it within existing supervisory obligations — the same expectations that govern marketing communications, recordkeeping, and advisor oversight.

The question for firms is no longer whether AI is being used. The question is how it is governed.

What FINRA’s AI Guidance Actually Signals

AI Is a Supervisory Issue, Not a Technology Issue

One of the most important signals in FINRA’s guidance is the regulator’s framing of artificial intelligence. Rather than introducing an entirely new regulatory framework, FINRA consistently places AI within the scope of existing rules.

The message is simple but important: the use of AI does not change a firm’s regulatory obligations. Supervision requirements, communications rules, and recordkeeping obligations still apply.

FINRA’s 2024 Regulatory Notice 24‑09 explicitly states that its rules are technology‑neutral and continue to apply when firms use GenAI or similar tools, just as they apply to any other technology or tool.

In practical terms, that means the technology does not shift responsibility away from the firm. If an advisor uses AI to draft a client email, the firm remains accountable for the accuracy and appropriateness of that communication. If marketing teams rely on AI to produce promotional content, those materials must still comply with regulatory standards governing fairness and transparency.

From the regulator’s perspective, AI is simply another tool that influences how advisors work. Like any other tool used in the business, it must operate within the firm’s supervisory framework.

Disclosure Alone Is Not Enough

Some firms initially assumed disclosure might address AI risk. If clients were informed that AI tools contributed to the creation of certain communications, perhaps that transparency would reduce regulatory concerns.

But regulators are not focused primarily on disclosure. They are focused on outcomes.

If AI-generated content is misleading, incomplete, or inaccurate, the fact that AI was involved does not change the firm’s responsibility. Advisors and broker-dealers remain accountable for the communications delivered to clients.

This is why AI oversight is quickly becoming a governance issue rather than a transparency exercise. Firms must ensure that the outputs generated by AI tools meet the same standards expected of any other client communication.

Documentation and Testing Expectations

Another signal emerging from regulatory discussions is the growing emphasis on documentation.

Regulators increasingly expect firms to demonstrate how they evaluate, monitor, and supervise AI tools. This includes documenting testing procedures, identifying where AI tools are used in the business, and maintaining records showing how AI-generated outputs are reviewed.

The expectation is not that firms eliminate the use of AI. Instead, regulators want firms to be able to explain how the technology operates within their compliance framework.

As AI adoption expands across financial services, this expectation will only grow stronger.

Why “AI Usage Policies” Are Not a Control System

Many firms begin their response to AI risk by drafting written policies. Policies are necessary, but they are rarely enough on their own.

The Policy–Behavior Gap

Technology adoption tends to outpace formal oversight. Advisors often experiment with new tools independently, particularly when those tools are available through personal accounts or public platforms.

This creates what compliance leaders sometimes describe as “shadow AI.” Employees use AI systems outside the firm’s approved environment, often with good intentions — trying to work more efficiently or respond to clients more quickly.

One recent survey found that 59% of U.S. employees use AI tools that have not been approved by their employers, and 75% of those users report sharing potentially sensitive data with those tools.

But once AI usage moves outside approved systems, visibility disappears. Compliance teams cannot review prompts, outputs, or decision-making processes. Supervisors may not even know when AI tools were used.

Policies alone cannot close that gap.

The same research shows that 23% of employers still have no AI policy at all, creating a direct path for uncontrolled shadow AI to grow inside regulated businesses.

The Output Risk Problem

Another challenge comes from the nature of generative AI itself. These systems are designed to produce persuasive language quickly, but they are not always reliable.

AI models can generate incorrect statements, omit important context, or present speculative information with unwarranted certainty. These issues, often referred to as hallucinations, are well-documented.

In advisory settings, AI use is already concentrated in areas like predictive analytics, marketing copy, and meeting-note summaries, while far fewer advisors use it directly for personalized financial plans, a sign that firms remain cautious about embedding AI in suitability decisions.

In everyday settings, a flawed AI-generated paragraph might simply be inconvenient. In financial services, however, the consequences can be more serious.

A misleading marketing claim, an inaccurate market summary, or an unsupported performance claim could easily become a regulatory issue if distributed to clients.

Supervisory Blind Spots

AI also introduces new supervisory blind spots. When communications are generated by AI tools, the process behind them may be difficult to reconstruct.

Compliance teams may struggle to determine how a message was created. What prompt produced the response? What edits were made before the communication was sent? Was the content reviewed by a supervisor?

Without systems that capture this context, firms may find it difficult to explain how client communications were produced during an examination.

The Shift from Permission to Governance

These challenges point toward a broader shift in how firms must approach AI oversight.

The early compliance response to AI focused on permission: which tools employees could use and which ones were prohibited. But as AI becomes embedded in daily workflows, permission alone is no longer enough.

Firms need governance.

Governance means defining how AI tools are introduced, monitored, and supervised across the organization. It requires visibility into where AI is used, who uses it, and how outputs are reviewed before reaching clients.

This shift mirrors changes already occurring in other areas of compliance. Just as communication supervision evolved from simple message storage to behavioral oversight, AI governance is moving from policy statements to operational control.

What AI Governance Looks Like in Practice

In practice, governance frameworks typically begin by identifying approved AI tools and limiting their use to systems that have been evaluated for security and reliability. Clear guidelines establish what types of information can be entered into these systems and how generated outputs must be reviewed.

Supervisory checkpoints are then built into workflows. AI-generated communications may require review before distribution, particularly when they involve marketing claims or client recommendations.

Equally important is the creation of audit trails. Firms must be able to demonstrate how AI-generated content was produced, reviewed, and approved.
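As a concrete illustration, here is a minimal sketch of what one audit-trail entry for an AI-assisted communication might capture. The language (Python), field names, and structure are illustrative assumptions, not a prescribed schema or a description of any particular platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIContentAuditRecord:
    """Illustrative audit-trail entry for one AI-assisted communication."""
    tool: str            # approved AI tool that produced the draft
    prompt: str          # input that generated the draft
    draft_output: str    # raw AI output, preserved before human edits
    final_text: str      # what was actually sent to the client
    author: str          # advisor or employee responsible for the message
    reviewer: str        # supervisor who approved distribution
    approved: bool
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = AIContentAuditRecord(
    tool="approved-genai-tool",
    prompt="Summarize Q3 portfolio performance for a client letter",
    draft_output="...",
    final_text="...",
    author="jdoe",
    reviewer="msmith",
    approved=True,
)
```

The point of preserving both the raw draft and the final text is that a reviewer, or later an examiner, can see exactly what the tool produced and what human judgment changed before distribution.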

Platforms such as Patrina can support this governance model by ensuring that communications — including those drafted with AI assistance — are captured, supervised, and documented within a unified compliance environment.

The objective is not to eliminate the use of AI. The objective is to ensure that AI operates within a structure that preserves accountability.

Where AI Intersects with Existing Rules

One reason regulators emphasize governance is that AI intersects with several existing regulatory obligations.

Marketing Communications

AI tools are frequently used to draft marketing materials, social media posts, and promotional content. These materials must still comply with FINRA communications rules governing fairness, balance, and disclosure.

If AI-generated content exaggerates potential benefits or omits important risks, the firm remains responsible for the communication.

Surveys of large advisory firms show that roughly three‑quarters of advisors are already using generative AI in their daily business, with top use cases in marketing, analytics, and communication workflows that fall squarely under existing communications rules.

Books and Records

Recordkeeping requirements also become more complex when AI is involved.

If AI generates a client-facing communication, firms may need to preserve not only the final message but also evidence of its review and approval. Without proper documentation, firms may struggle to demonstrate compliance during regulatory examinations.

Supervision and Suitability

AI tools are also increasingly used to assist advisors with research and client communications. When those tools influence recommendations, supervisory responsibilities remain unchanged.

Firms must ensure that advisors understand the limitations of AI outputs and that recommendations made to clients remain grounded in appropriate suitability analysis.

What an Exam-Ready AI Framework Looks Like in 2026

Looking ahead, regulatory expectations around AI are likely to follow the same trajectory seen in other compliance areas.

Firms that manage AI risk effectively will treat governance as infrastructure rather than policy.

In these environments, AI usage is visible across the organization. Approved tools operate within controlled systems. Supervisory responsibilities are clearly assigned, and review processes are integrated into existing workflows.

At the same time, FINRA’s GenAI guidance emphasizes that firms should inventory higher‑risk AI use cases, evaluate GenAI tools before deployment, and ensure they can continue to comply with existing supervision, communications, and books‑and‑records requirements.

Testing protocols evaluate how AI systems perform, while documentation ensures that firms can explain how these technologies are used in practice.

When regulators ask how AI-generated communications are supervised, firms can provide evidence rather than policy statements.

Achieving this level of readiness often requires integrating communication capture, supervisory review, and recordkeeping into a unified operational framework. Platforms such as Patrina help firms maintain that visibility by ensuring that client communications, including AI-assisted messages, are archived and supervised in accordance with regulatory expectations.

FINRA has made clear it will continue engaging with member firms on the use of GenAI and other emerging technologies, signaling that AI governance will remain a standing exam theme rather than a one‑off focus.

In this environment, governance becomes part of the firm’s infrastructure rather than an afterthought. 

A Self-Assessment for Compliance Leaders

For compliance teams evaluating their current posture, several questions can help reveal where governance gaps may exist:

  • Do you know which AI tools employees are currently using?
  • Can you identify when AI was used to draft client-facing communications?
  • Are AI-generated materials subject to supervisory review?
  • Can you document how AI-generated content was tested or evaluated?
  • Could you explain to regulators how your firm controls AI outputs?

These questions often reveal whether AI oversight exists primarily in policy documents — or within operational systems.

Conclusion — AI Is a Governance Problem

Artificial intelligence is rapidly becoming part of how financial professionals work. Advisors use it to draft communications, marketing teams rely on it for content generation, and research teams experiment with its analytical capabilities.

For regulators, the technology itself is not the central concern. The concern is control.

FINRA’s guidance makes clear that AI must exist within the same supervisory structures that govern all other aspects of the business. Firms remain responsible for the accuracy of communications, the integrity of marketing materials, and the oversight of advisor activity.

In Regulatory Notice 24‑09, FINRA reiterates that its rules and the federal securities laws apply to the use of GenAI just as they do to any other technology, and that firms should address model governance, data integrity, and accuracy when deploying AI tools.

Policies alone cannot deliver that assurance.

The firms that manage AI risk successfully will not be the ones with the strictest restrictions. They will be the ones that build governance directly into their operational architecture — where supervision, documentation, and recordkeeping work together to make oversight visible and defensible.

FAQs

Does FINRA allow firms to use generative AI?
Yes. FINRA does not prohibit AI usage. However, firms remain responsible for supervising the use of AI and ensuring compliance with all regulatory obligations.

What are the biggest compliance risks associated with AI?
The primary risks include inaccurate or misleading communications, lack of supervisory oversight, insufficient documentation, and recordkeeping gaps related to AI-generated content.

Do AI-generated communications need supervisory review?
Yes. Content distributed to clients or the public is subject to the same communications and marketing rules as any other material, which generally means AI-generated content must pass through the firm's supervisory review and approval processes before distribution.

Do firms need to record AI prompts or outputs?
While regulations do not always explicitly require prompt capture, firms must maintain sufficient documentation to explain how communications were created, reviewed, and approved.

How can firms effectively manage AI governance?
Firms should define approved AI tools, implement supervisory review processes, document testing procedures, and ensure that AI-assisted communications are captured and archived in accordance with recordkeeping requirements.

Reg S-P Is Now a Deadlines Story: Incident Response & Vendor Oversight Under a Privacy Rule

Introduction – Privacy Rules Used to Be About Paper

For years, Regulation S-P was treated as a disclosure exercise. Firms drafted privacy notices, updated policy manuals, and ensured language complied with requirements around safeguarding customer information. Compliance teams reviewed templates. Legal departments adjusted phrasing. 

The amended Regulation S-P has fundamentally shifted the conversation from what firms disclose to how they respond. Privacy is no longer a static obligation; it’s an operational test. And it comes with a clock.

The introduction of mandatory incident response programs and a 30-day customer notification requirement transforms Reg S-P from a documentation rule into a design constraint. Firms are now expected to detect incidents quickly, assess impact decisively, notify affected individuals promptly, and demonstrate how the decision-making unfolded.

The rule is no longer about what’s written in a policy. It’s about what your systems do when something goes wrong.

What Actually Changed in Reg S-P

From Policy Language to Incident Response

The amended rule requires firms to adopt written incident response programs designed to detect, respond to, and recover from unauthorized access to customer information. The SEC’s final rule requires covered entities to “develop, implement, and maintain written policies and procedures for an incident response program” that address detection, response, recovery, and customer notification when sensitive information is involved.

This is more than a documentation update. It requires firms to define who investigates incidents, how the scope is assessed, how containment is carried out, and how decisions are documented. The rule assumes incidents will happen. What matters is whether your organization responds in a structured, defensible way.

A written policy alone cannot meet that standard. A functioning workflow can.

The 30-Day Notification Clock

The addition of a 30-day customer notification requirement significantly raises the stakes. Under the amended rule, firms must notify affected individuals as soon as practicable, and no later than 30 days after becoming aware that unauthorized access to or use of sensitive customer information has occurred or is reasonably likely to have occurred. The clock runs from awareness, not from the completion of the investigation.
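To make the clock concrete, here is a minimal sketch of the deadline arithmetic, assuming a single awareness date as the trigger. The 30-day figure is the outer limit; "as soon as practicable" can require notice well before it.

```python
from datetime import date, timedelta

# The 30-day outer limit runs from awareness of the incident.
NOTICE_WINDOW = timedelta(days=30)

def notification_deadline(awareness_date: date) -> date:
    """Latest permissible customer notice date under the 30-day limit."""
    return awareness_date + NOTICE_WINDOW

# Illustrative example: awareness on March 3 puts the outer deadline at April 2.
print(notification_deadline(date(2026, 3, 3)))  # 2026-04-02
```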

That clock compresses uncertainty. Investigation must be timely. Escalation must be clear. Decision-making must be documented.

Larger SEC-registered investment advisers and broker-dealers must comply with these expanded incident response requirements by December 3, 2025, while smaller entities have until June 3, 2026, making preparation a near-term priority rather than a distant concern.

In fragmented environments, time is lost coordinating between systems and teams. In structured environments, the workflow itself guides the response. The difference between those two realities determines whether 30 days feels manageable — or dangerously short.

Service Providers Are Now in Scope

Reg S-P now explicitly requires oversight of service providers that access or use customer information.

This widens the compliance perimeter. If a vendor experiences unauthorized access involving your customer data, your firm’s obligations may be triggered. Vendor contracts, reporting requirements, monitoring practices, and escalation paths must align with your internal response framework.

“Third party” no longer means “outside risk.” It means shared responsibility.

Under the amended rule, service providers must notify covered firms as soon as possible — and no later than 72 hours after becoming aware of a breach involving customer information — reinforcing that vendor oversight is now a time-sensitive compliance obligation.
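A similarly simple check, sketched below under the same illustrative assumptions, shows how a firm's vendor-oversight tooling might flag whether a service provider's notice landed inside that 72-hour window.

```python
from datetime import datetime, timedelta

# 72-hour outer limit for service-provider notice to the firm.
VENDOR_SLA = timedelta(hours=72)

def vendor_notice_timely(vendor_aware: datetime, firm_notified: datetime) -> bool:
    """True if the provider notified the firm within 72 hours of awareness."""
    return firm_notified - vendor_aware <= VENDOR_SLA

# Illustrative example: 71 hours elapsed, so the notice is timely.
print(vendor_notice_timely(datetime(2026, 3, 1, 9, 0),
                           datetime(2026, 3, 4, 8, 0)))  # True
```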

Why This Is an Operational Problem, Not a Legal One

Privacy incidents do not begin in policy manuals. They begin in the operational layer — in inboxes, cloud platforms, mobile devices, file-sharing tools, and integrated applications.

By the time legal is involved, the operational event has already occurred.

Privacy Failures Rarely Start in Legal

Most privacy failures stem from routine workflows: an employee sends data to the wrong recipient, a compromised account exports information, vendor controls fail, or a communication slips outside supervised channels.

The vulnerability lives where work happens. If your operational environment lacks visibility and structure, your response will too. Reg S-P’s amendments recognize this reality. They focus on detection, escalation, and execution — not just disclosure language.

What Breaks in Legacy Environments

In many firms, customer data moves through disconnected systems. Communications are archived on one platform, supervision occurs in another, incident tracking lives in spreadsheets, and vendor oversight is handled through static contracts. 

When an incident occurs in that environment, firms struggle to reconstruct basic facts:

  • When did the issue begin?
  • Who knew about it, and when?
  • What information was affected?
  • How was the decision to notify made?

The challenge isn’t a lack of intent. It’s a lack of integration.

Without centralized workflows, privacy becomes reactive — and reconstruction replaces readiness. Recent breach data shows that 35.5% of all cyber breaches in 2024 were third-party related, up from 29% in 2023, a 6.5 percentage-point increase that highlights how vendor gaps can quickly become your firm’s problem.

How Exams Now Frame Privacy Risk

Examiners reviewing Reg S-P compliance increasingly focus on execution. They want to see timelines. They want to understand how internal notifications occurred. They want to review the documentation of the decision-making process. They want to see whether escalation followed defined paths or informal coordination.

The exam becomes less about reviewing your written response plan and more about evaluating whether your systems supported it in practice. Privacy compliance, in this context, is inseparable from operational design.

The Shift to Operational Privacy

A broader pattern is emerging across financial regulation: compliance expectations are moving from articulation to automation. Operational privacy reflects that shift.

Privacy protection must now live inside workflows. Detection must occur within systems. Escalation must follow defined channels. Documentation must be produced as a by-product of the response and not assembled after the fact.

Operational privacy means that when an incident occurs, the process activates predictably. Detection lives within communications systems, escalation follows defined channels, and documentation is automatically preserved. This architectural approach is increasingly reflected in unified compliance platforms such as Patrina, where privacy supervision, communications oversight, and incident workflows operate within the same environment rather than across disconnected tools.

What Operational Privacy Looks Like

In an operational privacy environment:

  • Customer interactions and communications are centrally supervised
  • Alerts surface anomalous activity in real time
  • Incident workflows are predefined
  • Escalations are automatically routed
  • Decisions are recorded within the system
  • Vendor touchpoints are mapped and monitored

The result is clarity. And clarity is what Reg S-P now demands.
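To illustrate the "documentation as a by-product" idea, here is a minimal sketch of an incident workflow in which every completed stage timestamps itself into an audit log. The stage names and structure are assumptions for illustration, not a depiction of any specific product.

```python
from datetime import datetime, timezone

# Stages completed in order; each completion appends a timestamped entry,
# so the defensible timeline is produced by the response itself.
STAGES = ["detected", "escalated", "assessed", "contained", "notified"]

class IncidentWorkflow:
    def __init__(self, incident_id: str):
        self.incident_id = incident_id
        self.audit_log: list[tuple[datetime, str, str]] = []

    def complete(self, stage: str, actor: str) -> None:
        """Record completion of the next expected stage by a named actor."""
        expected = STAGES[len(self.audit_log)]
        if stage != expected:
            raise ValueError(f"expected {expected!r}, got {stage!r}")
        self.audit_log.append((datetime.now(timezone.utc), stage, actor))

wf = IncidentWorkflow("INC-2026-0142")
wf.complete("detected", actor="alerting-system")
wf.complete("escalated", actor="compliance-lead")
# wf.audit_log now holds the who, what, and when of each step, in order.
```

Enforcing stage order in the system itself is one way to guarantee that escalation follows defined channels rather than informal coordination, which is exactly the distinction examiners probe.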

What a Reg S-P–Ready Firm Looks Like in Practice

To understand what operational privacy truly looks like, imagine a privacy event unfolding inside a firm that has embedded compliance directly into its infrastructure rather than layering it on top of daily activity.

When suspicious activity appears — whether it’s an unusual data export, an anomalous login, or a flagged communication — it doesn’t disappear into inboxes or depend on someone noticing it hours later. The signal is surfaced within a centralized compliance environment where visibility is built into the system itself. Detection is not incidental; it is structural.

Because the environment is designed around defined workflows, responses follow procedure rather than improvisation. Investigation begins inside a structured process that guides assessment, containment, and documentation simultaneously. Leadership visibility is embedded from the outset, not added through fragmented email chains. If customer notification becomes necessary, communication flows through a defined path that is directly connected to the documented rationale that triggered it.

The critical difference is not just speed — it is coherence. Each action is captured as it occurs, creating a defensible timeline without requiring reconstruction days later. Detection, escalation, assessment, and notification are not separate events stitched together after the fact; they are integrated stages within a unified compliance system.

For many firms, reaching this level of readiness requires rethinking how non-trading compliance operates. Instead of relying on scattered archives, spreadsheets, and disconnected tools, firms are centralizing supervision, incident tracking, vendor oversight, and documentation into structured platforms. Solutions such as Patrina are designed around this model — where communications oversight, privacy supervision, and audit trails exist within the same operational framework, allowing documentation to emerge naturally from everyday business rather than being assembled under regulatory pressure.

In that environment, privacy readiness becomes continuous rather than reactive. The firm does not scramble to explain what happened because the response itself generates the record.

A Self-Assessment for Advisors & Compliance Leaders

Ask yourself:

  • Do you know exactly where customer data resides across systems and vendors?
  • Can you detect a potential privacy incident without waiting for manual reporting?
  • Can you reconstruct the first 24 hours of a breach with timestamps?
  • Do you have documented ownership handoffs across compliance, IT, and leadership?
  • Can you demonstrate how your firm determined whether customer notification was required?

These questions reflect how privacy enforcement now unfolds. Each answer reveals whether privacy in your firm is policy-driven or system-driven.

Reg S-P as a Design Constraint

Regulation S-P is no longer a rule about disclosure language. It is a rule about execution under pressure. The amended framework forces firms to design for speed, clarity, and defensibility — not just policy completeness. It requires structured workflows for detection and escalation and extends responsibility beyond internal systems to third-party vendors now embedded in most firms’ operational ecosystems.

In that sense, privacy has become infrastructure.

Firms that continue to rely on fragmented systems will feel increasing strain as timelines compress and oversight expands. Every disconnected tool adds friction. Every manual handoff introduces uncertainty. Under a 30-day notification requirement, those inefficiencies are no longer inconveniences — they are exposure points.

By contrast, firms that embed privacy into their operational architecture will find that response becomes more predictable. Incidents are surfaced earlier. Escalation paths are clearer. Documentation is created as events unfold rather than reconstructed afterward.

The firms that navigate the next privacy incident successfully will not be the ones with the longest policies. They will be the ones whose systems already know what to do—and can prove they did it.

FAQs

What is the biggest change in the amended Reg S-P?

The most significant change is the requirement for a formal incident response program and a 30-day customer notification obligation. The rule now emphasizes operational execution rather than disclosure language alone.

When does the 30-day notification period begin?

The timeline begins once a firm becomes aware that unauthorized access to or use of sensitive customer information has occurred or is reasonably likely to have occurred. Because the outer limit is 30 days from that point, structured investigation and documentation are critical from the first hours.

Does Reg S-P apply to vendor breaches?

Yes. If a service provider that accesses or uses your customer data experiences unauthorized access, your firm’s obligations may be triggered. Vendor oversight is now explicitly part of your compliance responsibility.

Is this primarily a cybersecurity issue?

Cybersecurity is one component, but Reg S-P is broader. It encompasses incident governance, customer notification, documentation, escalation pathways, and vendor monitoring. It is as much about operational design as it is about IT controls.

How should firms prepare for these changes?

Preparation requires mapping data flows, reviewing vendor agreements, formalizing incident response workflows, and ensuring that detection, escalation, and documentation occur within structured systems rather than informal channels.