What is a generative AI use policy for an RIA?
A generative AI use policy is a written document that defines how your firm uses AI tools, what data can be entered into them, who's responsible for reviewing outputs, and what's prohibited. Every RIA needs one in 2026: the SEC's December 2025 risk alert made AI use a clear examination priority, and the absence of a policy is itself a finding.
The minimum viable policy
Your AI use policy should answer eight questions in writing:
- Which AI tools are approved? A specific list, by name.
- What data is allowed in? Public market data, yes; client PII, no, unless the tool has a signed enterprise agreement.
- Who reviews AI output before it goes to a client? Always a named human.
- How are AI outputs preserved as records? Per Rule 204-2.
- What's prohibited? Personal ChatGPT accounts touching client data, AI generating recommendations, chatbots talking to clients without disclosure.
- Who owns AI governance at the firm? A named individual.
- How is the policy reviewed? At least annually.
- What happens when something goes wrong? Escalation path.
Why this matters now
The SEC's December 2025 Marketing Rule risk alert flagged AI-generated marketing as a 2026 examination priority, and NTSA's 2025 RIA survey ranked AI as the #1 compliance concern for the first time. If you're examined and don't have a policy, the missing policy is the finding before a single output is even reviewed.
This is general information, not legal advice. Talk to your compliance counsel.
Quiet Machines installs an AI brain inside advisory firms in a 3-day on-site build. Free AI visibility audit →