
How to protect client data when using AI at an RIA

Last updated April 13, 2026 · By Isaiah Grant, Founder

An RIA can use AI on real client data safely, but the firm has to make five specific configuration decisions up front: deploy in a walled-garden environment, opt out of model training, require encryption in transit and at rest, log every AI call against a client record, and keep an incident playbook within reach. None of these is hard. All of them are skipped routinely.

The five non-negotiable configurations

  1. Walled-garden deployment. Use the vendor's enterprise tier (Anthropic Claude for Work, ChatGPT Enterprise, Microsoft Copilot for Microsoft 365). The consumer tiers may train on inputs by default; the enterprise tiers are contractually barred from doing so.
  2. Training opt-out, in writing. Confirm the vendor's no-training commitment in the contract. Many vendors offer it by default; get it on paper anyway.
  3. Encryption in transit and at rest. TLS 1.2+ for transit, AES-256 for storage. Ask the vendor for the spec sheet; don't accept 'yes, we have encryption' as an answer. The firm can also verify the transit half itself (see the sketch after this list).
  4. Audit logging tied to client records. Every prompt, response, and document touched needs to be logged in a way the firm can produce on an SEC exam. Most installed AI brains do this natively; most consumer tools don't.
  5. Incident playbook. A one-page document the principal and CCO can pull up in under a minute that says: who to call, what to disable, what to communicate. Run a tabletop exercise once a year.
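
A minimal sketch of the transit-encryption check from item 3, in Python. It assumes the vendor exposes a standard HTTPS API; the hostname below is a placeholder, not a real endpoint. The adapter refuses any connection that negotiates below TLS 1.2, so a misconfigured endpoint fails loudly instead of silently downgrading.

    import ssl
    import requests
    from requests.adapters import HTTPAdapter

    class MinTLS12Adapter(HTTPAdapter):
        """Refuse any HTTPS connection that negotiates below TLS 1.2."""
        def init_poolmanager(self, *args, **kwargs):
            ctx = ssl.create_default_context()
            ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0/1.1
            kwargs["ssl_context"] = ctx
            return super().init_poolmanager(*args, **kwargs)

    session = requests.Session()
    session.mount("https://", MinTLS12Adapter())
    # Placeholder hostname; substitute the vendor's documented API endpoint.
    resp = session.get("https://api.example-ai-vendor.com/v1/health", timeout=10)
    print(resp.status_code)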

What 'walled garden' actually means

A walled-garden AI deployment means the vendor has agreed in writing that prompts and outputs from the firm's account will not be used to train any model, will not be visible to the vendor's employees in non-emergency contexts, and will be deleted on cancellation per a published timeline. The technical mechanism varies — some vendors run the firm's traffic through a dedicated tenant; others use logical separation. The firm doesn't need to care about the mechanism; it needs to care about the contractual commitment.

The most-skipped step: audit logging

SEC Rule 204-2 requires advisers to keep records of communications with and about clients. AI-generated drafts that touch a client record fall into that scope the moment they're sent. Most firms using consumer-tier AI tools cannot reconstruct, six months later, exactly what the AI was asked, what it produced, and what version got sent. That's a books-and-records gap. The fix is structural: route AI through a system that logs by client, not by user.
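
A minimal sketch of client-keyed logging, in Python, assuming the simplest possible storage: one append-only JSONL file per client record. The directory path and field names are illustrative; an installed brain does the equivalent natively, and a production system would add tamper-evidence and retention controls on top.

    import json
    import datetime
    from pathlib import Path

    LOG_DIR = Path("/var/log/ai-audit")  # illustrative path

    def log_ai_call(client_id: str, user: str, prompt: str, response: str) -> None:
        """Append one AI interaction to that client's audit file.

        One JSONL file per client record, so an examiner's request for
        'everything the AI touched for client X' is a single file read.
        """
        entry = {
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "client_id": client_id,
            "user": user,
            "prompt": prompt,
            "response": response,
        }
        LOG_DIR.mkdir(parents=True, exist_ok=True)
        log_file = LOG_DIR / f"{client_id}.jsonl"
        with log_file.open("a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")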

The incident playbook contents

The one-page incident playbook should answer five questions in advance: (1) Who is the technical contact at each AI vendor, and what is their off-hours number? (2) Which credentials need to be rotated, in what order, if a breach is suspected? (3) Which clients need to be notified, in what timeframe, by which method? (4) Who is the firm's external counsel for cyber incidents? (5) What evidence does the firm write to its WORM-storage location to preserve a record of the incident? Pin it to the inside of the CCO's desk drawer.
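
For question 5, a minimal sketch of the evidence-preservation step, in Python. It assumes the WORM location is an Amazon S3 bucket with Object Lock enabled; that is one common option, not the only one, and the bucket name, key, and retention window below are placeholders.

    import datetime
    import boto3  # assumes AWS S3 with Object Lock as the WORM location

    s3 = boto3.client("s3")

    def preserve_evidence(bucket: str, key: str, body: bytes) -> None:
        """Write incident evidence in compliance mode: until the retention
        date passes, nobody, including an administrator, can delete it."""
        s3.put_object(
            Bucket=bucket,  # bucket must have been created with Object Lock enabled
            Key=key,
            Body=body,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=(
                datetime.datetime.now(datetime.timezone.utc)
                + datetime.timedelta(days=365 * 6)  # illustrative; match the firm's Rule 204-2 schedule
            ),
        )

    # Illustrative call; "firm-worm-evidence" is a placeholder bucket name.
    preserve_evidence("firm-worm-evidence",
                      "incident-2026-04/ai-audit-export.jsonl",
                      b"...exported audit log bytes...")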

Frequently asked

Can we use the free version of ChatGPT for client work?

No. The free and ChatGPT Plus tiers retain conversation data and may use it for training by default. Use ChatGPT Enterprise, ChatGPT Team, or a different vendor's enterprise tier for any work touching client PII.

Is on-premises AI the only safe option?

No. Walled-garden cloud deployments from Anthropic, OpenAI, and Microsoft are widely used by RIAs and major financial institutions. On-premises deployment is appropriate for very large firms with specialized requirements; it's overkill for most RIAs.

Do we need a separate data-protection officer for AI?

No. The CCO already owns this function. The firm should add 'AI vendor oversight' to the CCO's annual compliance calendar and budget for outside help on the technical configuration if needed.

What if a client asks whether we use AI?

Tell them yes, with examples. Surveys from 2024-2026 consistently find that most clients prefer advisers who use AI, as long as a human stays accountable for the output. Hiding it backfires.

How often should we review the configuration?

Annually at minimum, and again after any of the following: a vendor acquisition, a vendor pricing change, a vendor security incident, our own SEC exam, or a major change in our client base.

Quiet Machines installs an AI brain inside advisory firms in a 3-day on-site build. Free AI visibility audit →
