How to train AI on your firm's voice
Last updated April 13, 2026 · By Isaiah Grant, Founder
You train AI on your firm's voice by building a voice file — a structured document that captures your principal's writing style, banned phrases, sentence patterns, tone preferences, and the specific way your firm talks to clients — and loading it into every AI workflow before it writes anything. The voice file is the single highest-leverage artifact in an AI installation.
What a voice file contains
- Sentence structure. Does the principal write in short, punchy sentences or longer, explanatory ones? Does the firm use contractions? How does the principal start emails?
- Banned words. Every firm has words it would never use. "Navigate your financial journey" might be fine for a wirehouse — it would be wrong for a plainspoken RIA. The voice file lists every word and phrase that is off-limits.
- Tone anchors. Three to five adjectives that describe how the firm sounds: "warm, direct, plainspoken, slightly wry, never condescending." These become the guardrails the AI checks every output against.
- Sample paragraphs. Five to ten paragraphs the principal has actually written — emails, letters, blog posts — that represent the target voice. The AI uses these as calibration material.
- Client-specific language. How the firm refers to money ("your portfolio" vs "your assets" vs "your nest egg"), how they refer to the firm ("we" vs the firm name), how they close emails.
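The components above can be represented as structured data so a workflow can check drafts against them automatically. Here is a minimal sketch in Python; the field names and sample values are illustrative, not a required schema:

```python
# A minimal voice-file sketch. Field names and sample values are
# hypothetical; a real voice file would come from the principal.
VOICE_FILE = {
    "tone_anchors": ["warm", "direct", "plainspoken", "slightly wry"],
    "banned_phrases": [
        "navigate your financial journey",
        "in these uncertain times",
        "unlock your potential",
    ],
    "sentence_rules": {"use_contractions": True, "max_words_per_sentence": 25},
    "client_language": {"money": "your portfolio", "firm": "we"},
}

def banned_phrase_hits(draft: str, voice: dict) -> list[str]:
    """Return every banned phrase that appears in the draft (case-insensitive)."""
    lowered = draft.lower()
    return [p for p in voice["banned_phrases"] if p in lowered]

draft = "We'll help you navigate your financial journey with confidence."
print(banned_phrase_hits(draft, VOICE_FILE))
# → ['navigate your financial journey']
```

Even this trivial check catches the most common failure: a draft that is grammatically fine but uses language the firm has explicitly banned.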
How to build one
Start with the principal's sent folder. Pull the last 50 client-facing emails and look for patterns. Then run a 30-minute interview: how do you want to sound? What do you hate seeing in advisor content? If you read this email out loud, would it sound like you?
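Part of that pattern hunt can be automated. The sketch below computes a few rough style signals from a batch of sent emails; the specific metrics are illustrative, and they supplement the interview rather than replace it:

```python
import re
from collections import Counter

def voice_signals(emails: list[str]) -> dict:
    """Compute rough style signals from a batch of sent emails.
    Metrics here are illustrative, not a standard."""
    sentences = []
    openers = Counter()
    contractions = 0
    words_total = 0
    for email in emails:
        parts = [s.strip() for s in re.split(r"[.!?]+", email) if s.strip()]
        sentences.extend(parts)
        if parts:
            # First word of the email, a crude proxy for how they open.
            openers[parts[0].split()[0].lower()] += 1
        contractions += len(re.findall(r"\b\w+'\w+\b", email))
        words_total += len(email.split())
    avg_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    return {
        "avg_sentence_words": round(avg_len, 1),
        "contractions_per_100_words": round(100 * contractions / max(words_total, 1), 1),
        "common_openers": openers.most_common(3),
    }

emails = [
    "Thanks for the call today. We'll send the plan tomorrow.",
    "Thanks for the referral!",
]
print(voice_signals(emails))
```

Short average sentences and a high contraction rate, for example, would tell you the principal writes casually even when the subject is serious.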
The output is a 1-2 page document that lives in the firm's knowledge base and gets loaded into every AI workflow as a system instruction. When the Content Studio drafts a blog post, it reads the voice file first. When Meeting Prep writes a follow-up email, it reads the voice file first. Every output inherits the same voice.
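"Reads the voice file first" usually just means prepending it as the system instruction before the task. A sketch of that assembly step; the wording of the wrapper and the prompt wiring are hypothetical, and the actual model call depends on whichever API the workflow uses:

```python
def build_system_instruction(voice_file_text: str, task: str) -> str:
    """Prepend the firm's voice file to a task-specific instruction.
    Wrapper wording is illustrative, not a fixed template."""
    return (
        "You write on behalf of the firm. Match this voice exactly:\n\n"
        f"{voice_file_text}\n\n"
        f"Task: {task}"
    )

# Hypothetical usage: every workflow reads the same file before drafting.
voice_text = "Tone: warm, direct, plainspoken. Never say 'financial journey'."
prompt = build_system_instruction(
    voice_text,
    "Draft a follow-up email after today's review meeting.",
)
```

The point of routing everything through one function like this is that the voice file cannot be skipped: every workflow inherits it by construction.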
Why this matters for AI search
AI search engines are getting better at detecting generic AI content. Firms that publish content in a distinctive, consistent voice are more likely to be cited because the content reads as authoritative and original. A voice file is not just about sounding good — it is a citation signal.
The common mistake
Telling the AI "write in a professional tone." That is not a voice — it is the absence of one. Every advisory firm sounds "professional." The voice file captures what makes your firm sound like your firm and nobody else's.
The reference set that matters
Voice training starts with collecting the right reference material — and most firms already have it. The principal's sent email folder contains thousands of messages written in their natural voice. Past quarterly letters show how they explain markets. Blog posts reveal their preferred metaphors and sentence structure. Even internal memos to the team carry voice signals: the level of formality, the use of humor, the way they deliver difficult news.
Quantity matters less than variety. Twenty emails covering different situations — a warm welcome, a market downturn explanation, a planning recommendation, a referral thank-you — teach more about voice than two hundred meeting confirmations. The goal is to capture the full range of the principal's communication style, not just the most common template. Include the messages the principal is proudest of. Those are the ones that best represent how they want to sound.
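Variety over quantity can be enforced mechanically: tag each email with the situation it handles and keep only a few per tag, favorites first. A sketch under those assumptions; the tags, the `favorite` flag, and the cap are all illustrative:

```python
from collections import defaultdict

def varied_reference_set(emails: list[dict], per_situation: int = 3) -> list[dict]:
    """Keep at most `per_situation` emails per situation tag,
    preferring ones the principal flagged as favorites."""
    by_tag = defaultdict(list)
    for e in emails:
        by_tag[e["situation"]].append(e)
    selected = []
    for group in by_tag.values():
        # Stable sort: favorites float to the front, original order otherwise.
        group.sort(key=lambda e: not e.get("favorite", False))
        selected.extend(group[:per_situation])
    return selected
```

With a cap of one per situation, two hundred meeting confirmations collapse to a single example, and the rarer messages (the downturn note, the referral thank-you) all survive.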
Testing and refining the output
The first drafts will not sound right. They will be close — the vocabulary will be correct, the structure will be familiar — but something will feel off. Maybe the system uses contractions where the principal never would. Maybe it opens with a question when the principal always opens with a statement. These are the calibration details that turn a passable draft into a convincing one.
The refinement process is iterative. The principal reviews a batch of drafts, marks what sounds right and what sounds wrong, and the system adjusts. After three or four rounds of this feedback, the output tightens. After a month of daily use — with every edit and correction feeding back into the model — the drafts reach a point where the principal is editing for substance, not style. That transition is the milestone that signals the voice training is working.
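Some of those calibration details can be checked automatically before a draft ever reaches the principal. A sketch of rule checks matching the examples above (contractions, question openers); in practice the rules would come from the firm's voice file:

```python
import re

def calibration_flags(draft: str, allow_contractions: bool,
                      opens_with_statement: bool) -> list[str]:
    """Flag drafts that break simple voice rules. Rules shown are
    illustrative; a real set would come from the voice file."""
    flags = []
    if not allow_contractions and re.search(r"\b\w+'\w+\b", draft):
        flags.append("uses a contraction")
    # Split off the first sentence after ., !, or ?
    first_sentence = re.split(r"(?<=[.!?])\s", draft.strip())[0]
    if opens_with_statement and first_sentence.endswith("?"):
        flags.append("opens with a question")
    return flags

print(calibration_flags("Should we rebalance? Here is my view.",
                        allow_contractions=False, opens_with_statement=True))
# → ['opens with a question']
```

Automated flags do not replace the principal's review; they just keep the obvious misses out of the review batch so feedback stays focused on the subtler calibration details.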
Frequently asked
How long does this take to install at our firm?
Three days on-site for the install, eight weeks for the workflows to settle in, eight months for the full hand-off. The principal needs to clear the on-site week — that's the only hard scheduling constraint. Everything else flexes around your calendar.
What does it cost?
$50,000 flat for the 90-day engagement. That includes the on-site residency, all workflow installs, training, and the runbook. SaaS subscriptions you already pay for stay in your name. There's no per-lead, per-seat, or per-output billing. Ever.
Who owns the system at the end?
You do, completely. Every workflow lives in your shared folder and your accounts. The runbook documents how every piece works in plain English. If you fired Quiet Machines tomorrow, your team would still have the system and could keep operating it indefinitely.
What's the biggest mistake firms make with AI?
Buying tools instead of installing systems. Most firms have ChatGPT, Claude, Jump, and a CRM — none of which talk to each other. The mistake is thinking the tools are the answer. The answer is the system that wires them into the way your firm actually works.
Quiet Machines installs an AI brain inside advisory firms in a 3-day on-site build. Free AI visibility audit →