Last updated: May 8, 2026
For board administrators, IT reviewers, and anyone who wants a plain-English explanation of what happens when a MinuteSmith user asks a question using Ask AI.
Ask AI is a retrieval-augmented generation (RAG) pipeline. It does not query a general-purpose AI with your question alone. Instead, it first searches your own meeting history for relevant context, then passes that context — along with your question — to an AI language model to synthesize an answer.
Ask AI architecture
Every Ask AI query follows the same path. Two boundary layers — the database-level board filter (step 2) and the limited-context payload (step 4) — sit between your data and the AI vendor.
1. **User asks a question.** Sent over HTTPS to MinuteSmith.
2. **Board access enforced.** A SQL filter inside the database, not just app code, restricts retrieval to your boards.
3. **Relevant excerpts retrieved.** Vector search across pre-computed embeddings of your meeting history.
4. **Limited context to AI vendor.** The question plus relevant excerpts only, never your entire archive.
5. **Answer generated.** Anthropic Claude synthesizes a response, with citations included.
6. **User reviews cited answer.** Each part of the answer links back to its source meeting.
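The steps above can be sketched in miniature. Everything in this sketch is illustrative: the function names, data, and control flow are stand-ins chosen to mirror the steps, not MinuteSmith's actual code.

```python
# Toy sketch of the Ask AI query path. All names and data here are
# hypothetical stand-ins that mirror the steps above, not real code.

def embed(text: str) -> list[float]:
    # Stand-in for the OpenAI embedding call (the vector-search step).
    return [float(len(word)) for word in text.split()]

CHUNKS = [
    {"board_id": 1, "meeting_id": "2026-03-board", "text": "Budget approved."},
    {"board_id": 2, "meeting_id": "2026-03-other", "text": "Unrelated board."},
]

def vector_search(vector: list[float], board_ids: list[int]) -> list[dict]:
    # The board filter applies before any similarity ranking, mirroring
    # the SQL-level filter. (Similarity ranking is omitted in this toy.)
    return [c for c in CHUNKS if c["board_id"] in board_ids]

def generate_answer(question: str, excerpts: list[dict]) -> str:
    # Stand-in for the Anthropic Claude call: the answer is built only
    # from the question plus the retrieved excerpts, never the archive.
    return " ".join(c["text"] for c in excerpts)

def ask_ai(question: str, user_board_ids: list[int]) -> dict:
    excerpts = vector_search(embed(question), user_board_ids)
    return {
        "answer": generate_answer(question, excerpts),
        "citations": [c["meeting_id"] for c in excerpts],
    }
```

Note how the board filter sits inside the retrieval call, before anything is assembled for the AI vendor: content a user cannot retrieve can never appear in the payload.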
What leaves MinuteSmith: the question text (steps 1, 4) and the retrieved excerpts (step 4) — sent to Anthropic and OpenAI under commercial API terms (no model training; up to 30-day vendor retention). What stays: the rest of your archive, including any meetings the retrieval step did not select.
To search your meeting history, your question is first sent to OpenAI's `text-embedding-3-small` model, which converts it into a numeric vector (an "embedding"). This vector represents the semantic meaning of your question but is not a human-readable form of it. The `board_id = ANY(...)` filter is enforced inside the database function, not just in application code, so no other customer's content is in scope.

Two categories of data leave MinuteSmith servers for each Ask AI request:
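An embedding is just a list of numbers, and retrieval compares the question's vector against stored vectors by similarity. A minimal illustration with toy three-dimensional vectors (real embeddings have on the order of a thousand dimensions; the chunk names and values below are invented for the example):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Similarity of two embedding vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for real ones.
question_vec = [0.9, 0.1, 0.0]
chunk_vecs = {
    "budget discussion": [0.8, 0.2, 0.1],
    "parking policy":    [0.0, 0.1, 0.9],
}

# Retrieval returns the chunks whose vectors are closest to the question.
best = max(chunk_vecs, key=lambda k: cosine_similarity(question_vec, chunk_vecs[k]))
```

The numbers themselves are meaningless in isolation, which is why the embedding is not a human-readable form of the question, yet nearby vectors reliably correspond to related text.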
| What | Sent to | Purpose |
|---|---|---|
| Your question text | OpenAI | Convert question to a search embedding |
| Your question text + relevant meeting excerpts | Anthropic (Claude) | Generate the answer |
“Meeting excerpts” are short passages of text (300–500 words) from your approved meeting minutes, generated minutes, or action items — the same content visible to any member of your board inside the app.
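For a sense of how minutes become excerpts of that size, here is a rough chunking sketch. The 300–500-word window comes from the text above; the splitting logic itself is a guess for illustration, not MinuteSmith's actual implementation:

```python
def chunk_words(text: str, max_words: int = 400) -> list[str]:
    # Illustrative chunker: split minutes into passages of at most
    # max_words words, preferring paragraph boundaries. (A single
    # paragraph longer than max_words becomes one oversized chunk.)
    chunks: list[str] = []
    current: list[str] = []
    for paragraph in text.split("\n\n"):
        words = paragraph.split()
        if current and len(current) + len(words) > max_words:
            chunks.append(" ".join(current))
            current = []
        current.extend(words)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Each resulting passage is embedded and stored separately, which is what lets retrieval pull only the relevant handful rather than the whole document.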
Large language models like Claude can produce answers that are fluent, specific, and wrong. This is sometimes called “hallucination.” It is a fundamental property of current AI systems, not a bug specific to MinuteSmith.
Ask AI is designed to mitigate, not eliminate, this risk: every answer cites its source meetings, so readers can check each claim against the original record rather than trusting the generated text on its own.
MinuteSmith has an optional prompt-redaction system that can strip sensitive patterns (email addresses, phone numbers, SSNs, payment-card numbers, bank/routing identifiers) from the text sent to AI vendors, then restore redacted values in the returned answer where appropriate.
Current status: The redaction layer is implemented and audited but is currently disabled by default (feature flag `ENABLE_AI_REDACTION=false`). It can be enabled per-deployment by an operator. We document this as-is rather than claiming protections that are not active by default.
Ask AI results are scoped to boards you are a member of. This is enforced at two levels:
1. **Application layer:** queries are issued only with the board IDs associated with your session.
2. **Database layer:** the `match_meeting_chunks` database function is a `SECURITY DEFINER` function that includes a `board_id = ANY(board_ids)` filter in its SQL body. Even if application code passed incorrect board IDs, the database function would enforce the scope.

No content from a board you are not a member of can appear in Ask AI results.
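The shape of this defense-in-depth check can be modeled in miniature. The real enforcement lives in the `match_meeting_chunks` SQL function; this stand-in only shows why a database-level filter matters even when the application layer misbehaves:

```python
# Miniature model of two-level board scoping. Data and names are
# invented for illustration; the real filter is in a SQL function.

CHUNKS = [
    {"board_id": 1, "text": "Board 1 minutes"},
    {"board_id": 2, "text": "Board 2 minutes"},
]

MEMBERSHIPS = {"alice": {1}}  # alice belongs to board 1 only

def db_match_chunks(requested_board_ids, member_board_ids):
    # Database layer: intersect the request with actual membership,
    # mirroring the board_id = ANY(board_ids) filter in SQL. Board IDs
    # the caller is not a member of are dropped regardless of the request.
    allowed = set(requested_board_ids) & set(member_board_ids)
    return [c for c in CHUNKS if c["board_id"] in allowed]

# Even if buggy application code requested board 2, the scope holds:
results = db_match_chunks(requested_board_ids=[1, 2],
                          member_board_ids=MEMBERSHIPS["alice"])
```

The application layer normally never sends an out-of-scope board ID; the database-level intersection is what guarantees isolation even if it did.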
Security reviewers and board counsel should be aware of the following limitations:
Does my meeting content go to OpenAI?
Short answer: yes, partially. When meetings are first processed, text chunks are sent to OpenAI to generate embeddings (stored in our database). At query time, only your question text is sent to OpenAI — not the meeting content. The meeting content is sent to Anthropic (Claude) to generate the answer.
Will my data be used to train AI models?
Under both Anthropic’s and OpenAI’s commercial API agreements, data submitted via the API is not used to train their models. However, both vendors retain data for up to 30 days for trust-and-safety review.
Can other boards see my content in Ask AI?
No. Board isolation is enforced at the database layer, not just in application code. A SQL-level filter inside the vector search function ensures only your boards’ content is in scope.
What if I have sensitive information (SSNs, financial data) in meeting minutes?
The optional redaction layer can strip sensitive patterns before they reach AI vendors. This layer is currently disabled by default. For boards handling sensitive personal or financial data, we recommend evaluating whether Ask AI is appropriate for your use case. Contact [email protected] with specific compliance requirements.
Can I turn off Ask AI for my board?
Ask AI access is controlled by subscription plan. Board-level controls for disabling Ask AI while retaining other features are on the roadmap. Contact us if this is a requirement.
Where are the meeting embeddings stored?
Embeddings are stored in MinuteSmith’s Postgres database (Supabase-hosted, encrypted at rest) using the pgvector extension. Embeddings are numeric vectors — they do not contain human-readable text. However, they can be used to recover approximate semantic content, so they are treated as sensitive data subject to the same board-scoping rules as meeting text.
Questions not answered here? Email [email protected].