How Ask AI Works

Last updated: May 8, 2026

For board administrators, IT reviewers, and anyone who wants a plain-English explanation of what happens when a MinuteSmith user asks a question using Ask AI.

Architecture Overview

Ask AI is a retrieval-augmented generation (RAG) pipeline. It does not query a general-purpose AI with your question alone. Instead, it first searches your own meeting history for relevant context, then passes that context — along with your question — to an AI language model to synthesize an answer.

Diagram: Ask AI architecture

Every Ask AI query follows the same path. Two boundary layers — the database-level board filter (step 2) and the limited-context payload (step 4) — sit between your data and the AI vendor.

  1. User asks a question. Sent over HTTPS to MinuteSmith.
  2. Board access enforced. SQL filter inside the database — not just app code — restricts retrieval to your boards.
  3. Relevant excerpts retrieved. Vector search across pre-computed embeddings of your meeting history.
  4. Limited context to AI vendor. Question + relevant excerpts only — never your entire archive.
  5. Answer generated. Anthropic Claude synthesizes a response. Citations included.
  6. User reviews cited answer. Each part of the answer links back to its source meeting.

What leaves MinuteSmith: the question text (sent to OpenAI to compute the search embedding, and to Anthropic as part of step 4) and the retrieved excerpts (step 4) — all under commercial API terms (no model training; up to 30-day vendor retention). What stays: the rest of your archive, including any meetings the retrieval step did not select.

  1. Question submitted. You type a question in the Ask AI input. The question text leaves your browser and travels to MinuteSmith servers over HTTPS.
  2. Embedding generated. MinuteSmith sends the question text to OpenAI’s text-embedding-3-small model to convert it into a numeric vector (an “embedding”). This vector represents the semantic meaning of your question but is not a human-readable form of it.
  3. Vector search across your boards only. The embedding is compared against pre-computed embeddings of your meeting minutes and action-item chunks stored in a vector database (Postgres + pgvector). The search is scoped to your boards at the database layer — a SQL-level board_id = ANY(...) filter is enforced inside the database function, not just in application code. No other customer’s content is in scope.
  4. Context assembled. The most semantically relevant text excerpts from your meetings are gathered. Excerpts are typically 300–500 words each. The number of excerpts included is capped to fit within the model’s context window.
  5. Prompt sent to Claude. A prompt containing your question and the retrieved excerpts is sent to Anthropic’s Claude API (currently claude-sonnet-4-6). The prompt does not include excerpts from other customers. The system prompt instructs the model to cite its sources and to decline to answer if the meeting record does not contain relevant information.
  6. Answer returned. Claude’s response is streamed back to MinuteSmith, then returned to your browser. Citations link back to the specific meetings that supported each part of the answer.
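The six steps above can be sketched end to end. This is a minimal illustration under stated assumptions, not MinuteSmith's actual code: the function names, the toy letter-count "embedding", and the excerpt cap are all invented for the example.

```python
# Minimal sketch of the Ask AI query path. All names and values here are
# illustrative assumptions, not MinuteSmith's implementation.

def embed(text: str) -> list[float]:
    # Stand-in for the OpenAI text-embedding-3-small call (step 2).
    # A real embedding is a high-dimensional vector; this toy version
    # counts occurrences of each letter so the example is runnable.
    return [float(text.lower().count(c)) for c in "abcdefghijklmnopqrstuvwxyz"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question_vec, chunks, allowed_board_ids, k=3):
    # Steps 2-3: board scoping plus vector search. The board filter runs
    # before ranking, mirroring the SQL-level board_id = ANY(...) restriction.
    in_scope = [c for c in chunks if c["board_id"] in allowed_board_ids]
    ranked = sorted(in_scope, key=lambda c: cosine(question_vec, c["vec"]), reverse=True)
    return ranked[:k]  # cap excerpts to fit the model's context window

def build_prompt(question, excerpts):
    # Step 4: only the question and the selected excerpts leave the server.
    context = "\n---\n".join(e["text"] for e in excerpts)
    return (
        "Answer from these meeting excerpts only, with citations:\n"
        f"{context}\n\nQ: {question}"
    )
```

Note that the excerpt cap (`k`) and the pre-ranking board filter correspond to the context-window limit and the database-level scoping described in the steps above.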

What Leaves MinuteSmith

Two categories of data leave MinuteSmith servers for each Ask AI request:

  What                                           | Sent to            | Purpose
  Your question text                             | OpenAI             | Convert question to a search embedding
  Your question text + relevant meeting excerpts | Anthropic (Claude) | Generate the answer

“Meeting excerpts” are short passages of text (300–500 words) from your approved meeting minutes, generated minutes, or action items — the same content visible to any member of your board inside the app.

Vendor Data Handling

Anthropic (Claude)

  • Under Anthropic’s commercial API agreement, data submitted via the API is not used to train Anthropic’s models.
  • Anthropic’s trust-and-safety policy permits retention of submitted prompts for up to 30 days for abuse review; this applies to MinuteSmith’s account.
  • MinuteSmith is not currently on Anthropic’s zero-data-retention enterprise plan, which would eliminate this 30-day window.
  • Reference: anthropic.com/legal/privacy

OpenAI (Embeddings)

  • Under OpenAI’s API data usage policy, data submitted via the API is not used to train OpenAI’s models.
  • OpenAI’s standard API data retention policy applies (up to 30 days for abuse detection).
  • MinuteSmith is not currently on OpenAI’s zero-retention enterprise plan.
  • Reference: openai.com/policies/privacy-policy

Meeting-chunk embeddings are computed once, not on every query. When a meeting is first processed, MinuteSmith sends each text chunk to OpenAI for embedding. The resulting vectors are stored in MinuteSmith’s database. Subsequent searches use the stored vectors — they do not re-send meeting content to OpenAI. Only the question text is sent to OpenAI at query time.
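That compute-once flow can be sketched as follows. This is an illustration only: the dictionary stands in for the pgvector table, and `embed_via_openai` is a hypothetical placeholder for the real API call, which is the only step that sends text off-server.

```python
# Sketch of the compute-once embedding flow. Names are illustrative.

stored_vectors: dict[str, list[float]] = {}   # chunk_id -> embedding (pgvector stand-in)
api_calls: list[str] = []                     # tracks what text left the server

def embed_via_openai(text: str) -> list[float]:
    api_calls.append(text)                    # in reality: one OpenAI API request
    return [float(len(text))]                 # toy one-dimensional "embedding"

def ingest_meeting(chunks: dict[str, str]) -> None:
    # Runs once when a meeting is first processed: each chunk is embedded
    # and the vector stored. Re-ingesting sends nothing new.
    for chunk_id, text in chunks.items():
        if chunk_id not in stored_vectors:
            stored_vectors[chunk_id] = embed_via_openai(text)

def query(question: str) -> list[float]:
    # At query time only the question text is embedded; stored meeting
    # vectors are reused, so meeting content is never re-sent.
    return embed_via_openai(question)
```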

AI Can Be Wrong

Large language models like Claude can produce answers that are fluent, specific, and wrong. This is sometimes called “hallucination.” It is a fundamental property of current AI systems, not a bug specific to MinuteSmith.

Ask AI is designed to mitigate — not eliminate — this risk:

  • The model is instructed to decline to answer when the retrieved meeting context does not contain relevant information, rather than inventing an answer.
  • Each answer is accompanied by citations linking to the specific meetings used as source material.
  • A confidence level is displayed (high / medium / low) based on the quality of retrieved matches.

Always verify important facts, votes, or dates against the original meeting minutes. Do not rely on Ask AI output alone for legal, financial, or compliance decisions.
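The high / medium / low label could be derived from retrieval match quality along these lines. The thresholds and the use of the best similarity score are assumptions for illustration, not MinuteSmith's actual values.

```python
def confidence_label(similarity_scores: list[float]) -> str:
    # Map vector-search match quality to a displayed confidence level.
    # Thresholds are illustrative, not MinuteSmith's actual cutoffs.
    if not similarity_scores:
        return "low"
    best = max(similarity_scores)
    if best >= 0.85:
        return "high"
    if best >= 0.70:
        return "medium"
    return "low"
```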

Optional Redaction Layer

MinuteSmith has an optional prompt-redaction system that can strip sensitive patterns (email addresses, phone numbers, SSNs, payment-card numbers, bank/routing identifiers) from the text sent to AI vendors, then restore redacted values in the returned answer where appropriate.

Current status: The redaction layer is implemented and audited but is currently disabled by default (feature flag ENABLE_AI_REDACTION=false). It can be enabled per-deployment by an operator. We document this as-is rather than claiming protections that are not active by default.

Cross-Board Isolation

Ask AI results are scoped to boards you are a member of. This is enforced at two levels:

  • Application layer: The server-side route fetches the requesting user’s board memberships and passes only those board IDs to the search function.
  • Database layer: The match_meeting_chunks database function is a SECURITY DEFINER function that includes a board_id = ANY(board_ids) filter in its SQL body. Even if application code passed incorrect board IDs, the database function would enforce the scope.

No content from a board you are not a member of can appear in Ask AI results.
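The two layers can be illustrated in Python. In the real system the second layer is a SQL filter inside a Postgres SECURITY DEFINER function; here both layers are plain functions, and all names and data are hypothetical.

```python
# Defense-in-depth sketch of board scoping. Names and data are invented.

MEMBERSHIPS = {"alice": {1, 3}}          # user -> board ids (app-layer lookup)
CHUNKS = [
    {"board_id": 1, "text": "Q3 budget approved"},
    {"board_id": 2, "text": "Other board's minutes"},
]

def app_layer_board_ids(user: str) -> set[int]:
    # Layer 1: the server-side route fetches the user's memberships.
    return MEMBERSHIPS.get(user, set())

def match_meeting_chunks(board_ids: set[int]) -> list[dict]:
    # Layer 2: stand-in for the database function. The filter lives inside
    # this function, so a buggy caller cannot skip it.
    return [c for c in CHUNKS if c["board_id"] in board_ids]
```

Because the filter is part of the search function itself rather than a precondition the caller must remember, every retrieval path goes through it.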

What MinuteSmith Does Not Claim

Security reviewers and board counsel should be aware of the following limitations:

  • No end-to-end encryption for AI features. AI processing requires MinuteSmith servers to decrypt content and send it to AI vendor APIs. There is no technical mechanism to use Ask AI without this decryption occurring.
  • No zero-data-retention guarantee from vendors. Neither Anthropic nor OpenAI is on a zero-retention agreement with MinuteSmith. The 30-day vendor trust-and-safety retention window applies.
  • No SOC 2, ISO 27001, FedRAMP, or HIPAA certification. MinuteSmith has not undergone third-party compliance audits. We do not sign HIPAA BAAs.
  • No external penetration test. MinuteSmith has not undergone a formal external penetration test.
  • AI answers are not legally reliable. Ask AI output is generated by a language model and can be incorrect. It is not a substitute for reviewing source documents.
  • Retention deletion is not fully automated. Boards may configure retention windows, but the deletion executor is not running automatically in all environments. Operators must confirm actual deletion behavior for their deployment.

FAQ

Does my meeting content go to OpenAI?

Short answer: yes, partially. When meetings are first processed, text chunks are sent to OpenAI to generate embeddings (stored in our database). At query time, only your question text is sent to OpenAI — not the meeting content. The meeting content is sent to Anthropic (Claude) to generate the answer.

Will my data be used to train AI models?

Under both Anthropic’s and OpenAI’s commercial API agreements, data submitted via the API is not used to train their models. However, both vendors retain data for up to 30 days for trust-and-safety review.

Can other boards see my content in Ask AI?

No. Board isolation is enforced at the database layer, not just in application code. A SQL-level filter inside the vector search function ensures only your boards’ content is in scope.

What if I have sensitive information (SSNs, financial data) in meeting minutes?

The optional redaction layer can strip sensitive patterns before they reach AI vendors. This layer is currently disabled by default. For boards handling sensitive personal or financial data, we recommend evaluating whether Ask AI is appropriate for your use case. Contact [email protected] with specific compliance requirements.

Can I turn off Ask AI for my board?

Ask AI access is controlled by subscription plan. Board-level controls for disabling Ask AI while retaining other features are on the roadmap. Contact us if this is a requirement.

Where are the meeting embeddings stored?

Embeddings are stored in MinuteSmith’s Postgres database (Supabase-hosted, encrypted at rest) using the pgvector extension. Embeddings are numeric vectors — they do not contain human-readable text. However, they can be used to recover approximate semantic content, so they are treated as sensitive data subject to the same board-scoping rules as meeting text.

Further Reading

Questions not answered here? Email [email protected].