Testing Queries Against Your Knowledge Base

Validate that your AI agent answers accurately by testing questions and reviewing source citations.

Uploading documents is only half the work. You need to verify that the AI actually answers questions correctly using that content. Synaptiq includes a built-in test interface that lets you simulate customer conversations, inspect which documents the AI referenced, and identify gaps in your knowledge base.

Accessing the Test Interface

Navigate to Admin > Knowledge (/admin/knowledge) and click the Test tab at the top of the page. This opens a chat-style interface where you can ask questions exactly as a customer would, with additional diagnostic information that customers do not see.

The test interface connects to the same retrieval pipeline and AI model that your live widget uses. The only difference is the added transparency: you can see which documents were retrieved, how they scored, and how the AI assembled its answer.
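The retrieval step can be illustrated with a minimal sketch: the query and each document chunk are embedded as vectors, every chunk is scored by cosine similarity against the query, and the top-scoring chunks are handed to the AI. The `embed` function below is a toy bag-of-words stand-in for illustration only, not Synaptiq's actual embedding model, and the chunks are invented examples.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline uses a neural embedding model.
    return Counter(re.findall(r"[a-z0-9$]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=3):
    # Score every chunk against the query and return the best matches.
    q = embed(query)
    scored = [(cosine(q, embed(c)), c) for c in chunks]
    return sorted(scored, reverse=True)[:top_k]

chunks = [
    "The Pro plan costs $49 per month and includes 10 users.",
    "Connect your HubSpot CRM to sync leads automatically.",
    "All plans are month-to-month with no long-term contracts.",
]
results = retrieve("How much does the Pro plan cost?", chunks)
```

The diagnostic view in the test interface surfaces exactly this kind of (score, chunk) ranking, which is why a low top score is a reliable signal that no document closely matches the question.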

Asking Your First Test Question

Start with a question you know the answer to. If you uploaded a pricing document, ask something like:

"How much does the Pro plan cost?"

The test interface returns three things:

  1. The AI's answer -- the response a customer would see.
  2. Source citations -- which documents (and which sections within those documents) the AI used to generate the answer.
  3. Relevance scores -- how closely each retrieved chunk matched the question.

Check all three. The answer should be correct. The sources should be the documents you expect. The relevance scores should be high (above 0.7 for the primary source).

What to Test

Work through these categories systematically to build confidence in your knowledge base coverage.

Pricing and Plans

  • "How much does [plan name] cost?"
  • "What is included in the [plan name] plan?"
  • "Do you offer annual billing?"
  • "Is there a free trial?"
  • "What happens if I exceed my usage limit?"

Product Capabilities

  • "Does [product] support [specific feature]?"
  • "Can I integrate with [specific tool]?"
  • "What is the maximum number of [users/connections/items]?"
  • "How does [feature] work?"

Comparisons and Positioning

  • "How is [your product] different from [competitor]?"
  • "Why should I choose [your product] over [alternative]?"
  • "What are the limitations of [your product]?"

Process and Logistics

  • "How do I get started?"
  • "What does onboarding look like?"
  • "How long does implementation take?"
  • "What support is included?"

Edge Cases

  • Questions with typos or informal language ("whats the price for pro")
  • Questions that combine multiple topics ("I need pricing for the enterprise plan and also want to know about SSO support")
  • Questions about things your product does NOT do
  • Vague questions ("tell me about your product")

Reviewing Source Citations

Every answer in the test interface includes an expandable Sources section. Click it to see the specific document chunks the AI retrieved.

What to look for:

Correct sourcing. If the customer asked about pricing, the sources should come from your pricing document, not your technical specs sheet. If the wrong document is being retrieved, the documents may have overlapping or ambiguous content.

Sufficient context. Sometimes the right document is retrieved, but the specific chunk does not contain enough information to fully answer the question. This means the relevant information is either missing from the document, buried in a long paragraph, or split across a chunk boundary.

Score distribution. Ideally, you want to see one or two chunks scoring above 0.8 and the rest dropping off. If all chunks score around 0.4-0.5, it means the question did not closely match anything in your knowledge base, and the AI is working with weak source material.
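These rules of thumb can be turned into a quick triage check. The sketch below simply encodes the thresholds described above (0.8 strong, 0.7 acceptable); they are guidance from this guide, not configurable Synaptiq settings.

```python
def assess_retrieval(scores):
    """Classify a retrieval result by the distribution of its relevance scores."""
    if not scores:
        return "no results"
    top = max(scores)
    if top >= 0.8:
        return "strong match"
    if top >= 0.7:
        return "acceptable match"
    return "weak match: question likely not covered in the knowledge base"

assess_retrieval([0.86, 0.41, 0.33])  # "strong match"
```

Running a check like this over a batch of test questions makes it easy to spot which topics fall into the weak-match bucket and need new or rewritten content.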

Improving Answers

When a test reveals a poor answer, the fix is almost always in your documents, not in the AI configuration. Here are the most common issues and how to resolve them.

Problem: The AI gives a wrong answer

Diagnose: Check the source citations. Is it pulling from the wrong document? Is there outdated content?

Fix: Find the document providing the incorrect information. Either update it with the correct information or delete it if it is entirely outdated. If two documents give conflicting information, consolidate into a single authoritative source.

Problem: The AI says "I don't know" when it should know

Diagnose: The question likely does not match any document content closely enough. Check if the topic is covered in your knowledge base at all.

Fix: Either add a new document covering that topic or add a section to an existing relevant document. Write the content using language similar to how customers ask about it. If customers say "free trial," make sure your document says "free trial" and not just "evaluation period."
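A simple way to audit for this vocabulary mismatch is to check whether the phrases customers actually use appear verbatim in your documents. This is a rough sketch (exact substring matching only, no stemming or synonyms); the document text and phrases are invented examples.

```python
def missing_vocabulary(document, customer_phrases):
    """Return the customer phrases that never appear verbatim in the document."""
    doc = document.lower()
    return [p for p in customer_phrases if p.lower() not in doc]

doc = "New accounts get a 14-day evaluation period with full access."
gaps = missing_vocabulary(doc, ["free trial", "evaluation period"])
# gaps == ["free trial"]
```

Any phrase reported as missing is a candidate to add to the document, so retrieval matches the words customers type rather than your internal terminology.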

Problem: The answer is correct but incomplete

Diagnose: The retrieved chunk probably contains the core fact but lacks surrounding detail.

Fix: Expand the relevant section in your source document. Add specifics, conditions, exceptions, and examples. For instance, if your pricing section says "$49/month" but does not mention what that includes, add the inclusions.

Problem: The AI mixes up information from different products or plans

Diagnose: Check if similar products or plans are described in the same document section without clear differentiation.

Fix: Restructure the document so each product or plan has its own clearly headed section. Use the product name in every relevant paragraph, not just at the top. Instead of "This plan includes 5 users," write "The Pro plan includes 5 users."

Problem: The answer is too generic

Diagnose: The source documents likely use vague, marketing-style language instead of specifics.

Fix: Replace generalities with concrete details. Change "industry-leading performance" to "average response time of 145ms based on independent benchmarks." Change "seamless integration" to "native HubSpot CRM integration with bi-directional sync, plus a REST API for custom integrations."

Structuring Content for Better Retrieval

Based on common patterns from the test interface, these document structures consistently produce the best results:

Use a Q&A Format for FAQs

## Can I cancel my subscription at any time?

Yes. All Synaptiq plans are month-to-month with no long-term contracts.
You can cancel from your account settings at any time. Your access
continues through the end of your current billing period.

The heading is the question customers actually ask. The answer is direct and complete. This format retrieves extremely well because the chunk heading closely matches the customer's query.
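To see why this format retrieves so well, consider a simplified chunker that splits a document at each heading, keeping the heading attached to its answer. This is a sketch of the general technique, not Synaptiq's actual chunking implementation.

```python
def chunk_by_heading(markdown):
    """Split a markdown document into (heading, body) chunks at each '##' heading line."""
    chunks, heading, body = [], None, []
    for line in markdown.splitlines():
        if line.startswith("##"):
            if heading is not None:
                chunks.append((heading, "\n".join(body).strip()))
            heading, body = line.lstrip("#").strip(), []
        else:
            body.append(line)
    if heading is not None:
        chunks.append((heading, "\n".join(body).strip()))
    return chunks

faq = """## Can I cancel my subscription at any time?
Yes. All plans are month-to-month.

## Is there a free trial?
Yes, 14 days."""
chunks = chunk_by_heading(faq)
```

Because each chunk begins with the question itself, a customer query like "can I cancel anytime?" scores highly against exactly one chunk, and the complete answer travels with it.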

Use Comparison Tables for Feature Differences

## Plan Comparison

| Feature | Starter | Pro | Enterprise |
|---|---|---|---|
| Monthly price | $19 | $49 | Custom |
| Users included | 2 | 10 | Unlimited |
| API access | No | Yes | Yes |
| SSO | No | No | Yes |
| Support | Email | Email + Chat | Dedicated CSM |

Tables allow the AI to extract and present structured comparisons directly, rather than trying to assemble them from scattered paragraphs.

Use Clear Headings for Each Topic

## Integrations

### HubSpot Integration
Connect your HubSpot CRM to automatically sync leads, contacts, and deals...

### Calendar Integration
Book meetings via Cal.com or Calendly directly from the chat widget...

### Custom Integrations via API
For platforms without a native integration, use our REST API...

Each integration gets its own subsection, so a question about "HubSpot integration" retrieves exactly the right chunk without pulling in calendar or API details.

Common Issues and Troubleshooting

Documents show "Processing" for more than 5 minutes

Something likely went wrong during extraction. Delete the document and re-upload it. If the issue persists, check that the file is not corrupted and that it meets the supported format requirements. Image-only PDFs and password-protected files will fail processing.

Test results differ from live widget answers

Both use the same pipeline, but there are two things to check. First, confirm that all documents show a Ready status. Documents still processing are not available for retrieval. Second, if you recently updated documents, clear your browser cache and reload the test page to ensure you are not seeing cached results.

Relevance scores are consistently low across all questions

This usually indicates a mismatch between how your documents are written and how customers ask questions. Your documents may use internal terminology that customers do not use. Review the specific language in your documents and align it with customer vocabulary. Adding a glossary or synonym section to key documents can also help.

The AI answers from the wrong section of a long document

Long documents with many topics are harder to chunk precisely. The best fix is to split the document into smaller, single-topic files. A 40-page product manual should become a set of focused documents: one for pricing, one for features, one for setup, and so on.

The AI ignores recently uploaded documents

Verify the document status is Ready on the Knowledge Base page. If it still shows Processing or Error, the content is not yet available for retrieval. Also confirm you are testing against the correct workspace if you have multiple environments.

Building a Testing Routine

Do not treat testing as a one-time activity. Build it into your workflow:

  • After every document upload or update, test 3-5 questions related to that content.
  • Weekly, run through your 10-15 most common customer questions to make sure nothing has drifted.
  • After product launches or pricing changes, do a thorough pass across all affected topic areas.
  • Review actual conversations in your analytics dashboard periodically. When you see a question the AI handled poorly in a real conversation, test it in the test interface, fix the underlying document, and test again.
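The routine above can be scripted as a lightweight regression suite. In this sketch, `ask` stands in for however you query the test interface (here replaced by a stub), and the questions and expected keywords are illustrative placeholders for your own most common customer questions.

```python
# Each entry: a question plus keywords the answer must contain.
REGRESSION_SUITE = [
    ("How much does the Pro plan cost?", ["$49"]),
    ("Is there a free trial?", ["trial"]),
]

def run_suite(ask, suite=REGRESSION_SUITE):
    """Run each question through `ask` and report which keyword checks failed."""
    failures = []
    for question, keywords in suite:
        answer = ask(question)
        missing = [k for k in keywords if k.lower() not in answer.lower()]
        if missing:
            failures.append((question, missing))
    return failures

# Stub in place of the real test-interface call, for demonstration:
def fake_ask(question):
    return "The Pro plan costs $49/month with a 14-day trial."

assert run_suite(fake_ask) == []
```

Running a suite like this after every document update turns the weekly spot-check into a repeatable, mechanical pass.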

Consistent testing is how you keep your AI agent sharp. The knowledge base is a living system, not a set-and-forget configuration.
