
A/B Testing

Run experiments on your widget's opening message to find what drives the most conversations and qualified leads.


A/B testing lets you run controlled experiments on your chat widget to find what drives more conversations and qualified leads. The most impactful variable to test is your opening message — the first thing a visitor sees when the widget opens. Small changes to this message can dramatically change whether visitors engage.

Configure A/B tests under Admin Dashboard > Settings > Widget > A/B Testing.


What You Can Test

Currently, Synaptiq supports A/B testing on:

  • Opening message — the greeting text the AI uses to start the conversation
  • Proactive trigger message — the message shown when the widget auto-opens (see Proactive Triggers)
  • Chat bubble tooltip — the hover text on the chat trigger button (e.g., "Chat with us" vs. "Get a demo")

Creating an Opening Message Experiment

Step 1: Set Up the Experiment

  1. Go to Settings > Widget > A/B Testing
  2. Click + New Experiment
  3. Name your experiment (internal only — e.g., "Pricing page greeting — April 2026")
  4. Select the experiment type: Opening Message

Step 2: Define Your Variants

You need at least two variants: a control and a challenger.

Variant A (Control): Your current opening message. This is what visitors currently see.

Variant B (Challenger): Your alternative message. This is what you're testing.

You can add up to 4 variants per experiment. Add more variants when you want to test multiple hypotheses at once, but note that each additional variant splits your traffic further, so the experiment takes longer to reach statistical significance.

Example variants:

| Variant | Message |
| --- | --- |
| A (Control) | "Hi there! I'm Alex, your sales assistant. How can I help you today?" |
| B | "Hi! Quick question — are you looking for pricing, a demo, or something else?" |
| C | "Welcome! I can get you a live demo booked in under 2 minutes. Interested?" |
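An experiment like the one above can be thought of as a simple configuration object. The structure and field names below are illustrative only, not Synaptiq's actual configuration schema or API:

```python
# Illustrative experiment definition. Field names are hypothetical,
# not Synaptiq's actual configuration schema.
experiment = {
    "name": "Pricing page greeting — April 2026",  # internal name only
    "type": "opening_message",
    "goal": "lead_qualified",
    "variants": [
        {"id": "A", "role": "control",
         "message": "Hi there! I'm Alex, your sales assistant. How can I help you today?"},
        {"id": "B", "role": "challenger",
         "message": "Hi! Quick question — are you looking for pricing, a demo, or something else?"},
        {"id": "C", "role": "challenger",
         "message": "Welcome! I can get you a live demo booked in under 2 minutes. Interested?"},
    ],
}

# Experiments support at most 4 variants, including the control
assert 2 <= len(experiment["variants"]) <= 4
```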

Step 3: Set Traffic Split

Decide what percentage of visitors sees each variant. By default, traffic is split evenly:

  • 2 variants: 50% / 50%
  • 3 variants: ~33% / ~33% / ~33%
  • 4 variants: 25% / 25% / 25% / 25%

You can adjust the split manually if you want to keep most traffic on a proven control while still testing a challenger (e.g., 80% control / 20% challenger).

Step 4: Set Your Goal

Choose the primary metric that determines a winner:

| Goal | Description |
| --- | --- |
| Conversation started | Visitor sends at least one message |
| Contact info captured | Visitor provides email or phone |
| Lead qualified | Lead score exceeds your qualification threshold |
| Meeting booked | Visitor books a calendar event |

Recommendation: Use Lead qualified as your goal whenever possible. It measures the full-funnel impact — a message that drives more conversations but fewer qualified leads is not actually winning.

Step 5: Launch

Click Launch Experiment. The experiment starts immediately for new visitors. Returning visitors who saw a variant in a previous session continue to see the same variant for consistency.
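Sticky assignment like this is commonly implemented by hashing a stable visitor ID into a bucket, so the same visitor always lands on the same variant across sessions. A minimal sketch of the general technique (not Synaptiq's internal implementation):

```python
import hashlib

def assign_variant(visitor_id: str, experiment_id: str, weights: dict[str, float]) -> str:
    """Deterministically map a visitor to a variant.

    Hashing (experiment_id, visitor_id) yields a stable bucket in [0, 1),
    so returning visitors always see the same variant, and different
    experiments split traffic independently of each other.
    """
    digest = hashlib.sha256(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
    cumulative = 0.0
    for variant, weight in weights.items():
        cumulative += weight
        if bucket < cumulative:
            return variant
    return list(weights)[-1]  # guard against floating-point rounding

# 80% control / 20% challenger split
weights = {"A": 0.8, "B": 0.2}
# The same visitor always gets the same variant
assert assign_variant("visitor-123", "exp-1", weights) == assign_variant("visitor-123", "exp-1", weights)
```

Because the bucket depends only on the hash, no per-visitor state needs to be stored to keep assignments consistent.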


Reading Your Results

The Results Dashboard

Navigate to Settings > Widget > A/B Testing > [Experiment Name] to view live results.

The results dashboard shows:

| Column | Description |
| --- | --- |
| Variant | The message text |
| Visitors | Total unique visitors who saw this variant |
| Conversations | Visitors who sent at least one message |
| Conversation rate | Conversations ÷ Visitors |
| Qualified leads | Leads that hit your qualification threshold |
| Qualification rate | Qualified leads ÷ Conversations |
| Goal rate | Performance on your primary goal metric |
| vs. Control | Percentage improvement over Variant A |
| Confidence | Statistical confidence level (see below) |
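The derived columns are simple ratios over raw counts. With illustrative numbers (not real dashboard data):

```python
def summarize(visitors: int, conversations: int, qualified: int) -> dict:
    """Compute the derived dashboard ratios from raw counts."""
    return {
        "conversation_rate": conversations / visitors,
        "qualification_rate": qualified / conversations,
    }

# Hypothetical counts for a control and a challenger variant
control = summarize(visitors=1000, conversations=120, qualified=30)
challenger = summarize(visitors=1000, conversations=150, qualified=45)

# "vs. Control": relative lift of the challenger over the control
lift = challenger["qualification_rate"] / control["qualification_rate"] - 1
print(f"{lift:+.0%}")  # +20%
```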

Understanding Statistical Confidence

The confidence percentage indicates how likely it is that the observed difference is real and not due to random chance.

| Confidence | Interpretation | Action |
| --- | --- | --- |
| < 70% | Inconclusive | Keep running — not enough data yet |
| 70–85% | Emerging signal | Worth watching, not yet conclusive |
| 85–95% | Good confidence | Consider declaring a winner for low-stakes tests |
| > 95% | High confidence | Safe to declare a winner and ship the change |

Synaptiq calculates confidence using a two-tailed proportion test. The dashboard highlights cells in green when a variant reaches 95% confidence over the control.

Rule of thumb: Wait for at least 200 conversations per variant before drawing conclusions from the data.
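A two-tailed proportion test of the kind described above can be sketched with the standard pooled z-test. This is a generic statistics example, not Synaptiq's exact implementation:

```python
import math

def confidence(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-tailed pooled z-test for a difference between two proportions.

    Returns 1 - p_value: the confidence that the observed difference
    between the two conversion rates is not due to random chance.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p-value
    return 1 - p_value

# Hypothetical counts: control converts 120/1000, challenger 160/1000
c = confidence(120, 1000, 160, 1000)
print(f"{c:.1%}")  # above the 95% threshold: safe to declare a winner
```

Small samples produce low confidence even for large observed differences, which is why the 200-conversations-per-variant rule of thumb matters.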


Declaring a Winner

When you're ready to end the experiment and ship a winner:

  1. In the experiment results, click Declare Winner
  2. Select the winning variant
  3. Click Apply as Default — the winning message becomes the new default for 100% of visitors
  4. The experiment is archived with its results preserved for reference

If you want to stop the experiment without applying a winner (e.g., results were inconclusive), click Stop Experiment. Your original control message continues as the default.


Best Practices

What to Test First

If you're new to A/B testing the chat widget, start with these high-impact hypotheses:

  1. Question vs. statement: Does a direct question ("Looking for pricing?") outperform a statement greeting ("I'm here to help")?
  2. Specificity: Does naming the next step ("Book a demo in 2 minutes") outperform a generic open ("How can I help?")?
  3. Urgency: Does time-based framing ("Our team can chat now") outperform neutral phrasing?
  4. Personalization: Does a page-specific message ("I see you're looking at our pricing...") outperform a generic one?

What NOT to Change Mid-Experiment

Once an experiment is running, do not:

  • Change the variant messages
  • Change the traffic split
  • Change the goal metric

Any of these changes invalidates prior data and restarts the confidence calculation. If you need to make changes, stop the current experiment and start a new one.

Seasonal Effects

Be mindful of running experiments during unusual traffic periods (product launches, conference weeks, end-of-quarter pushes). Traffic quality during these periods is not representative of your baseline, which can skew results. Pause experiments during planned traffic spikes if your sample is sensitive to composition shifts.


Next Steps

  • Proactive Triggers — A/B test trigger messages, not just the default greeting
  • Choosing a Theme — ensure your widget visuals are on-brand before testing copy
  • ROI Report — track whether your winning variant improves pipeline-level metrics
