Run experiments on your widget's opening message to find what drives the most conversations and qualified leads.
A/B testing lets you run controlled experiments on your chat widget to find what drives more conversations and qualified leads. The most impactful variable to test is your opening message — the first thing a visitor sees when the widget opens. Small changes to this message can dramatically affect whether visitors engage.
Configure A/B tests under Admin Dashboard > Settings > Widget > A/B Testing.
Currently, Synaptiq supports A/B testing on the widget's opening message.
You need at least two variants: a control and a challenger.
- Variant A (Control): Your current opening message. This is what visitors currently see.
- Variant B (Challenger): Your alternative message. This is what you're testing.
You can add up to 4 variants per experiment. Add more variants when you want to test multiple hypotheses at once — but note that each additional variant receives less traffic, so reaching statistical significance takes longer.
Example variants:
| Variant | Message |
|---|---|
| A (Control) | "Hi there! I'm Alex, your sales assistant. How can I help you today?" |
| B | "Hi! Quick question — are you looking for pricing, a demo, or something else?" |
| C | "Welcome! I can get you a live demo booked in under 2 minutes. Interested?" |
Decide what percentage of visitors sees each variant. By default, traffic is split evenly across variants: 50/50 with two variants, roughly a third each with three, and 25% each with four. You can adjust the split manually if you want to keep most traffic on a proven control while still testing a challenger (e.g., 80% control / 20% challenger).
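Under the hood, a weighted split amounts to sampling a variant in proportion to its weight. Here is a minimal sketch of that standard technique (not Synaptiq's actual code):

```typescript
interface WeightedVariant {
  id: string;
  weight: number; // share of traffic, e.g. 0.8 for 80%
}

// Pick a variant with probability proportional to its weight.
function pickVariant(variants: WeightedVariant[]): WeightedVariant {
  const total = variants.reduce((sum, v) => sum + v.weight, 0);
  let roll = Math.random() * total; // uniform draw over the total weight
  for (const v of variants) {
    roll -= v.weight;
    if (roll <= 0) return v;        // the draw landed in this variant's slice
  }
  return variants[variants.length - 1]; // guard against floating-point drift
}

// 80% control / 20% challenger:
pickVariant([{ id: "A", weight: 0.8 }, { id: "B", weight: 0.2 }]);
```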
Choose the primary metric that determines a winner:
| Goal | Description |
|---|---|
| Conversation started | Visitor sends at least one message |
| Contact info captured | Visitor provides email or phone |
| Lead qualified | Lead score exceeds your qualification threshold |
| Meeting booked | Visitor books a calendar event |
Recommendation: Use Lead qualified as your goal whenever possible. It measures the full-funnel impact — a message that drives more conversations but fewer qualified leads is not actually winning.
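To make the goal definitions concrete, here is a hypothetical per-session evaluation. The `Session` shape and field names are assumptions for illustration, not Synaptiq's data model:

```typescript
type Goal = "conversation_started" | "contact_captured" | "lead_qualified" | "meeting_booked";

// Assumed session shape, for illustration only.
interface Session {
  messagesSent: number;  // messages the visitor sent
  email?: string;
  phone?: string;
  leadScore: number;
  meetingBooked: boolean;
}

// Did this session hit the experiment's primary goal?
function hitGoal(session: Session, goal: Goal, qualifyThreshold: number): boolean {
  switch (goal) {
    case "conversation_started": return session.messagesSent >= 1;
    case "contact_captured":     return Boolean(session.email || session.phone);
    case "lead_qualified":       return session.leadScore > qualifyThreshold;
    case "meeting_booked":       return session.meetingBooked;
  }
}
```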
Click Launch Experiment. The experiment starts immediately for new visitors. Returning visitors who saw a variant in a previous session continue to see the same variant for consistency.
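One common way to achieve this stickiness is to derive the assignment deterministically from a stable visitor ID, so the same visitor always lands in the same bucket. A sketch of that approach follows (Synaptiq may instead persist the assignment, e.g., in a cookie):

```typescript
// Map a visitor to a stable bucket in [0, 100) via a simple rolling hash.
function assignBucket(visitorId: string, experimentId: string): number {
  const key = `${experimentId}:${visitorId}`;
  let hash = 0;
  for (let i = 0; i < key.length; i++) {
    hash = (hash * 31 + key.charCodeAt(i)) >>> 0; // keep it a 32-bit value
  }
  return hash % 100;
}

// With an 80/20 split, buckets 0-79 see the control and 80-99 the challenger;
// a returning visitor hashes to the same bucket in every session.
const variant = assignBucket("visitor-123", "opening-msg-v1") < 80 ? "A" : "B";
```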
Navigate to Settings > Widget > A/B Testing > [Experiment Name] to view live results.
The results dashboard shows:
| Column | Description |
|---|---|
| Variant | The message text |
| Visitors | Total unique visitors who saw this variant |
| Conversations | Visitors who sent at least one message |
| Conversation rate | Conversations ÷ Visitors |
| Qualified leads | Leads that hit your qualification threshold |
| Qualification rate | Qualified leads ÷ Conversations |
| Goal rate | Performance on your primary goal metric |
| vs. Control | Percentage improvement over Variant A |
| Confidence | Statistical confidence level (see below) |
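To make the rate columns concrete, here is how they follow from the definitions above, using made-up counts:

```typescript
// Made-up counts for one variant.
const visitors = 1200, conversations = 420, qualifiedLeads = 63;

const conversationRate = conversations / visitors;        // 0.35 -> 35%
const qualificationRate = qualifiedLeads / conversations; // 0.15 -> 15%

// "vs. Control" is the relative lift of a variant's goal rate over Variant A's.
function liftVsControl(variantGoalRate: number, controlGoalRate: number): number {
  return (variantGoalRate - controlGoalRate) / controlGoalRate; // 0.10 -> +10%
}
```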
The confidence percentage indicates how likely it is that the observed difference is real and not due to random chance.
| Confidence | Interpretation | Action |
|---|---|---|
| < 70% | Inconclusive | Keep running — not enough data yet |
| 70–85% | Emerging signal | Worth watching, not yet conclusive |
| 85–95% | Good confidence | Consider declaring a winner for low-stakes tests |
| > 95% | High confidence | Safe to declare a winner and ship the change |
Synaptiq calculates confidence using a two-tailed proportion test. The dashboard highlights cells in green when a variant reaches 95% confidence over the control.
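For reference, a two-tailed two-proportion z-test of the kind described above can be sketched in a few lines. This is a textbook approximation, not Synaptiq's exact implementation:

```typescript
// Confidence that two conversion rates genuinely differ, via a pooled
// two-proportion z-test with a two-tailed p-value.
function testConfidence(succA: number, nA: number, succB: number, nB: number): number {
  const pA = succA / nA;
  const pB = succB / nB;
  const pooled = (succA + succB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  if (se === 0) return 0;            // degenerate sample, no signal
  const z = Math.abs(pA - pB) / se;  // test statistic
  return 1 - 2 * (1 - normalCdf(z)); // confidence = 1 - two-tailed p
}

// Standard normal CDF (Abramowitz-Stegun polynomial approximation).
function normalCdf(x: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(x));
  const d = 0.3989423 * Math.exp((-x * x) / 2);
  const upperTail =
    d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return x > 0 ? 1 - upperTail : upperTail;
}

// E.g., 84/400 vs. 63/420 qualified-lead conversions:
testConfidence(84, 400, 63, 420); // ≈ 0.975, above the 95% threshold
```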
Rule of thumb: Wait for at least 200 conversations per variant before drawing conclusions from the data.
When you're ready to end the experiment and ship a winner, apply the winning variant; it becomes the default opening message for all visitors going forward. If you want to stop the experiment without applying a winner (e.g., results were inconclusive), click Stop Experiment. Your original control message continues as the default.
If you're new to A/B testing the chat widget, start with high-impact hypotheses like those in the example variants above: swapping an open-ended greeting for a direct qualifying question, or leading with a concrete offer such as a fast demo booking.
Once an experiment is running, do not:

- Edit a variant's message text
- Change the traffic split
- Add or remove variants
- Switch the goal metric

Any of these changes invalidates prior data and restarts the confidence calculation. If you need to make changes, stop the current experiment and start a new one.
Be mindful of running experiments during unusual traffic periods (product launches, conference weeks, end-of-quarter pushes). Traffic quality during these periods is not representative of your baseline, which can skew results. Pause experiments during planned traffic spikes if your sample is sensitive to composition shifts.