A/B Testing
Automatic creative and landing page testing to find what converts best for each channel.
How A/B Testing Works
Growity automatically runs A/B tests on your Telegram Ads to find the best-performing creative and landing page combinations for each target channel. You do not need to set up tests manually — the system creates, monitors, and resolves them as part of its normal optimization cycle.
The core idea is simple: your current best-performing ad is the champion. New variants (challengers) compete against it. The system measures which variant delivers the lowest cost per subscriber (CPA), declares a winner once it has enough data, and automatically promotes the winner across the rest of the campaign.
This process runs continuously. As soon as one test resolves, the system can start a new one with the next variant in the queue. Over time, your ad creatives improve through this cycle of testing, learning, and cascading wins to new channels.
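In rough pseudocode, the cycle looks like the sketch below. All names are hypothetical stand-ins for Growity's internal logic, and the statistical test is reduced to a plain CPA comparison; the point is only to make the champion/challenger/promote flow concrete.

```python
# Illustrative sketch of the continuous test-and-promote cycle described above.
# All names are hypothetical; in the real system a test only resolves once
# 90% statistical confidence is reached, not on a raw CPA comparison.

def challenger_wins(champion_cpa: float, challenger_cpa: float) -> bool:
    """Reduced stand-in for a resolved A/B test: lower cost per subscriber wins."""
    return challenger_cpa < champion_cpa

def optimize(champion: str, queue: list[str], observed_cpa: dict[str, float]) -> str:
    """Test queued variants one at a time; each winner becomes the new champion."""
    for challenger in queue:
        if challenger_wins(observed_cpa[champion], observed_cpa[challenger]):
            champion = challenger          # winner is promoted
        # the losing variant is disabled and the next queued test starts immediately
    return champion
```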
Types of Tests
Growity supports three types of A/B tests, each isolating a different variable:
| Test Type | What Changes | What Stays the Same | Best For |
|---|---|---|---|
| Creative test | Ad text (copy) | Landing page (promote URL) | Testing different hooks, angles, or CTAs |
| Landing test | Promote URL (channel link) | Ad text | Comparing channel pages or landing strategies |
| Combined test | Both text and landing page | — | Trying a completely different approach |
Creative tests are the most common. They allow you to compare different ad copy variations while keeping the landing page constant, giving you a clean signal about which message resonates better with the channel's audience.
Landing tests are useful when you operate multiple Telegram channels and want to discover which channel page converts best for a given audience. The ad text stays the same, isolating the effect of the landing page.
Combined tests change both variables simultaneously. They are less scientifically rigorous (you cannot attribute the result to either variable alone) but useful for testing fundamentally different approaches without running sequential tests.
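A minimal sketch of what each test type holds constant versus what it changes, assuming a challenger starts as a copy of the champion's settings. Field names such as `text` and `promote_url` are illustrative, not a documented schema.

```python
# Hypothetical illustration of the three test types: a challenger inherits
# whatever the test keeps constant and overrides whatever is being tested.

def make_challenger(champion: dict, test_type: str,
                    new_text: str | None = None,
                    new_url: str | None = None) -> dict:
    challenger = dict(champion)                # start from the champion's settings
    if test_type == "creative":
        challenger["text"] = new_text          # copy changes, landing page stays
    elif test_type == "landing":
        challenger["promote_url"] = new_url    # landing page changes, copy stays
    elif test_type == "combined":
        challenger["text"] = new_text          # both change at once
        challenger["promote_url"] = new_url
    return challenger
```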
Champion vs Challenger
Every A/B test has exactly one champion and one or more challengers:
- Champion — the current best performer. When a campaign starts, the first ad you import becomes the default champion. As tests resolve, the winner becomes the new champion.
- Challenger — a new variant competing against the champion. Challengers are created when you add new ad creatives or when the system generates variants through the AI Creative Engine.
The champion and its challengers run simultaneously on the same target channel, each receiving an equal opportunity for impressions. The system tracks CPA, CTR, and conversion rate independently for each variant.
This approach ensures a fair comparison — both variants face the same audience, at the same time, under the same market conditions. The only difference is the variable being tested.
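The three metrics tracked per variant reduce to standard ratios. The sketch below shows those definitions with hypothetical field names; it is not Growity's internal data model.

```python
from dataclasses import dataclass

# Per-variant stats, tracked independently for the champion and each challenger.
# Standard metric definitions; field names are illustrative.

@dataclass
class VariantStats:
    impressions: int
    clicks: int
    subscribers: int
    spend: float

    @property
    def ctr(self) -> float:
        """Click-through rate: clicks per impression."""
        return self.clicks / self.impressions if self.impressions else 0.0

    @property
    def conversion_rate(self) -> float:
        """Subscribers gained per click."""
        return self.subscribers / self.clicks if self.clicks else 0.0

    @property
    def cpa(self) -> float:
        """Cost per subscriber; lower is better."""
        return self.spend / self.subscribers if self.subscribers else float("inf")
```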
How Winners Are Determined
Growity uses statistical confidence to declare a winner, not gut feeling or arbitrary thresholds. A test resolves when the system is confident that one variant genuinely outperforms the other — not just by random chance.
The Confidence Threshold
The system requires a minimum of 90% statistical confidence before declaring a winner. This means there is less than a 10% probability that the observed CPA difference is due to random variation rather than a real difference in ad performance.
In practice, this means:
- Small CPA differences require more data to reach confidence. If variant A has a CPA of $2.10 and variant B has $2.15, the system needs many conversions to confirm this 2% difference is real.
- Large CPA differences reach confidence quickly. If variant A is at $1.50 and variant B is at $3.00, even a small sample size produces high confidence.
You can monitor the current confidence level for each running test in your dashboard's A/B Tests section. The confidence percentage increases as more conversion data accumulates.
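Growity's exact statistical method is not documented here. As one hedged illustration, a Bayesian comparison of each variant's click-to-subscriber rate produces this kind of confidence figure; the model below treats cost per click as observed, and all numbers are made up.

```python
import numpy as np

# One possible way to estimate "confidence that B's true CPA is lower than A's".
# Illustrative model only, not Growity's documented algorithm: cost per click
# is treated as observed, and the uncertainty sits in the conversion rate.

def win_probability(a: dict, b: dict, draws: int = 100_000, seed: int = 0) -> float:
    """a, b: dicts with 'spend', 'clicks', 'convs'. Returns P(CPA_b < CPA_a)."""
    rng = np.random.default_rng(seed)

    def cpa_samples(v: dict) -> np.ndarray:
        # Beta(1, 1) prior on the click-to-subscriber rate
        rate = rng.beta(1 + v["convs"], 1 + v["clicks"] - v["convs"], draws)
        cost_per_click = v["spend"] / v["clicks"]
        return cost_per_click / rate           # true CPA = cost per click / rate

    return float((cpa_samples(b) < cpa_samples(a)).mean())

# Large CPA gap ($3.00 vs $1.50): confidence is well above 90% with ~200 clicks.
print(win_probability({"spend": 30.0, "clicks": 200, "convs": 10},
                      {"spend": 30.0, "clicks": 200, "convs": 20}))
# Small CPA gap (~$2.14 vs $2.10): barely above 50% even after 1,000 clicks each.
print(win_probability({"spend": 210.0, "clicks": 1000, "convs": 98},
                      {"spend": 210.0, "clicks": 1000, "convs": 100}))
```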
Minimum Data Requirements
Before statistical analysis begins, each variant must accumulate a minimum amount of data:
- At least 50 clicks per variant
- At least 5 conversions (subscribers) per variant
- At least 48 hours of runtime
These minimums prevent premature conclusions from small sample sizes. Early data is volatile — a single lucky or unlucky conversion can dramatically skew CPA when the total count is low.
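As a rough illustration, the gate before analysis can be thought of as a simple check like the one below. The thresholds mirror the list above; the field names are hypothetical.

```python
from datetime import timedelta

# Minimum-data gate sketched from the thresholds listed above.
# Field names are hypothetical; this is not Growity's internal code.

MIN_CLICKS = 50
MIN_CONVERSIONS = 5
MIN_RUNTIME = timedelta(hours=48)

def ready_for_analysis(variants: list[dict], runtime: timedelta) -> bool:
    """Statistical analysis begins only once every variant clears all three minimums."""
    if runtime < MIN_RUNTIME:
        return False
    return all(v["clicks"] >= MIN_CLICKS and v["conversions"] >= MIN_CONVERSIONS
               for v in variants)
```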
Cascading Wins
One of Growity's most powerful features is automatic cascading of winning creatives across channels. When a test resolves and a winner is found on one channel, the winning creative is automatically deployed to other channels in the same campaign.
How Cascading Works
- A creative test resolves on Channel A — Variant 2 beats the champion with 93% confidence.
- Variant 2 becomes the new champion on Channel A.
- The system identifies other channels in the same campaign that are still running the old champion.
- New challenger ads with Variant 2's text are created on those channels.
- Each channel runs its own independent test to confirm the improvement works in its specific audience context.
This is important: cascading does not blindly replace creatives everywhere. It creates new tests on other channels. A creative that won on a crypto news channel may not work on a tech channel — the system verifies this through per-channel testing.
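Sketching that behaviour: when a test resolves, the system selects sibling channels that are still running the old champion and queues the winning text as a new challenger there, rather than overwriting their ads. The type and field names below are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical model of cascading: the winning text is queued as a new
# challenger on sibling channels, not copied over their existing ads.

@dataclass
class Channel:
    name: str
    champion_text: str

@dataclass
class Campaign:
    channels: list[Channel] = field(default_factory=list)

def cascade_targets(campaign: Campaign, source: Channel,
                    old_champion_text: str) -> list[Channel]:
    """Channels that should start a new creative test with the winning text."""
    targets = []
    for channel in campaign.channels:
        if channel is source:
            continue                       # the win already applies on the source channel
        if channel.champion_text != old_champion_text:
            continue                       # this channel has already moved on
        # On each target, the winning text becomes a challenger and must beat
        # that channel's own champion before it is promoted there.
        targets.append(channel)
    return targets
```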
Why This Matters
Without cascading, you would need to manually duplicate winning creatives across dozens of channels. With cascading, improvements spread automatically. A single creative breakthrough can improve CPA across your entire campaign within days, not weeks.
Tip: Cascading only triggers when a test resolves with a clear winner. If a test resolves with no statistically significant difference (both variants perform similarly), no cascading occurs — there is nothing to spread.
Test Lifecycle
Every A/B test goes through a predictable lifecycle:
| Phase | Duration | What Happens |
|---|---|---|
| Setup | Instant | Challenger ad is created on the same channel as the champion, with matching CPM and budget |
| Collecting | 2–14 days | Both variants run simultaneously; system accumulates click and conversion data |
| Analysis | Continuous | Statistical confidence is recalculated as new data arrives; test resolves when threshold is met |
| Resolution | Instant | Winner becomes the new champion; loser is disabled; cascading begins if applicable |
The Collecting phase duration varies depending on traffic volume and the magnitude of the CPA difference between variants. High-traffic channels with clear winners resolve in 2–3 days. Low-traffic channels or closely matched variants may take 1–2 weeks.
You can see all running tests and their current phase in the A/B Tests card on your dashboard. Each test displays its confidence level, variant CPAs, and total subscribers contributed by each variant.
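Viewed as a state machine, the lifecycle is easy to reason about. The phase names below match the table; the transition logic is a deliberate simplification, since in practice Collecting and Analysis overlap as confidence is recalculated on every new data point.

```python
from enum import Enum

# The four phases from the table above, as a simple state machine.
# Transition logic is an illustrative simplification, not Growity internals.

class Phase(Enum):
    SETUP = "setup"
    COLLECTING = "collecting"
    ANALYSIS = "analysis"
    RESOLUTION = "resolution"

def next_phase(phase: Phase, has_minimum_data: bool, confidence: float) -> Phase:
    if phase is Phase.SETUP:
        return Phase.COLLECTING                # challenger created instantly
    if phase is Phase.COLLECTING and has_minimum_data:
        return Phase.ANALYSIS                  # minimums met, confidence recalculation begins
    if phase is Phase.ANALYSIS and confidence >= 0.90:
        return Phase.RESOLUTION                # winner promoted, loser disabled, cascading
    return phase                               # otherwise keep collecting data
```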
Best Practices
Test One Variable at a Time
Creative tests give the clearest signal because they isolate a single variable. When possible, prefer creative tests or landing tests over combined tests. If you change both the ad text and the landing page simultaneously, you cannot determine which change drove the improvement (or decline).
Keep a Queue of Variants Ready
The system resolves tests and immediately starts new ones if variants are available. Keep 3–5 creative variants queued up so testing never stops. Continuous testing compounds improvements over time — each winning creative becomes the new baseline for the next round of tests.
Do Not Edit Ads During Tests
Editing a champion or challenger ad in the Telegram Ads interface while a test is running invalidates the results. The system detects external modifications and flags them in your activity feed. Always make creative changes through Growity so the testing framework stays intact.
Let Tests Resolve Naturally
Resist the urge to end tests early. If you see one variant leading after 24 hours, it does not mean that variant is truly better — early data is noisy. The confidence threshold exists specifically to protect you from acting on incomplete information. Let the system reach its confidence target before drawing conclusions.
Use Different Creative Angles
Testing small wording changes ("Join us" vs "Join now") rarely produces meaningful CPA differences. Instead, test fundamentally different approaches:
- A question hook vs a social proof lead
- Short, punchy copy vs detailed, informative copy
- Benefit-focused ("Get daily insights") vs fear-focused ("Stop missing opportunities")
- Specific numbers ("73% of traders...") vs general claims ("Most traders...")
Bigger creative differences produce bigger CPA gaps, so tests resolve faster and teach you more about what your audience responds to.
Reading Test Results
When a test resolves, you will see an activity feed entry like:
A/B test resolved: Variant 2 wins in Tech Channel Growth (94.2% confidence)
The key metrics to review for each resolved test:
- CPA comparison — the winning variant's CPA vs the loser's. A $2.10 vs $3.40 result is a strong signal; $2.10 vs $2.25 is a marginal improvement.
- Confidence level — how certain the system is about the result. Higher confidence means more reliable results.
- Sample size — total subscribers from each variant. More data means more reliable CPA calculations.
- Cascading status — whether the winner has been deployed to other channels and how those follow-up tests are performing.
Over time, you will build a library of tested creatives and learn which angles, hooks, and messaging styles work best for your specific audience. This institutional knowledge is one of the most valuable outputs of systematic A/B testing.