The Rule
Change one thing per test. That’s it. If you change two things at once and performance shifts — up or down — you won’t know which change caused it. More clicks but lower ROAS? Was it the new image or the new headline? You’ll never find out, and your next call will be a guess.
This sounds obvious. In practice, almost nobody follows it.
The Three Layers of a Campaign
Think of your campaign configuration as three layers:
- Objective — what you’re optimizing for (purchases, add to cart, registrations, content views)
- Targeting — who sees the ad (audience, location, demographics, interests)
- Creative — what they see (image vs. video, short copy vs. long copy, button color, headline)
When you test, you change something in one layer and lock the other two.
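
To make the rule concrete, here’s a minimal sketch in Python. The `CampaignConfig` class and its field values are hypothetical and not tied to any ad platform’s API; the point is simply that a clean test changes exactly one layer, and the check below fails otherwise.

```python
from dataclasses import dataclass

# Hypothetical model of the three layers; field values are simplified
# to labels, and nothing here is tied to a real ad platform's API.
@dataclass(frozen=True)
class CampaignConfig:
    objective: str  # what you're optimizing for
    targeting: str  # who sees the ad
    creative: str   # what they see

def changed_layers(a: CampaignConfig, b: CampaignConfig) -> list[str]:
    """Return the names of the layers that differ between two variants."""
    return [layer for layer in ("objective", "targeting", "creative")
            if getattr(a, layer) != getattr(b, layer)]

control = CampaignConfig("purchase", "warm_audience", "image_a")
variant = CampaignConfig("purchase", "warm_audience", "image_b")

diff = changed_layers(control, variant)
assert len(diff) == 1, f"not a clean test: changed layers = {diff}"
print(f"Clean test: only {diff[0]} changed")
```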
Testing Creatives
This is the most common test. You keep the same objective and the same targeting, but run multiple ads with different creatives.
For example: four ads, four different images. Everything else — copy, CTA, targeting, campaign objective — stays identical. Now when one image outperforms the others, you know exactly why.
Other creative variables you can isolate:
- Format: static image vs. video vs. carousel
- Copy length: one-liner vs. three paragraphs
- Hook: different opening lines in the same ad structure
- CTA: “Buy now” vs. “Learn more” vs. “Get started”
One variable. Not two. Not “let’s try a new image AND shorter copy.” That’s two experiments disguised as one, and the results will be noise.
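
One way to keep yourself honest is to generate variants from a single control instead of writing each ad by hand. A sketch under the same hypothetical-config assumption as above, with the creative fields spelled out:

```python
from dataclasses import dataclass, replace

# Hypothetical ad definition; field names are illustrative, not a
# real platform schema.
@dataclass(frozen=True)
class Ad:
    objective: str
    audience: str
    copy: str
    cta: str
    image: str

# The control ad. Every field except `image` stays locked.
control = Ad(
    objective="purchase",
    audience="warm_audience",
    copy="Free shipping all week.",
    cta="Buy now",
    image="image_a.jpg",
)

# Four ads, four images, everything else identical by construction:
# replace() copies the control and overrides only the named field.
ads = [replace(control, image=img)
       for img in ("image_a.jpg", "image_b.jpg",
                   "image_c.jpg", "image_d.jpg")]
```

Because every variant is the control with exactly one field overridden, a second change can’t sneak in unnoticed.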
Testing Objectives
You can also test different campaign objectives against each other. Same targeting, same creatives — but one campaign optimizes for purchases, another for add-to-cart, another for content views.
A common concern: won’t these campaigns cannibalize each other if they target the same audience?
No. The ad platform handles this. When you set a campaign to optimize for purchases, you’re telling the system: “within this audience, find people most likely to buy.” When another campaign optimizes for registrations, the system finds people most likely to register. Same audience on paper — different subsets in practice. The platform splits them by behavioral signals tied to each campaign’s objective.
So you can safely run multiple campaigns to the same audience with different objectives. Just make sure the objective is the only variable:
- Same targeting
- Same creatives
- Same budget
- Same duration
- Launched at the same time
Two campaigns, one month, one difference: the conversion event. That’s a clean test.
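
If you manage your specs as data, the whole checklist collapses into one assertion. A minimal sketch, again with hypothetical field names that mirror the checklist above:

```python
from dataclasses import dataclass, fields

# Hypothetical spec mirroring the checklist above; not a platform API.
@dataclass(frozen=True)
class CampaignSpec:
    objective: str
    targeting: str
    creatives: tuple[str, ...]
    budget_pln: int
    duration_days: int
    start_date: str

def single_difference(a: CampaignSpec, b: CampaignSpec) -> str:
    """Raise unless exactly one field differs; return that field's name."""
    diffs = [f.name for f in fields(CampaignSpec)
             if getattr(a, f.name) != getattr(b, f.name)]
    if len(diffs) != 1:
        raise ValueError(f"expected exactly one difference, got {diffs}")
    return diffs[0]

shared = dict(targeting="warm_audience",
              creatives=("ad_1", "ad_2", "ad_3"),
              budget_pln=500, duration_days=30,
              start_date="2024-06-01")

campaign_a = CampaignSpec(objective="purchase", **shared)
campaign_b = CampaignSpec(objective="add_to_cart", **shared)

print(single_difference(campaign_a, campaign_b))  # -> objective
```

Anything else that drifts (a different start date, an extra creative) trips the check before the money is spent.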
Timing Matters
Always run test variants simultaneously and for the same period. If you run creative A this week and creative B next week, you’re not testing creatives — you’re testing weeks. Seasonality, day of the week, algorithm fluctuations, even news cycles can shift performance. Side by side, same window, same budget.
What This Looks Like in Practice
A clean test plan:
| Variable | Campaign A | Campaign B |
|---|---|---|
| Objective | Purchase | Add to Cart |
| Targeting | Same audience | Same audience |
| Creative | Same 3 ads | Same 3 ads |
| Budget | 500 PLN | 500 PLN |
| Duration | 30 days | 30 days |
| Start date | Same day | Same day |
One variable: the objective. After 30 days, you compare cost per result, ROAS, and funnel quality. If “Add to Cart” brings cheaper top-of-funnel traffic that converts downstream, you’ve learned something real. If “Purchase” gives fewer but higher-value conversions, that’s a real answer too.
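
The comparison itself is plain arithmetic: cost per result is spend divided by results, and ROAS is attributed revenue divided by spend. A sketch with placeholder numbers (not real campaign data), just to show the shape of the readout:

```python
# Placeholder numbers for illustration only; not real campaign data.
results = {
    "A (Purchase)":    {"spend_pln": 500, "results": 10, "revenue_pln": 2400},
    "B (Add to Cart)": {"spend_pln": 500, "results": 80, "revenue_pln": 1900},
}

for name, r in results.items():
    cost_per_result = r["spend_pln"] / r["results"]   # spend / results
    roas = r["revenue_pln"] / r["spend_pln"]          # revenue / spend
    print(f"{name}: {cost_per_result:.2f} PLN per result, ROAS {roas:.2f}")
```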
The Takeaway
Resist the urge to test everything at once. One variable, two variants, same conditions, same timeframe. It’s slower than throwing spaghetti at the wall — but it’s the only way to actually know what works. Every insight you get from a clean test compounds. Every insight from a messy test is a coin flip.