What ASC actually does (and what it does not)
Advantage+ Shopping Campaigns (ASC) are Meta’s AI-driven campaign type built specifically for eCommerce. ASC consolidates prospecting and retargeting into a single campaign, letting Meta’s Andromeda delivery system decide who sees your ads and when.
The appeal is obvious. Instead of managing dozens of ad sets with layered targeting, you give Meta a set of creatives and a conversion objective, and it finds buyers across its entire audience pool. Meta reports that advertisers using ASC see a 17% improvement in cost per acquisition compared to business-as-usual campaigns.
But ASC is not a set-it-and-forget-it solution. It is a tool. And like any tool, the result depends entirely on how you configure it and what you feed it. Most underperforming ASC accounts we inherit share the same problems: wrong conversion objective, insufficient creative volume, no separation between testing and scaling, and no structural guardrails to prevent the algorithm from taking the path of least resistance.
How to set up ASC for new customer acquisition
In a study of 15 A/B tests, Meta found that ASC drove 12% lower cost per purchase compared to business-as-usual campaigns. But that improvement only shows up when the setup is correct. The default ASC configuration optimises for total purchases. This sounds right, but it creates a problem. Meta’s algorithm will always gravitate toward the easiest conversions, and the easiest conversions are people who already know your brand. Left unchecked, your “prospecting” campaign quietly becomes a retargeting campaign.
Set a custom conversion for new customers. This is the single most important configuration decision in your ASC setup. By creating a custom conversion event that fires only on first-time purchases, you tell the algorithm exactly what you want: new buyers, not repeat orders.
How to implement it:
- Create a custom conversion in Events Manager that filters purchase events to first-time buyers only. You can do this using customer list exclusions or by passing a “new customer” parameter through your Conversions API (CAPI) integration
- Set this custom conversion as your optimisation event in the ASC campaign settings
- Set the existing customer budget cap to 20-30% of total campaign spend. This forces Meta to allocate the majority of budget toward genuinely new audiences
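To make the first-time-buyer signal concrete, here is a minimal sketch of a server-side purchase event with a new-customer flag attached. It assumes your order system already knows whether the buyer is new; the `new_customer` key is an illustrative custom property (CAPI’s `custom_data` accepts custom parameters), and the custom conversion filter in Events Manager must match whatever name you choose:

```python
import hashlib
import time

def build_capi_purchase_event(email: str, value: float, currency: str,
                              is_new_customer: bool) -> dict:
    """Build a Conversions API purchase event carrying a new-customer flag.

    "new_customer" is an example custom property, not a built-in CAPI
    field; the custom conversion in Events Manager must filter on the
    same parameter name.
    """
    return {
        "data": [{
            "event_name": "Purchase",
            "event_time": int(time.time()),
            "action_source": "website",
            "user_data": {
                # CAPI expects emails normalised (trimmed, lowercased)
                # and SHA-256 hashed
                "em": [hashlib.sha256(email.strip().lower().encode()).hexdigest()],
            },
            "custom_data": {
                "currency": currency,
                "value": value,
                "new_customer": is_new_customer,  # illustrative flag name
            },
        }]
    }

payload = build_capi_purchase_event("buyer@example.com", 59.90, "USD", True)
```

The resulting payload would then be POSTed to the Conversions API endpoint for your pixel, alongside your access token.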
The existing customer budget cap is your structural guardrail. Without it, ASC will over-index on warm audiences because they convert at higher rates and lower costs. That looks good in your ROAS column but does nothing for growth. A 20-30% cap keeps some budget flowing to existing customers (which helps the algorithm learn) while ensuring the bulk of your spend goes toward acquisition.
For a detailed comparison of when ASC outperforms manual campaigns and when it doesn’t, see our guide to Advantage+ vs. Manual Campaigns: which is right for your eCommerce store.
Creative is the targeting
Meta’s Andromeda delivery system represents a 10,000x increase in model capacity for ad personalisation, with over 100x improvement in feature extraction speed compared to the previous retrieval engine. This shift is the concept most advertisers get wrong about ASC: creative is the targeting. In a traditional manual campaign, you control targeting through audience parameters: interests, lookalikes, custom audiences. In ASC, you hand targeting over to Andromeda, and it makes targeting decisions based primarily on creative signals.
What does that mean in practice? The images, video, copy, and hooks in your ads are not just communicating your value proposition. They are telling the algorithm who to show them to. A UGC video of a 28-year-old woman unboxing your skincare product signals a completely different audience than a studio-shot product-on-white image with benefit-focused copy. With broad targeting and Andromeda, you are not testing audiences. You are testing creative. Every new ad is a hypothesis about who will buy and why.
This makes creative volume and diversity your primary performance levers in ASC. Not budgets. Not audience settings. Creative.
The implication is direct. If you are running 3 to 5 creatives in ASC and wondering why performance has stalled, the answer is almost always that you have saturated the audience those creatives can reach. Adding budget to the same creatives will not unlock new audiences. Adding new creative angles will.
Your creative mix needs to be intentionally diverse across multiple dimensions. Andromeda’s deployment achieved a +8% ads quality improvement on selected segments, and that improvement scales with creative diversity. Vary your formats (static, short-form video, UGC, carousel, long-form video). Vary your value propositions (price, quality, convenience, social proof, aspiration). Vary your hooks (questions, bold claims, testimonials, demonstrations, before-and-after). Each variation causes Andromeda to explore different audience segments, effectively doing your audience testing for you.
For our full creative testing methodology, see our creative testing framework for Meta Ads.
The three-stage creative maturity model
Not every ASC account is ready to scale the same way. The number of active proven winners in your scaling campaign determines your budget allocation across testing, scaling, and retargeting.
We define a “proven winner” as a creative that has graduated from testing with a CPA at or below your target, has generated at least 5 purchases, and is currently performing in your scaling campaign without signs of fatigue. Count your active winners, not lifetime winners or fatigued creatives.
Stage 1: Discovery (fewer than 3 active winners)
You need to find what works before you can scale it. Most of your budget should feed the testing machine. Scaling spend is limited because you don’t yet have enough proven creative to sustain it.
| Tier | Allocation | Purpose |
|---|---|---|
| Scaling | 40% | Limited winners to scale; avoid concentrating risk on 1-2 creatives |
| Testing | 50% | Maximum creative exploration to build your winner pipeline |
| Retargeting | 10% | Baseline warm audience capture |
Stage 2: Growth (3-8 active winners)
You have enough winners to scale meaningfully, but creative fatigue on broad targeting demands aggressive testing to keep the pipeline full. This is the most common operating stage for healthy accounts.
| Tier | Allocation | Purpose |
|---|---|---|
| Scaling | 60% | Strong winner pool supports increased scaling spend |
| Testing | 30% | Aggressive testing to outpace fatigue and expand the winner library |
| Retargeting | 10% | Baseline warm audience capture |
Stage 3: Maturity (8+ active winners)
Your winner library is deep enough to sustain heavy scaling spend. Testing shifts to maintenance mode, replacing fatigued creatives and exploring new angles rather than searching for your first hits.
| Tier | Allocation | Purpose |
|---|---|---|
| Scaling | 70% | Deep winner library supports maximum scaling |
| Testing | 20% | Maintenance testing to replace fatigued winners and explore new angles |
| Retargeting | 10% | Baseline warm audience capture |
When to transition between stages:
Move from Discovery to Growth when you have 3+ active winners performing at or below target CPA in your scaling campaign for at least 2 consecutive weeks. Move from Growth to Maturity when you have 8+ active winners AND your testing pipeline is consistently graduating 2+ new winners per month.
Watch for regression too. If your net winner flow (graduated minus fatigued) is negative for 2 consecutive months, move down one stage immediately. Your pipeline is not keeping up with fatigue.
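The stage thresholds and transition rules above can be sketched as a single lookup. The percentages come straight from the tables, and `monthly_graduations` encodes the Growth-to-Maturity pipeline condition (8+ winners AND 2+ graduations per month):

```python
def stage_allocation(active_winners: int, monthly_graduations: int = 0) -> dict:
    """Budget split (percent of total spend) by creative maturity stage."""
    if active_winners < 3:
        # Stage 1: Discovery — build the winner pipeline first
        return {"stage": "Discovery", "scaling": 40, "testing": 50, "retargeting": 10}
    if active_winners >= 8 and monthly_graduations >= 2:
        # Stage 3: Maturity — deep winner library plus a healthy pipeline
        return {"stage": "Maturity", "scaling": 70, "testing": 20, "retargeting": 10}
    # Stage 2: Growth — the most common operating stage
    return {"stage": "Growth", "scaling": 60, "testing": 30, "retargeting": 10}
```

Running this as part of the monthly assessment keeps the allocation decision mechanical rather than emotional.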
The 5x CPA testing rule
Every new creative gets a budget of 5x your target CPA to prove itself. This is a disciplined, emotion-free framework for evaluating creative performance.
If your target CPA is $40, each test creative gets $200 to run. Launch it in your testing campaign with broad targeting and Advantage+ placements. If it spends $200 with zero conversions, kill it. No exceptions. If it converts at or below target CPA with 5+ purchases, graduate it to your scaling campaign.
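As a sketch, the graduate/kill decision reads like this. The thresholds are the ones above; the "keep testing" branch covers creatives that have not yet spent their full test budget:

```python
def creative_test_decision(spend: float, purchases: int, target_cpa: float) -> str:
    """Apply the 5x CPA testing rule to a single test creative."""
    test_budget = 5 * target_cpa
    cpa = spend / purchases if purchases else float("inf")
    if purchases >= 5 and cpa <= target_cpa:
        return "graduate"      # proven winner: move to the scaling campaign
    if spend >= test_budget:
        # budget exhausted without graduating; check engagement signals
        # before deciding whether to iterate on the angle
        return "kill"
    return "keep testing"
```

A creative at $150 spend with 5 purchases against a $40 target CPA graduates; one at $200 with zero purchases is killed, no exceptions.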
Your testing budget divided by the test budget per creative gives you the number of test slots per month. At a $50K monthly budget in Growth stage, that is $15,000 in testing divided by $200 per creative, giving you 75 test slots per month or roughly 19 per week.
Not all test slots should go to the same type of creative. Apply a 70/20/10 rule within your testing budget:
- 70% iterations: Variations on proven winners. New hooks, different colours, alternate CTAs, changed thumbnails, first-frame swaps
- 20% new concepts: Fresh angles, new value propositions, different formats you haven’t tried
- 10% big swings: Unconventional creative, pattern interrupts, polarising angles, completely new approaches
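The slot math and the 70/20/10 split can be sketched together. The rounding here is illustrative; for some budgets the three buckets may not sum exactly to the slot count:

```python
def monthly_test_slots(monthly_budget: float, testing_share: float,
                       target_cpa: float) -> dict:
    """Test slots per month (testing budget / 5x CPA) and their 70/20/10 split."""
    testing_budget = monthly_budget * testing_share
    slots = int(testing_budget // (5 * target_cpa))
    return {
        "slots": slots,
        "iterations": round(slots * 0.70),    # variations on proven winners
        "new_concepts": round(slots * 0.20),  # fresh angles and formats
        "big_swings": round(slots * 0.10),    # pattern interrupts
    }

# The $50K Growth-stage example from above: $15,000 testing budget,
# $200 per creative at a $40 target CPA
monthly_test_slots(50_000, 0.30, 40)  # 75 slots
```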
Before killing a creative at the 5x threshold, check its early engagement signals. A thumb-stop rate above 25-30% or a link CTR above 1.5% suggests the concept has potential even if this execution didn’t convert. Document the angle and iterate rather than abandoning the direction entirely.
When ASC stops working: the spend concentration problem
The most common ASC failure mode is spend concentration. This happens when a small number of creatives absorb the vast majority of your campaign budget without producing proportional sales. You check your ad-level breakdown and find that 2 creatives are consuming 70-80% of spend, but their CPA is well above your target.
Why does this happen? Meta’s algorithm allocates budget based on predicted conversion probability. If your creative pool is small or lacks diversity, the algorithm has limited options and will concentrate spend on whatever it predicts will perform best, even if that prediction turns out to be wrong.
How to diagnose it:
- Pull your ASC ad-level spend report for the last 7 to 14 days
- Sort by spend (highest to lowest)
- If 2 to 3 ads are consuming more than 60% of total spend AND their CPA is above your target, you have a spend concentration problem
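The diagnosis above is easy to script against an ad-level export. A minimal sketch, assuming each row carries spend and purchase counts for the 7-to-14-day window:

```python
def diagnose_spend_concentration(ads: list[dict], target_cpa: float) -> bool:
    """Flag spend concentration: top 3 ads take >60% of spend AND their
    blended CPA sits above target.

    Each ad dict is assumed to look like:
    {"name": ..., "spend": ..., "purchases": ...}
    """
    total_spend = sum(a["spend"] for a in ads)
    top = sorted(ads, key=lambda a: a["spend"], reverse=True)[:3]
    top_spend = sum(a["spend"] for a in top)
    top_purchases = sum(a["purchases"] for a in top)
    top_cpa = top_spend / top_purchases if top_purchases else float("inf")
    return top_spend / total_spend > 0.60 and top_cpa > target_cpa
```

If the function returns true, apply the fixes below before touching the budget.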
How to fix it:
- Pause the high-spend underperformers. This forces the algorithm to redistribute budget across other creatives. It feels counterintuitive because Meta “chose” those ads, but Meta optimises for predicted outcomes, not guaranteed ones
- Set minimum spend thresholds on ad sets to force budget diversification across your creative pool. This prevents the algorithm from starving potentially strong creatives of the budget they need to exit the learning phase
- Add new creative angles immediately. Spend concentration is almost always a symptom of insufficient creative diversity. The algorithm is concentrating because it has nowhere else to go
The fix is never to increase budget on a campaign with concentrated spend. More budget flowing into the same underperforming creatives will not improve results. Fix the creative pool first, then scale.
For a deeper look at how the learning phase affects ASC performance and how to protect it, see our guide to understanding Meta’s learning phase.
Managing winner fatigue in your scaling campaign
Graduating a creative to scaling is not the end of the process. Every winner has a lifecycle, and broad targeting accelerates fatigue because Andromeda’s 10,000x model capacity increase means it serves winners to large audiences faster than ever.
Watch for these fatigue signals in your scaling campaign:
- CPA rising above 1.5x target over a 3-day rolling window
- Frequency above 2.5 within your key audience
- CTR declining 20%+ week over week
- CPM rising while conversion rate drops
When a winner fatigues, don’t discard the concept. Iterate on it. Change the first 3 seconds of a video or the headline of a static. Reshoot UGC with a different creator. Same creative angle, different execution. Refreshed creatives go back into the testing campaign and must re-earn their way into scaling through the 5x CPA rule. No shortcuts.
Set automated rules at the ad level to enforce this without emotional decision-making. A fatigue alert when CPA exceeds 1.5x target over 3 days with 5,000+ impressions. An emergency kill when CPA exceeds 2.5x target over 3 days with $500+ spend. These rules keep your scaling campaign clean without requiring you to check it every hour.
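Those two rules translate directly into a small decision function. A sketch, assuming you pull 3-day rolling CPA, impressions, and spend per ad; the thresholds are the ones above:

```python
def fatigue_action(cpa_3d: float, impressions_3d: int, spend_3d: float,
                   target_cpa: float) -> str:
    """Map the two ad-level automated rules onto a decision."""
    if cpa_3d > 2.5 * target_cpa and spend_3d >= 500:
        return "pause"   # emergency kill: well past acceptable CPA
    if cpa_3d > 1.5 * target_cpa and impressions_3d >= 5000:
        return "alert"   # fatigue warning: queue an iteration for testing
    return "ok"
```

In practice you would configure these as Meta automated rules rather than run them externally; the function just makes the thresholds explicit.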
Measuring ASC performance correctly
The biggest measurement mistake with ASC is evaluating it on blended ROAS. Because ASC runs across both new and existing customers, a healthy-looking ROAS number can mask the fact that most of your revenue is coming from people who would have purchased anyway.
Segment your reporting by customer type. If you have set up your custom conversion for new customers (as described above), you can now separate:
- New customer CPA: What you are actually paying to acquire a first-time buyer
- Existing customer ROAS: How effectively you are driving repeat purchases
- nCAC (new customer acquisition cost): Your prospecting spend divided by new customers acquired
This segmentation is what turns ASC from a black box into a transparent growth engine. Without it, you are guessing. With it, you can see exactly whether your spend is driving real acquisition or just recirculating through your existing customer base.
Evaluate your nCAC against customer lifetime value (LTV). If nCAC is less than 30-50% of first-year LTV, your acquisition economics are healthy and you have room to scale. If nCAC is approaching LTV, tighten your existing customer cap or improve your creative before increasing budget.
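The nCAC health check is simple arithmetic. A sketch of the 30-50% guideline, assuming you can already separate prospecting spend and count first-time buyers:

```python
def acquisition_health(prospecting_spend: float, new_customers: int,
                       first_year_ltv: float) -> str:
    """Evaluate nCAC against first-year LTV per the 30-50% guideline."""
    ncac = prospecting_spend / new_customers
    ratio = ncac / first_year_ltv
    if ratio <= 0.30:
        return "healthy: room to scale"
    if ratio <= 0.50:
        return "acceptable: monitor closely"
    return "at risk: tighten existing customer cap or improve creative"
```

For example, $10,000 of prospecting spend that acquires 500 new customers against a $100 first-year LTV gives an nCAC of $20, well inside the healthy range.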
For a full breakdown of how to set up first-party attribution and why Meta’s self-reported numbers can’t be trusted in isolation, see our eCommerce attribution guide.
ASC within the broader campaign architecture
ASC should not be your only campaign. It should be the centrepiece of a four-layer architecture: testing and scaling separated, retargeting handled by manual campaigns, and seasonal pushes isolated from your evergreen spend.
Layer 1: ASC scaling campaign (evergreen, full-funnel)
Your primary scaling vehicle. Only graduated winners live here. Broad targeting, Advantage+ placements, optimised for new customer purchases via custom conversion. Always on, always receiving graduated winners from testing.

Layer 2: Testing campaign (separate from ASC)
Every new creative starts here. Broad targeting, purchase objective, campaign budget optimisation. The 5x CPA rule governs what survives and what gets killed. Testing and scaling must live in separate campaigns. Mixing them contaminates your data and prevents the algorithm from optimising each campaign for its purpose.

Layer 3: Manual retargeting (education)
Manual campaigns serving content to warm audiences: site visitors, product viewers, cart abandoners. This layer runs testimonials, product education, behind-the-scenes content, and founder stories. Its job is to accelerate purchase decisions, not to push another product ad. This content often does not get direct attribution credit, and that is fine. It is working when your overall conversion rate and time-to-purchase improve.

Layer 4: ABO seasonal (event-driven)
Separate campaigns for sales events, product launches, and seasonal pushes. Running these in ABO instead of through ASC protects the learning phase and evergreen performance of your primary scaling campaign. When the sale ends, you pause the ABO without disrupting anything.
For a detailed walkthrough of how to build this structure, see our Meta Ads campaign structure guide.
Frequently Asked Questions
How many creatives should I run in ASC?
Your scaling campaign should contain only graduated winners, typically 8 to 15 active creatives at any given time. Your testing campaign should be running as many test slots as your budget allows (calculated by testing budget divided by 5x CPA). The goal is to always have creative in testing so you are never caught without a replacement when a winner fatigues.

What is the ideal existing customer budget cap for ASC?
We set it at 20 to 30 percent. This is high enough that the algorithm retains useful signal from your existing customer base, but low enough that the majority of spend goes toward genuine acquisition. If you set it below 10 percent, you may starve the algorithm of conversion data it needs for optimisation.

How long does ASC take to exit the learning phase?
Meta’s learning phase requires approximately 50 conversion events per week per ad set. For ASC, this typically means 7 to 14 days depending on your budget and conversion volume. During this period, expect higher CPAs and less stable delivery. Do not make changes to budget, creative, or targeting during the learning phase, as each change resets the counter.

Should I use Advantage+ Audience signals in ASC?
ASC automatically targets Meta’s full audience pool and uses your creative signals and conversion data to find buyers. You can add Advantage+ Audience suggestions, but these are hints to the algorithm, not hard constraints. For more on when audience signals help versus hurt, see our Advantage+ Audience guide.

What budget should I start with for ASC?
Start with enough to generate 50 conversions per week at your target CPA. If your target CPA is $30, that means a minimum weekly budget of $1,500 (roughly $215/day). Starting below this threshold means ASC cannot exit the learning phase efficiently, and you will see unstable, unreliable results.

How do I know when to move between creative maturity stages?
Run a monthly assessment. Count your active winners, measure how many graduated from testing in the last 30 days, and subtract how many were paused due to fatigue. If your net winner flow is positive and you meet the threshold for the next stage, shift your budget allocation. If net flow is negative for 2 months, drop down a stage regardless of current winner count.
What to Read Next
- Meta Ads for eCommerce: The Complete Guide (2026) — The full strategic framework including all campaign layers, attribution, and benchmarks
- Meta Ads Benchmarks for eCommerce: ROAS, CPC, CPM and CPA by Industry (2026) — Compare your ASC performance against industry-specific benchmarks
- Scaling Meta Ads for eCommerce Without Increasing CPA — The budget scaling framework for accounts ready to move beyond Growth stage
- How to Set Up Meta Ads for a Shopify Store (Step-by-Step, 2026) — The technical setup guide for Pixel, CAPI, and catalog integration