Google Ads “Performance Composer” Overhaul: Quick Marketer Checklist

September 28, 2025
8 min read

Google has rolled out a major AI update that changes how bidding and creative work together in campaigns. Many marketers call it Performance Composer. The change affects campaign setup, bidding logic, and how assets are used.

This guide is a short, tactical checklist. It covers the exact settings to change, the migration steps to follow, and the A/B tests you must run. Read it, then act. Keep calm. Test carefully.

Why this matters now

AI now handles more bidding and creative choices inside Google Ads. That means old campaign rules may not work the same way. If you leave everything as-is, performance can drift.

You need to migrate, test, and watch results. Small moves now prevent big problems later. Google has also added more channel-level reporting and brand controls to help marketers see where ads run.

Quick checklist overview

Use this short checklist as your action map. Do these steps in order.

  1. Audit current campaigns and conversions.
  2. Copy key campaigns into controlled experiments.
  3. Update campaign settings for the new orchestration logic.
  4. Run paired A/B tests.
  5. Check asset and creative hygiene.
  6. Monitor channel reports and adjust.
  7. Roll changes to winners only.

You will find details for each step below.

Step 1 — Audit your account (30 to 60 minutes)

Start simple. Know what you have.

  1. Export a list of active campaigns and their objectives.
  2. Note which campaigns use Smart Bidding, Performance Max, or other automation features.
  3. Identify top conversions and where they are tracked. Is your conversion window the same across campaigns?
  4. Check budgets and pacing for top-performing campaigns.

Why do this? You need a clear baseline to measure any change. Without a baseline, you will guess. Guessing costs money.
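
If you prefer to pull this baseline programmatically, here is a minimal sketch using the official google-ads Python client. It assumes a configured google-ads.yaml; the customer ID is a placeholder, and field names can vary slightly between API versions.

```python
from google.ads.googleads.client import GoogleAdsClient

# Assumes a configured google-ads.yaml (developer token + OAuth) at this path.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

# Baseline export: campaign, channel type, bidding strategy, and budget.
query = """
    SELECT
      campaign.id,
      campaign.name,
      campaign.advertising_channel_type,
      campaign.bidding_strategy_type,
      campaign_budget.amount_micros
    FROM campaign
    WHERE campaign.status = 'ENABLED'
"""

# Placeholder customer ID (digits only, no dashes).
for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        print(
            row.campaign.id,
            row.campaign.name,
            row.campaign.advertising_channel_type.name,
            row.campaign.bidding_strategy_type.name,
            row.campaign_budget.amount_micros / 1_000_000,  # budget in account currency
        )
```

Save the output. It is the baseline you will compare every test against.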

Step 2 — Pause big changes and clone for testing

Never change your best campaign without a test version.

  1. Clone any campaign you plan to change.
  2. Name the clone with a test label and date.
  3. Keep the original running as control.

Run the test at 50/50 or similar so Google splits traffic. This gives real data under the new bidding and creative logic. Use experiments or drafts if available.
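
A tiny helper like the one below keeps clone names consistent. The label format is only a suggestion, not something Google requires.

```python
from datetime import date

def test_clone_name(original: str, change_label: str) -> str:
    """Suggested convention: ORIGINAL | TEST | what-changed | YYYY-MM-DD."""
    return f"{original} | TEST | {change_label} | {date.today():%Y-%m-%d}"

# Example: cloning a campaign to trial the new bidding option.
print(test_clone_name("Mid-Funnel Shopping", "ai-bidding"))
```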

Step 3 — Campaign settings to check and change

Here are the exact settings you must review.

  1. Conversion goals. Make sure the campaign uses the same conversion actions as the control. A mismatch will confuse bidding.
  2. Bidding strategy. If the platform suggests a new automated option, add it to the cloned campaign only. Do not flip the whole account at once.
  3. Budget and delivery. Keep the same daily or monthly budget in the test for fair comparison.
  4. Audience signals. For AI orchestration, provide clear audience signals. These are suggestions to the system, not hard limits.
  5. Asset groups and creatives. Group assets by theme. Provide at least three headlines and two images per group. More asset variety helps the system learn faster.

Make only one major change at a time in a test. Multiple simultaneous changes hide which one drove the result.
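
One way to enforce the one-change rule is to diff a simple settings snapshot before launching. The keys below are illustrative labels, not API field names.

```python
# Illustrative settings snapshots for the control and its clone.
control = {
    "conversion_actions": ("purchase",),
    "bidding_strategy": "current_automated",
    "daily_budget": 150.0,
    "audience_signals": ("past_purchasers",),
    "asset_groups": 2,
}
test = dict(control, bidding_strategy="performance_composer")  # one change only

changed = [key for key in control if control[key] != test[key]]
assert len(changed) == 1, f"Test changes more than one setting: {changed}"
print("Single variable under test:", changed[0])
```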

Step 4 — Creative hygiene checklist

Creative matters more than ever.

  1. Use clear, on-brand headlines and short descriptions.
  2. Ensure images are high quality and follow size rules.
  3. Include one strong call to action in each asset group.
  4. Remove duplicated or low-performing assets before testing.

Why clean creatives? The AI mixes and matches assets. A weak image can drag performance down. Treat assets like test subjects. Replace low performers quickly.
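
A quick pre-test sanity check on each asset group can catch duplicates and thin coverage. The inventory below is hypothetical; in practice you would export it from your account.

```python
from collections import Counter

# Hypothetical asset group inventory.
asset_group = {
    "headlines": ["Free shipping over $50", "New fall styles", "Free shipping over $50"],
    "images": ["hero_1200x628.png", "square_1200x1200.png"],
}

unique_headlines = set(asset_group["headlines"])
duplicates = [h for h, n in Counter(asset_group["headlines"]).items() if n > 1]

if duplicates:
    print("Remove duplicate headlines:", duplicates)
if len(unique_headlines) < 3:
    print(f"Add headlines: need at least 3 unique, have {len(unique_headlines)}")
if len(asset_group["images"]) < 2:
    print(f"Add images: need at least 2, have {len(asset_group['images'])}")
```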

Step 5 — Must-run A/B tests

You should always run these three tests when migrating.

  1. Bidding strategy test. Control: current bidding. Test: the new AI-driven bidding or Performance Composer option.
  2. Creative orchestration test. Control: current asset group. Test: reorganized assets and new headlines.
  3. Channel allocation test. Control: the existing channel mix. Test: full AI channel orchestration, if the new tool allows it.

Run tests for at least 2 to 4 weeks or until you have statistical confidence. Do not pause early. If a test loses, roll back the change and learn why. Use Google’s experiments or split testing features to keep traffic split cleanly.
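
"Statistical confidence" is worth making concrete. The sketch below runs a plain two-proportion z-test on conversion rate; the traffic numbers are made up, and the 1.96 cutoff corresponds to roughly 95 percent confidence.

```python
from math import sqrt

def lift_is_significant(conv_a, clicks_a, conv_b, clicks_b, z_crit=1.96):
    """Two-proportion z-test on conversion rate (control = a, test = b)."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    return z, abs(z) >= z_crit

# Hypothetical two-week totals for control vs. test.
z, significant = lift_is_significant(conv_a=180, clicks_a=6000, conv_b=220, clicks_b=6100)
print(f"z = {z:.2f}, significant at ~95%: {significant}")
```

In this made-up example the lift is not yet significant, which is exactly why you keep the test running instead of pausing early.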

Step 6 — Migration steps for your campaigns

Follow this safe migration path.

  1. Pick a low-risk campaign to start. Prefer non-brand or mid-budget campaigns.
  2. Clone it and apply the new settings.
  3. Run the A/B tests listed above.
  4. Review results weekly. Look at conversions, CPA, and conversion rate.
  5. If the test wins, migrate similar campaigns in batches of 5 to 10.
  6. Keep a rollback plan for each batch.

Small batches make it easier to find issues. If you migrate everything at once, you lose control.
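
If you track the rollout in a script or a spreadsheet export, a simple batching loop keeps each wave within the 5-to-10 range. The campaign names here are placeholders.

```python
# Placeholder list of campaigns that won their tests and are queued to migrate.
winners = [f"Campaign {i:02d}" for i in range(1, 23)]

BATCH_SIZE = 5  # stay within the 5-to-10 batch size from the checklist

def batches(items, size):
    """Yield consecutive slices of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

for week, batch in enumerate(batches(winners, BATCH_SIZE), start=1):
    # Record the rollback plan (the control's settings) before touching the batch.
    print(f"Batch {week}: migrate {batch}")
```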

Step 7 — Reporting and what to watch

New orchestration can hide where conversions come from. Use these checks.

  1. Channel and placement breakdowns. Watch YouTube, Display, Search, and Discover separately. Google has added better channel performance reports this year. Use them.
  2. Conversion path length. Does a channel show as a first touch only? Note downstream effects.
  3. Cost per conversion by channel. Some channels cost more but drive value later.
  4. Asset-level performance. Watch which headlines win.

Set alerts for big swings in CPA, cost, or conversion volume. If numbers jump, pause and review.
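
Alert logic can be as simple as comparing current CPA to your exported baseline. The channels, numbers, and 20 percent threshold below are all assumptions to adjust.

```python
# Hypothetical baseline vs. current CPA by channel, in account currency.
baseline_cpa = {"Search": 24.0, "YouTube": 31.0, "Display": 18.0}
current_cpa = {"Search": 25.1, "YouTube": 44.5, "Display": 17.2}

SWING_THRESHOLD = 0.20  # flag any channel whose CPA moves more than 20%

for channel, base in baseline_cpa.items():
    change = (current_cpa[channel] - base) / base
    if abs(change) > SWING_THRESHOLD:
        print(f"ALERT: {channel} CPA moved {change:+.0%} vs. baseline - pause and review")
```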

Real-life example

A small e-commerce brand migrated its mid-funnel campaign. They cloned the campaign, added three new headlines, and switched the clone to the suggested AI bidding mode. After two weeks, the test showed a 12 percent lower CPA and a higher conversion rate from video placements. They then migrated three more similar campaigns. The key was small steps and clean tests.

Why did it work? They gave the system varied creatives and kept conversion tracking aligned. The AI had what it needed to learn.

Troubleshooting common problems

Problem: Performance drops after migration

  1. Check conversion tagging. A missing tag often causes bad bidding (see the query sketch after this list).
  2. Review which assets were used most. Pull the weak ones.
  3. Pause the test and compare to the control.
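
For the tagging check, a quick query against the conversion actions resource shows what is enabled and counted. This reuses the client from the Step 1 sketch; field availability can vary by API version.

```python
# Reuses `client` and `ga_service` from the Step 1 sketch.
query = """
    SELECT
      conversion_action.name,
      conversion_action.status,
      conversion_action.include_in_conversions_metric
    FROM conversion_action
"""
for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        ca = row.conversion_action
        print(ca.name, ca.status.name, ca.include_in_conversions_metric)
```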

Problem: Too many channel placements

  1. Provide stronger audience signals or exclusions. AI will try many places if left free.
  2. Use placement exclusions for low-value sites.

Problem: Budget pacing oddities

  1. Keep budgets stable during tests. If pacing is off, pause and re-evaluate bids and budgets.

Final checklist before you launch wide

  1. All top conversions are tracked and identical across control and test.
  2. Tests are running for at least two full weeks.
  3. You have a naming convention for test campaigns.
  4. Creative assets are cleaned and grouped.
  5. You exported baseline reports for comparison.

Simple. Clear. Repeatable.

Conclusion

The Performance Composer-style changes are a step forward. They give AI more control over bidding and creative mixing. That can be great if you test and guide the system.

Will you see instant wins? Maybe. Will you avoid many costly mistakes? Yes, if you follow this checklist. Start small. Test one change at a time. Keep data in hand.

Ready to start? Clone one campaign and run the first test today. Small steps, steady wins.
