Chapter 6: The Scaling Framework

El Pico Navideño proved the content could work. The next question was whether the system behind it could scale without burning money on audiences that wouldn’t stick.

The first decision was not to set an average view duration (AVD) target upfront. The market would define what good looked like. El Pico Navideño established the baseline for long-form. A short-form video about a car drifting through Mexico City established the baseline for short-form. With both benchmarks in place, the optimization targets became clear.

The engagement problem

Scaling across LatAm has a structural wrinkle most campaigns ignore. CPMs in these markets are cheap, which makes acquiring subscribers easy. What is harder is acquiring subscribers who actually engage. Dead subs are worse than no subs, especially when advertising spend is the primary acquisition lever.

AVD was the quality filter. Not a solution to low-cost traffic, but a way to engineer engagement within a trusted network. Users watching with high AVD are invested in the content. That metric, alongside watch time, likes, earned views, and playlist adds, became the signal that decided where budget went and where it got reallocated.
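One way to picture that composite signal is a weighted quality score over the metrics named above. This is a hypothetical sketch: the metric normalization, weights, and values are illustrative assumptions, not the campaign's actual formula.

```python
# Hypothetical quality filter: combine normalized engagement signals into one
# score used to compare campaigns. Weights are illustrative assumptions.
def engagement_score(metrics: dict) -> float:
    """All inputs are assumed pre-normalized to a 0-1 scale."""
    weights = {
        "avd_ratio": 0.40,      # average view duration / video length
        "watch_time": 0.25,     # normalized total watch time
        "likes": 0.15,          # likes per view, normalized
        "earned_views": 0.10,   # views from shares/suggestions, normalized
        "playlist_adds": 0.10,  # playlist adds per view, normalized
    }
    return sum(weights[k] * metrics.get(k, 0.0) for k in weights)

campaign = {"avd_ratio": 0.62, "watch_time": 0.55, "likes": 0.4,
            "earned_views": 0.3, "playlist_adds": 0.2}
print(round(engagement_score(campaign), 3))  # → 0.496
```

Weighting AVD heaviest reflects the chapter's claim that watch depth, not raw subscriber count, is the filter that separates invested viewers from dead subs.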

Campaign structure and testing cadence

Targeting ran broad by region with genre-based affinity layers. Budget followed per-campaign AVD: when a campaign dropped below baseline, its spend was shifted to the campaigns that were performing.
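The reallocation rule above can be sketched as a small function: campaigns below the AVD baseline are zeroed out, and their freed budget is redistributed to performing campaigns in proportion to current spend. Campaign names, AVD values, and the proportional-split choice are illustrative assumptions, not the actual campaign setup.

```python
# Hypothetical sketch of the AVD-gated budget reallocation described above.
def reallocate(campaigns: dict, baseline_avd: float) -> dict:
    """campaigns maps name -> (avd_seconds, daily_budget)."""
    performing = {k for k, (avd, _) in campaigns.items() if avd >= baseline_avd}
    freed = sum(b for avd, b in campaigns.values() if avd < baseline_avd)
    total_perf = sum(b for k, (_, b) in campaigns.items() if k in performing)
    if total_perf == 0:
        # Nothing is beating baseline; leave budgets untouched for review.
        return {k: b for k, (_, b) in campaigns.items()}
    return {
        k: (b + freed * b / total_perf) if k in performing else 0.0
        for k, (_, b) in campaigns.items()
    }

budgets = reallocate(
    {"mx_longform": (95, 10.0), "co_longform": (40, 6.0), "ar_shorts": (80, 4.0)},
    baseline_avd=60,
)
# co_longform is below baseline, so its $6 moves to the two performing campaigns.
```

Total spend is conserved; only its distribution changes, which matches the chapter's framing of reallocation rather than cutting.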

Advertising compressed the testing timeline. Running content against real audiences quickly meant ideas could be tested, measured, and adjusted faster than organic traffic alone would have allowed. That iteration cycle likely contributed to stronger organic performance over time.

Shorts and long-form serve different functions. Long-form holds attention. Shorts drive action. AVD works as the baseline benchmark across both, with engagement metrics providing the context to make optimization calls.

The numbers

24,000 subscribers acquired, 18,000 retained, on roughly $500 in total spend. None of this required a large budget. It required careful targeting, a structured campaign approach, and a content schedule built around what the data was showing.
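The unit economics implied by those figures can be computed directly. The $500 figure is approximate per the chapter, so these are ballpark numbers.

```python
# Unit economics from the chapter's figures (spend is "roughly $500").
acquired, retained, spend = 24_000, 18_000, 500

retention_rate = retained / acquired    # 0.75
cost_per_acquired = spend / acquired    # ~$0.021 per subscriber
cost_per_retained = spend / retained    # ~$0.028 per retained subscriber

print(f"{retention_rate:.0%}, ${cost_per_acquired:.3f}, ${cost_per_retained:.3f}")
# → 75%, $0.021, $0.028
```

A 75% retention rate at under three cents per retained subscriber is the concrete version of the claim that careful targeting beats budget size.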

The human in the loop

The system does not run itself. Campaign reviews, content decisions, optimization calls, and creative direction all require someone watching the data and acting on it. What AI changed is the workload. Tasks that would once have required a team (writing lyrics, generating visuals, producing metadata, auditing outputs) can be batched or automated. That compression is what made it possible to run 20+ markets, hundreds of assets, and a live advertising campaign as a one-person operation.

The open question heading into the next phase is whether the same approach works outside of music entirely. The content engine is proven. The boundaries of it are still being mapped.