One Week to Stabilise and Scale a National Live TV Experience
Banijay • Live Voting Platform
'All Against One' is a highly interactive primetime entertainment format broadcast across Europe. Each episode engages hundreds of thousands of viewers simultaneously through a mobile application, allowing the population of an entire country to play directly against a contestant in the studio. The format requires the seamless processing of millions of concurrent guesses and votes, real-time aggregation of national data, and instant delivery of results to broadcast graphics during live television.
Adappt was engaged by Banijay, the world's largest independent television production group, to stabilise, operate, and scale the digital infrastructure behind the format across multiple countries, and to redevelop the play-along mobile application.
The Format: A Nation Playing Live
During each episode, a contestant in the studio is presented with a series of physical stunts and must estimate the outcome. Viewers at home submit their own guesses in real time through the show's companion mobile app. Throughout the show, the contestant competes directly against the country's aggregated average, aiming to win the most rounds and, with them, the cash prize. Viewers at home, conversely, hope that the national average beats the contestant, because one lucky voter then receives a call from the studio confirming them as the winner of as much as 100,000 euros.
Delivering this seamless experience is no small feat. It requires:
- Instant scalability: The platform sits on standby during the week, but must scale within minutes to as many as 1 million concurrent users logging in via the app and voting simultaneously during the broadcast itself.
- Mass registration throughput: Each show sees hundreds of thousands of new registrations in the first 5–10 minutes, precisely when the infrastructure needs to be rock solid to provide an excellent experience.
- High-volume voting: Upwards of 6 million votes per show across several rounds is not uncommon. Every vote must be captured and, because a prize fund is at stake, auditable in the event of any dispute.
- Real-time studio integration: The aggregated votes must be delivered to broadcast graphics systems in seconds. This is LIVE TV; there is no room for latency.
- Multi-country concurrency: Sweden, Norway, Denmark, Spain, Germany, and the Netherlands have all aired versions of the show, often with voting in one country happening at the same time as in another.
- Zero downtime tolerance: LIVE television cannot pause or repeat segments; there is no second chance.
Between broadcasts, only around 10 servers are required to support registrations, catch-up features, and in-app minigames; during the peak moments of pre-show registration and LIVE voting, the infrastructure can scale to 200–700 servers within minutes.
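As a sketch of how a pre-show scale-up of this kind can be automated (the group name, sizes, and timings below are illustrative assumptions, not Adappt's actual configuration), a scheduled action on an AWS Auto Scaling group might look like this:

```python
from datetime import datetime, timezone

import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical group name and timings: warm the fleet up ahead of the
# registration rush, well before the broadcast window opens.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="voting-api",
    ScheduledActionName="pre-show-warmup",
    StartTime=datetime(2024, 3, 2, 19, 30, tzinfo=timezone.utc),
    MinSize=200,          # floor raised for the broadcast window
    MaxSize=700,          # ceiling for peak voting
    DesiredCapacity=200,  # start warm; load-based policies take it higher
)
```

A scheduled floor handles the predictable part of the spike; reactive, load-based scaling then covers whatever the night actually brings.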
Initial Platform Assessment
Adappt inherited a long-running voting backend that had historically shown instability under high load. The previous provider could not support much more than 100,000 concurrent users, and broadcasters had become accustomed to regular technical issues during live shows.
Banijay engaged Adappt to:
- Modernise and upgrade the mobile application
- Operate the existing backend platform
- Architect and deliver a scalable, resilient infrastructure capable of supporting national-level participation
Whilst the initial mobile upgrades were delivered successfully, the true scale of what the infrastructure demanded was only revealed during the first series of shows Adappt was exposed to, which highlighted significant weaknesses in the inherited backend system.
The First Broadcast: A Near Disaster
During the first live show under Adappt's operation, a pan-European routing surge on AWS, coinciding with the Eurovision Song Contest, triggered extreme wave-pattern traffic within the system. The inherited backend was met by a deluge of votes, but the architecture was simply unable to handle the load and became unresponsive for several minutes. A very nervous time for all.
With no fallback mechanism in place, the platform failure caused viewer participation to halt entirely during key rounds of the broadcast.
This incident highlighted to Adappt the urgent need for swift architectural intervention.
Immediate Stabilisation: One Week to Restore Reliability
With only one week before the next live broadcast, Adappt implemented a comprehensive stability strategy focused on preserving end-user functionality even if backend services became unstable. We have all heard the saying 'the show must go on', and in LIVE TV there was no alternative: solutions had to be found swiftly to prevent any repeat of the weekend before.
Emergency Fallback Layer
Having workshopped the scenarios, Adappt's senior engineers developed a high-performance caching and failover system designed to:
- Serve all active content directly from memory-based JSON structures
- Maintain user access to questions, voting interfaces, and statistics
- Queue and safely process votes during backend restarts
- Seamlessly reintegrate with the main backend when available
This 'fail-safe' layer effectively insulated the mobile experience from backend instability. Even if core services failed or rebooted, users continued to interact with the show without disruption.
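The case study does not publish the implementation, but the pattern itself is well understood. A minimal sketch, assuming a conventional HTTP backend (the BACKEND_URL and its endpoints are hypothetical):

```python
import time
from collections import deque

import requests

BACKEND_URL = "https://backend.example.com"  # hypothetical endpoint


class FallbackLayer:
    """Sketch of the caching/failover idea: serve show content from
    memory and buffer votes while the backend is unreachable."""

    def __init__(self):
        self.content_cache = {}       # question_id -> JSON payload
        self.pending_votes = deque()  # votes buffered during an outage

    def refresh_cache(self):
        """Pull active show content into memory while the backend is healthy."""
        resp = requests.get(f"{BACKEND_URL}/content/active", timeout=2)
        resp.raise_for_status()
        for item in resp.json():
            self.content_cache[item["id"]] = item

    def get_content(self, question_id):
        """Reads are always answered from memory, so they survive a restart."""
        return self.content_cache.get(question_id)

    def submit_vote(self, vote):
        """Try the backend first; on failure, queue the vote for later replay."""
        try:
            requests.post(f"{BACKEND_URL}/votes", json=vote, timeout=1).raise_for_status()
        except requests.RequestException:
            self.pending_votes.append(vote)  # never drop a vote

    def flush_pending(self):
        """Replay queued votes once the backend reports healthy again."""
        while self.pending_votes:
            vote = self.pending_votes[0]
            try:
                requests.post(f"{BACKEND_URL}/votes", json=vote, timeout=2).raise_for_status()
                self.pending_votes.popleft()
            except requests.RequestException:
                time.sleep(0.5)  # backend still recovering; retry shortly
```

The key design choice is that the read path never depends on the backend being up, and the write path degrades to a queue rather than an error.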
Delivered in time for the second show, this interim solution ensured uninterrupted broadcasts and a seamless user experience, while in the background Adappt began preparing a full system rebuild to meet Banijay's scaling ambitions.
Full Platform Rebuild: Engineering for National-Scale Interactivity
Adappt proceeded to undertake a complete rebuild of the backend architecture to deliver reliability, elasticity, and broadcast-grade performance.
Key Capabilities Delivered Include:
1. Auto-Scaling Cloud Infrastructure
- Elastic scaling from 10 → 700 servers based on real-time load
- Intelligent load distribution to handle highly volatile traffic spikes
- Low operational cost via extensive use of AWS spot instances
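A minimal sketch of a spot-heavy scaling setup of this kind, using boto3 (the group and launch-template names are hypothetical; the real configuration is not published):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Mostly spot capacity above an always-on on-demand core.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="voting-api",
    MinSize=10,   # quiet weekday footprint
    MaxSize=700,  # broadcast-night ceiling
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "voting-api-template",
                "Version": "$Latest",
            },
            # Several instance types widen the spot pools available.
            "Overrides": [
                {"InstanceType": t}
                for t in ("c5.large", "c5a.large", "c6i.large", "m5.large")
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 10,               # core stays on-demand
            "OnDemandPercentageAboveBaseCapacity": 0, # everything above is spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)

# React to real-time load by tracking average CPU across the group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="voting-api",
    PolicyName="track-cpu-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```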
2. High-Throughput Registrations
- Distributed onboarding pipeline supporting hundreds of thousands of registrations in minutes
- Optimised database write patterns for burst load
- Reliable token management and authentication under extreme concurrency
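The essence of surviving a registration burst is decoupling acceptance from persistence: the web tier answers immediately, and writes are batched behind a queue. A minimal sketch, with SQLite standing in for the production database:

```python
import queue
import sqlite3
import threading
import uuid

# Stand-in store; production would be a clustered database.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE users (id TEXT PRIMARY KEY, email TEXT, token TEXT)")

intake = queue.Queue()  # registrations land here, decoupling web from DB


def register(email):
    """Web tier: accept instantly, issue a token, defer the write."""
    user_id, token = str(uuid.uuid4()), uuid.uuid4().hex
    intake.put((user_id, email, token))
    return token  # the app can proceed while the write is still queued


def writer(batch_size=500):
    """DB tier: drain the queue in batches, one commit per batch."""
    while True:
        batch = [intake.get()]
        while len(batch) < batch_size and not intake.empty():
            batch.append(intake.get())
        with db:  # single transaction for the whole batch
            db.executemany("INSERT INTO users VALUES (?, ?, ?)", batch)


threading.Thread(target=writer, daemon=True).start()
```

Batching turns hundreds of thousands of tiny writes into a much smaller number of large, cheap transactions, which is what makes the burst survivable.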
3. Real-Time Vote Aggregation
- Low-latency ingestion of millions of votes
- Real-time averaging and computation for national results
- Immediate transmission to studio graphics systems
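Computing a national average in real time does not require storing individual votes in the hot path; a running sum and count per round is enough. A minimal sketch (class and method names are illustrative):

```python
from collections import defaultdict


class RoundAggregator:
    """Running national average per round: O(1) per vote, O(1) to read."""

    def __init__(self):
        self.totals = defaultdict(float)  # round_id -> sum of guesses
        self.counts = defaultdict(int)    # round_id -> number of votes

    def ingest(self, round_id, guess):
        self.totals[round_id] += guess
        self.counts[round_id] += 1

    def national_average(self, round_id):
        """Snapshot for the studio graphics feed, available at any moment."""
        n = self.counts[round_id]
        return self.totals[round_id] / n if n else None


# Usage: the graphics feed can read a fresh average at any point.
agg = RoundAggregator()
agg.ingest("round-1", 42.0)
agg.ingest("round-1", 58.0)
assert agg.national_average("round-1") == 50.0
```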
4. Multi-Region, Multi-Broadcaster Operation
- Dedicated deployments for Norway, Denmark, Sweden, Spain, Germany, and the Netherlands
- Consistent architecture with language and ruleset flexibility
- Ability to operate multiple broadcasts concurrently
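One way to express 'consistent architecture with local flexibility' is a single codebase parameterised by a per-country deployment descriptor. A sketch with illustrative values only (the actual regions and rulesets are not published):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CountryDeployment:
    """One dedicated stack per broadcaster, sharing one architecture."""
    country: str
    aws_region: str  # where the dedicated stack runs
    language: str    # app and content localisation
    ruleset: str     # per-country variations in show rules


# Illustrative entries only.
DEPLOYMENTS = [
    CountryDeployment("Norway", "eu-north-1", "nb", "standard"),
    CountryDeployment("Netherlands", "eu-west-1", "nl", "standard"),
    CountryDeployment("Spain", "eu-south-2", "es", "extended-rounds"),
]
```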
Crisis #2: Cloud Capacity Shortage During COVID-19
During the COVID-19 lockdown, European cloud capacity tightened significantly. AWS spot instances, previously the backbone of cost-efficient scaling, suddenly became unavailable just moments before a major new show was due to air. With the broadcast in clear jeopardy, Adappt executed a complete regional migration:
- Identified alternative compute capacity across Europe
- Reconfigured infrastructure in a new AWS region
- Rebuilt and tested the entire architecture
- Completed migration within three hours, ahead of broadcast
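A migration of this scope is only achievable in three hours if it is heavily scripted. As one illustrative step (all identifiers hypothetical), copying the service machine image into a fallback AWS region with boto3 might look like:

```python
import boto3

# Hypothetical IDs. Copy the service AMI into the fallback region;
# the same infrastructure definitions can then be applied there.
ec2_fallback = boto3.client("ec2", region_name="eu-central-1")
copy = ec2_fallback.copy_image(
    SourceRegion="eu-west-1",
    SourceImageId="ami-0123456789abcdef0",
    Name="voting-api-failover",
)
# Block until the copied image is usable in the new region.
ec2_fallback.get_waiter("image_available").wait(ImageIds=[copy["ImageId"]])
```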
Despite global infrastructure shortages, Adappt pulled a solution out of the bag, and all shows proceeded without disruption. This has been the ONLY moment since Adappt's rebuild where platform stability was at risk. Since then, the platform has supported many hundreds of shows without interruption or glitch. The new architecture has proven to be rock solid and highly scalable.
Results
With a modernised architecture in place, the platform achieved breakthroughs across all key metrics.
The improvements also revealed substantial hidden user demand: previously, platform instability meant viewers often abandoned the app; once stabilised, full audience participation became visible for the first time.
Engineering Principles & Learnings
1. Resilience Must Be Native
Fallback layers, caching strategies, and graceful degradation are essential when user expectations are tied to live broadcast timing.
2. First Episodes Will Expose System Limits
Initial live broadcasts generate the sharpest traffic spikes; architecture must be prepared for this 'first-episode effect.'
3. Multi-Region Cloud Strategy is Mandatory
The COVID-19 incident demonstrated that elasticity depends on global availability, not assumptions about cloud capacity.
4. Registration Throughput Is the True Bottleneck
Votes are lightweight data points; mass registration bursts represent the real engineering challenge.
Conclusion
Adappt transformed an unstable, legacy platform into a resilient, scalable, and high-performance digital experience capable of engaging entire nations live on air.
Through emergency stabilisation, a full architectural rebuild, and rigorous operational execution, Adappt delivered:
- Broadcast-grade reliability
- Real-time national participation
- Multi-country scalability
- Millions of interactions with zero downtime
This case demonstrates Adappt's capability to deliver mission-critical, high-load, real-time systems where performance, resilience, and user experience cannot fail.
Need Operations That Maintain Excellence?
The Banijay story demonstrates what's possible when technology and operations combine: emergency projects delivered under pressure, infrastructure problems exposed and solved, systems built for long-term excellence. Whether you're facing scaling challenges, need to build resilient infrastructure, or want operations that maintain measurable performance year after year, the principles are the same: measure everything, solve systematically, build for permanence.