Cloud gaming and the mystery of the bad business cases: $400 BILLION bad

By Marek Rubasinski

Cloud gaming is clearly having another moment – between Google Stadia, xCloud, GeForce NOW, and Facebook buying PlayGiga there's a lot of action going on. As you would expect, there's a lot of writing on the subject too, and a lot of discussion in the industry. It's a hot topic.

I spend a lot of time with customers, partners and investors, and I'm increasingly struck by how rarely anyone articulates the business case: how will this activity actually pay out in the long run? How is all of this going to be paid for, and who's buying?

Is anyone actually asking “How does cloud gaming actually make money?”

Because the answer is absolutely not: ‘same way as games do now’. You see, cloud gaming requires infrastructure in a way that I think most onlookers don’t grasp.

Unlike in the industry's current value chain, the more people play, the more it costs to serve them. There's an inherent 1:1 relationship between required resources and the number of players, and that relationship will always break the economies of scale the games industry is built on.

Who’s paying?

If you are a cloud gaming service provider (a platform like Stadia, or a game store like Epic Games' that wants to become a platform), YOU have to pay for the infrastructure that is needed for your customers to play. This is in stark contrast to today, where the cost of end-point delivery and play is largely borne by the end-user: end-users pay for a suitable PC, console or mobile, and they pay to run it. The cost is spread.

In cloud gaming, the more your players play – the more YOU pay.

This breaks two of the three main monetisation models of the games industry, and sorely tests the third:

  • One-off paid-for games and subscriptions – because you have a fixed income but a variable (and open-ended) cost.
  • Free-to-play games fare better because they often align monetisation with usage – the more time you spend in-game, the more likely you are to spend some money on a virtual hat etc.
  • But F2Ps have a different problem – they make a tiny amount per player-hour on average: between $0.08 and $0.12 (with obvious outliers and seasonality). This very low revenue per hour puts a hard ceiling on how much you can spend on your variable cost-to-serve.
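To make the F2P squeeze concrete, here's a minimal sketch. The $0.08–0.12 revenue range comes from the text above; the per-hour cloud cost is a purely illustrative assumption, not a quoted price from any provider.

```python
# Back-of-envelope: F2P revenue per player-hour vs. a hypothetical
# cloud cost-to-serve. Revenue figures are from the article; the
# cloud cost is an illustrative assumption for comparison only.
f2p_revenue_per_hour = (0.08, 0.12)   # average F2P revenue range ($/player-hour)
assumed_cloud_cost_per_hour = 0.20    # hypothetical GPU-instance cost ($/hour)

for revenue in f2p_revenue_per_hour:
    margin = revenue - assumed_cloud_cost_per_hour
    print(f"revenue ${revenue:.2f}/h, cost ${assumed_cloud_cost_per_hour:.2f}/h "
          f"-> margin ${margin:+.2f}/h")
```

If the assumed cost-to-serve is anywhere near right, every hour played loses money even before you pay for the game itself.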

Service providers are faced with two mountains to climb: the capex mountain to put the infrastructure in place and keep scaling it faster than the user base grows, and the opex mountain to keep running it. These would be hard tasks for any online service, but cloud gaming currently makes life even more difficult (and, I argue, impossible) for itself by relying on a technology that does not scale and is very expensive for this use case to both build and run.

In cloud gaming, the more players play – the more YOU pay.

You see, almost all cloud gaming platform solutions are currently based on proprietary implementations of something sometimes called 'pixel streaming' or 'frame streaming', or often just, generically, video streaming. They use highly specialised cloud resources equipped with GPUs that render graphics, encode them into video in real time and send it to you. Stadia, xCloud, Nvidia GeForce NOW, PlayStation Now and Blade Shadow are all built like this. Almost all are built on niche data-centre products from AMD or Nvidia. All the above services own their own servers – none of this is a shared cloud resource that you can find on AWS, Azure or Tencent. There are some shared GPU resources available, such as AWS AppStream – but the numbers are small and, currently, most of them are busy servicing CAD and other 3D productivity applications that also need GPUs, as the volume of home-working has rocketed.

So far, so what? Well – let's do the numbers on this approach.

We have to start with the concept of an 'instance', if you are not too familiar. Let's not go too deep on the tech but simply define an 'instance' as: 'the set of cloud resources required to provide service to 1 player for 1 game at a time'. We refer to instances because increasingly it is not just one discrete physical machine on a blade in a rack – although plenty still are (here's a closer look at the hardware behind Project xCloud). The crucially important thing here is that 1 instance only serves 1 player. It is not like other streaming (such as TV), where 1 server could be supporting thousands of users or requests.

Any platform must have enough resource instances to satisfy peak concurrency demand. For gameplay, peak reliably falls between 7 and 10 pm and has for decades (since Xbox Live really took off) been around 30% of DAU (Daily Active Users).

So if you have 1,000,000 DAU on your platform, you will need 300,000 instances (plus change for new game launches etc.), unless you want players to see 'servers busy' messages and stop paying you very quickly (after flaming you on Reddit and review bombing you first, of course).
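The sizing rule above reduces to one line of arithmetic. Here it is as a sketch, using the 30% peak-concurrency figure from the text (the function name is my own, for illustration):

```python
# Peak-concurrency sizing from the article: instances needed ~= 30% of DAU,
# because 1 instance serves exactly 1 player at a time.
def instances_needed(dau, peak_concurrency=0.30):
    """Cloud instances required to meet peak demand."""
    return int(dau * peak_concurrency)

print(f"{instances_needed(1_000_000):,}")  # 1M DAU -> 300,000 instances
```

Note this is a floor: real deployments need headroom on top for launches and regional spikes.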

Today, about 2,600,000,000 people play games, from occasionally to every day, across all platforms – roughly, that is planet gamer's MAU (Monthly Active Users).

Usually, at this point, once the numbers have sunk in, I'm told: "That's fine, because Google and Amazon have loads of servers, and if they need more they'll just build more. Not a problem."

Wrong. They don't, and they won't.

They have lots of generic compute servers and are growing capacity in multipurpose AI compute servers. They have a lot of storage. They do not have lots of GPU-equipped instances. So if we believe this is how we are going to build the cloud gaming future, then someone is going to have to pay for it – but how much?

Let's be very clear on this – AWS, Azure, and Google DO NOT operate on a 'build it and they will come' investment strategy. If they did, they would already be bankrupt – they are super smart at this. They will not pay for this stuff upfront – platforms and service providers would have to commit this spend themselves, including in-house ones like Stadia and xCloud.

I recently posted a response to an analyst forecast here and here’s a recap of the back of the envelope calculation.

To achieve the ARR potential this analyst is forecasting of about $56Bn by 2027, the global infrastructure providers would have to invest $400Bn to get there…just in infrastructure capex.

Always take these forecasts with a pinch of salt, but if you believe the $56Bn by 2027, that's about 20% of the total industry value by then, which is maybe reasonable in the time frame.

That means about 600M players, give or take, across all platforms, gaming primarily via the cloud.

If you assume the normal resource contention ratios and peak concurrency demand used to model data-centre roll-out for this kind of use case, that could mean 150-200 MILLION GPU-enabled resources needed in the cloud. Today there are about 200 THOUSAND.

Each data-centre-grade, GPU-enabled set of resources that can support cloud gaming costs about $2,000 all-in, once you've amortised the silicon, the building, the cooling, the infrastructure, and the people and truck rolls to install it. That's before you even switch it on. Then you have to replace it every 3-5 years.
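Putting the article's own numbers together shows how the headline figure falls out. Everything below comes from the text (600M players, ~30% peak concurrency, ~$2,000 per instance); the only thing added is the multiplication:

```python
# Reconstructing the capex estimate from the article's figures.
players = 600_000_000          # cloud-first players by 2027 (from the text)
peak_concurrency = 0.30        # ~30% of actives online at peak
cost_per_instance = 2_000      # all-in cost per GPU-enabled instance ($)

instances = int(players * peak_concurrency)   # bare peak-demand floor
capex = instances * cost_per_instance         # upfront build-out cost

print(f"{instances:,} instances -> ${capex / 1e9:.0f}Bn capex")
```

Bare peak concurrency gives 180M instances and $360Bn; with headroom for launches, contention and regional distribution, that lands squarely in the 150-200M instance, ~$400Bn range – and the 3-5 year replacement cycle means paying a large fraction of it again and again.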

Even Google Stadia doesn’t have that much to spend. Read about an alternative view of the world in my recent blog: Reality check the Metaverse