Everything old is new again


It’s five years since the world’s largest cloud gaming platform shut its doors. Here we are again, discussing a new generation of cloud gaming platforms that people believe are different, and somehow better placed to succeed.

This week’s announcement that Nvidia’s Geforce Now cloud gaming platform is moving to a subscription model was particularly striking for those of us who have been here before, partly because of how much it echoes history. So much so that I felt it was worth taking a closer look at the numbers and what they may really mean. I firmly believe that, given the size of the loss they will be making, this can only be an expensive marketing exercise for GRID rather than a viable streaming service.

First, let’s revisit OnLive. As Head of Engineering, and SVP of Business and Corporate Development at OnLive, I was part of that adventure. I come not just with an opinion but with deep-rooted experience and subject matter expertise: I’ve been there and done that, and I have real numbers to back it up. So let’s dig a bit deeper.

OnLive launched in the middle of June 2010, going live the week of E3. At that time we had data-centres in Santa Clara (California), Dallas (Texas), and Ashburn (Virginia). These were, and still are, three of the largest network routing points in the US. That gave decent coverage across the US, and with OnLive requiring around 6Mbps for 720p, the service was accessible to a significant number of people. The servers themselves were our own custom design, using consumer Nvidia Geforce cards (GRID didn’t exist at this time) and Intel CPUs; it was the spec you’d see in a decent mid-range gaming rig. OnLive also designed its own hardware video encoder, taking the output from the graphics card and turning it into an internet-friendly video stream; it was capable of encoding each 720p frame in under 8ms (for context, 60FPS is one frame roughly every 16ms). OnLive’s offering was a mixture of a subscription for older titles, providing a wide variety of content, and premium titles requiring a one-time purchase. Ubisoft, Warner Brothers, Square Enix, Take Two, and many others, along with a good number of day-and-date releases, all went towards a library of more than 350 titles at its peak.
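To put that encode figure in context, here is a rough, illustrative frame-time budget for a 720p stream at 60FPS. Only the sub-8ms encode number comes from the paragraph above; the render, network, and decode figures are assumptions purely for illustration.

```python
# Back-of-envelope frame-time budget for 720p at 60FPS.
# Only the encode figure comes from the text above; everything else is an
# assumed, illustrative number rather than an OnLive measurement.
FRAME_INTERVAL_MS = 1000 / 60      # ~16.7ms between frames at 60FPS

budget_ms = {
    "game render on server": 8.0,  # assumed
    "hardware encode":       8.0,  # "under 8ms" per the text above
    "network (one way)":    20.0,  # assumed; varies with ISP and distance
    "client decode+display": 8.0,  # assumed
}

print(f"Frame interval at 60FPS: {FRAME_INTERVAL_MS:.1f}ms")
for stage, ms in budget_ms.items():
    print(f"  {stage:<24}{ms:>5.1f}ms")
print(f"Estimated click-to-photon latency: {sum(budget_ms.values()):.1f}ms")
```

The point is that encoding is only one slice of the pipeline; if it ate a whole 16ms frame on its own, the stream could never keep up at 60FPS, which is why a sub-8ms hardware encoder mattered.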

As OnLive developed it added the MicroConsole and Controller, and many forward-looking features that really highlighted the dream of cloud gaming. There was Spectating (watching others play); Brag Clips (hit the button and the platform saved your previous moment of play, to be posted on a display board for others to view and vote on); messaging to other users; Multi-View (letting you watch your team members in four-player co-op); Virtual Living Room (two-player co-op for people in different locations); and CloudLift, the ability to sync your Steam library and play those games through OnLive or locally on your PC, with all your saves and progress carried across regardless of the platform.

I often hear people say OnLive was ahead of its time, that the internet wasn’t ready, that it was too laggy, that it didn’t have enough games, and on and on. I hear people saying the internet is faster now, latency is lower now, the visual quality is better now, and that these new offerings have learned from what’s gone before. And yet this week’s Geforce Now news suggests to me that actually very little has changed or been learned, and here’s why.

Nvidia says it has 300,000 users; OnLive had around 250,000 paying subscribers. Nvidia says it has 19 data centres; OnLive had 7 across North America and Europe. Nvidia is proposing $5/user; OnLive was $5/user. But here’s where things start to look really unnervingly similar. OnLive’s burn rate was largely driven by the cost of powering those mid-range cloud gaming machines, plus the headcount to support the operation. Across those 7 data-centres were approximately 8,000 servers, split between roughly 600 and 1,800 per region. That meant OnLive’s theoretical maximum number of concurrent users was 8,000, but in reality it was only a couple of thousand, since time-zones and latency limit where a user can play from. A gamer in California isn’t going to connect to a server in Luxembourg. This is as true today for Nvidia as it was for OnLive in 2010. OnLive required 7Mbps to deliver a good experience; Nvidia suggests a minimum of 20Mbps and recommends 50Mbps!
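Because this is 1:1 hardware, each server hosts exactly one player at a time, and a player can only use servers close enough for acceptable latency. A minimal sketch makes the point; the per-region server counts below are hypothetical, chosen only to sum to roughly 8,000 and fall in the 600–1,800 range described above.

```python
# Hypothetical per-region server counts; the exact split is an assumption
# for illustration only, not OnLive's real topology.
servers_per_region = {
    "US West":    1800,
    "US Central": 1200,
    "US East":    1500,
    "US South":   1100,
    "UK":          900,
    "Belgium":     800,
    "Luxembourg":  700,
}

# With 1:1 hardware, a player's ceiling is the capacity of the one region
# close enough to them, not the size of the global fleet. Regions also peak
# at different local times, so much of the fleet sits idle while one region
# is saturated.
global_fleet = sum(servers_per_region.values())
print(f"Global fleet (theoretical max concurrency): {global_fleet}")
for region, count in servers_per_region.items():
    print(f"  Peak concurrent players reachable from {region}: {count}")
```

A gamer in California is capped by whatever the nearest region holds, no matter how many machines are powered up across the Atlantic.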

OnLive’s burn rate, the money it was spending on people and infrastructure to operate the platform, was north of $3.5m / month. There is no magic in this number; it is made up of the people required to run a live service, develop new features, and market the product, plus the cost of running all those servers. Each server consumes several hundred watts, the nature of gaming systems: putting it in a data-centre doesn’t reduce the power consumed by those components. They require cooling, and a mid-range gaming rig ten years ago required THE SAME amount of power as a mid-range gaming rig today. In other words, Geforce Now needs five times the bandwidth, and costs about the same in power and cooling as OnLive did ten years ago, to deliver what each generation of gamer demands for an acceptable experience. For Nvidia, $3.5m / month to run ~10k servers for a subscription service of ~300k users is right in the ballpark of OnLive. Which means that with every one of those 300,000 Geforce Now subscribers paying $5 / month (around $1.5m of revenue against roughly $3.5m of costs), Nvidia is going to lose around $2m / month. If you do the maths for Stadia, it’s not going to look a whole lot different.
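As a sanity check on that claim, here is the back-of-envelope arithmetic using only the figures quoted above; treating the $5 subscription as the sole revenue stream is my simplifying assumption.

```python
# Back-of-envelope unit economics, using the figures quoted in the post.
subscribers      = 300_000      # reported Geforce Now user count
price_per_month  = 5            # $ per subscriber per month
burn_per_month   = 3_500_000    # $ / month, the OnLive-scale cost of servers + staff

revenue = subscribers * price_per_month        # $1.5m / month
loss    = burn_per_month - revenue             # ~$2m / month

print(f"Monthly revenue:  ${revenue:,}")
print(f"Monthly burn:     ${burn_per_month:,}")
print(f"Monthly loss:     ${loss:,}")

# Break-even at this cost base would need 700,000 paying subscribers,
# more than double the reported user count, assuming the fleet could
# even serve them concurrently.
print(f"Break-even subscribers: {burn_per_month // price_per_month:,}")
```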

So what’s changed? Well, from where I’m sitting, nothing. But that loss feels like an acceptable marketing cost to me if I’m a $250 billion company. It’s also a good testbed for a product I’m trying to sell, i.e. Nvidia GRID. What it isn’t is a scalable game platform. Even allowing for some gross margins of error, everything about these numbers suggests that Geforce Now can support no more than a few thousand concurrent users in any given region, which means any form of game launch, any form of day-and-date release, any form of high-usage MMO, would simply saturate the available servers. For OnLive, the problem of large spikes in demand led us to create the “velvet rope”, a queuing system not dissimilar to an MMO lobby. It’s been a decade since OnLive launched and five years since it closed down, and it doesn’t look like a whole lot has changed.

PS – I just tried to log on now before posting… here are the results:

This is the reality of using proprietary, inelastic 1:1 hardware solutions to stream what is effectively a desktop.



