“Septimus” and “Shooting Star” by Ilan Eshkeri, London Metropolitan Orchestra, and Andy Brown from the major motion picture Stardust🎵
I’m almost done with Ronald Heifetz and Martin Linsky’s “Leadership on the Line.” Sometime in the last two weeks I remember thinking, “I’m struggling with standards – how can games innovate with stability, and cost effectively, when the games industry is only now adopting the cloud-native standards for containers?”
I then digested this, thinking about the next 5 years – that they may be the most interesting of all.
In the middle of this amazing book the authors say,
“It is not always possible to show people the future. It might not exist… Confidence in the future is crucial in the face of the inevitable counter pressures from those who will doggedly cling to the present and for whom you become the source of unwanted disturbance.”
Heifetz and Linsky, pg 122.
At the same time I re-read the famous speech by Franklin D. Roosevelt – his first inaugural address. The timing felt right.
“This is preeminently the time to speak the truth, the whole truth, frankly and boldly…let me assert my firm belief that the only thing we have to fear is fear itself—nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance.”
Franklin D. Roosevelt
The Disclaimer
Before you do any great futuring you need a great disclaimer: This is my personal blog. This is not the opinion of Take-Two or Zynga, nor does it contain any information about either. Please don’t stick this in a press article as if it does. It would be inaccurate.
What is in here is related to the Kubernetes community and games at present and my past. There are many barriers both technical and adaptive, and as mentioned already, the actual industry itself is still adopting standards which make up the cloud-native future for games.
The final disclaimer – anytime I say anything about the future I can be wrong. It’s a fun exercise, but listening is more important than holding on to a vision for the sake of it.
It’s more important for all of us to be willing to see a vision than to not have one at all,
or as FDR said, convert retreat into advance.
Delivery of Games Is Faster: The Importance of HTTP/3 & QUIC UDP
My fascination with QUIC UDP started in 2020, during COVID, by way of an incredibly bright human – Alec Bryan and Team NICE DCV at AWS. Alec brought the streaming expertise and network of people, and I brought the games background.
I’ve never had nearly as much fun as I did getting game engines to work with NICE DCV. We also tested Apex Legends, Ori and the Blind Forest, and Spellbreak at one point over QUIC UDP. Antoine Genereux then contributed a quick bash script so we could all test this even faster. We, along with many others, were trying to get 4K streaming working well for games and understand what the problems were going to be beyond cost.
I was somewhat fascinated by remote workstations, but in particular I began to see more benefits for QA, press demos, and players. I wondered, end to end, how do you get this into both players’ hands and creators’ hands in tandem – if the creator is living in the same client (in our case the NICE DCV client), they are closer to their end user in some ways. It wasn’t about the remote workstation as much as it was about trying to be closer to the player.
None of this was new – many streaming companies and initiatives had failed in the past 10 years at both small startups and huge entities. But they had not enforced QUIC UDP – security was still up for debate.
In 2021 QUIC UDP hit the key checkpoints for Internet Standards (RFC 9000), and HTTP/3 followed in 2022 (RFC 9114) as what many called the real future of the internet. While some were focused on the benefits of blockchain in ’20-’21, a few colleagues and I had been heads down on HTTP/3 and all the innovation that came with QUIC UDP.
In simple terms, QUIC relies on the User Datagram Protocol (UDP) instead of the Transmission Control Protocol (TCP). It was originally developed by Google in 2012 but took years to standardize. I think the metaverse is a keyword and not real, but the one innovation that came out of that trend was finally a lean-in to QUIC. I felt confident in a future where game streaming was both containerized and using QUIC UDP, because I have questions about whether parents will still buy consoles in 5 to 10 years.
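To make the transport difference concrete, here is a minimal stdlib Python sketch of the bare UDP datagram model that QUIC builds on. To be clear, this is not QUIC itself – QUIC layers encryption, multiplexed streams, and loss recovery on top of UDP – and the echo server and payload here are made up for illustration.

```python
import socket
import threading

# QUIC runs over UDP datagrams rather than a TCP byte stream.
# This is NOT QUIC itself -- just the bare UDP send/receive model
# that QUIC layers its encrypted, multiplexed streams on top of.

def echo_server(sock: socket.socket) -> None:
    # One recvfrom per datagram: no connection handshake, no byte ordering.
    data, addr = sock.recvfrom(2048)
    sock.sendto(data.upper(), addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"frame-001", ("127.0.0.1", port))
reply, _ = client.recvfrom(2048)
print(reply.decode())           # FRAME-001

t.join()
server.close()
client.close()
```

The absence of a handshake and of in-order delivery in this sketch is exactly what makes UDP attractive as a substrate for latency-sensitive streaming.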
My last purchasing decision was between a TV to play games on and a console, not both, because I can now play games on the TV. That is happening now for millennials and parents, who have to look at the next 5 years carefully as buyers.
Games as Shared Spaces: Games Shared Responsibility Modeling in Kubernetes
I’ve discussed in another blog that to understand Kubernetes cost in people, not compute, teams should consider tracking auxiliary workloads as a percentage of app workloads against the CNCF standards.
It is challenging to operate Kubernetes without knowing how many people a company needs against the different types of workloads there are: multi-tenant, “single-tenant,” large scale FPS games that need low latency, shared feature sets, analytics. The list goes on.
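The metric above can be sketched in a few lines. This is a hedged illustration, not a real tool: the workload names and pod counts are invented, and a real version would pull them from the cluster rather than a hard-coded dict.

```python
# A sketch of the metric described above: auxiliary (platform/system)
# workloads tracked as a percentage of application workloads.
# The workload names and pod counts are illustrative, not from any real cluster.

workloads = {
    "game-session-fps": ("app", 120),
    "matchmaker":       ("app", 12),
    "analytics-etl":    ("app", 8),
    "ingress-nginx":    ("aux", 6),
    "prometheus":       ("aux", 4),
    "cert-manager":     ("aux", 2),
}

app_pods = sum(n for kind, n in workloads.values() if kind == "app")
aux_pods = sum(n for kind, n in workloads.values() if kind == "aux")
aux_pct = 100 * aux_pods / app_pods

print(f"auxiliary overhead: {aux_pct:.1f}% of app workloads")
```

Watching how that percentage moves across multi-tenant versus single-tenant clusters is one way to turn “how many people do we need” into a number you can argue about.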
Games come in different shapes and sizes – different types of play, different kinds of backends, different platform targets. I began to understand that the AWS whitepaper “Organizing Your AWS Environment Using Multiple Accounts” – while great, and I’ve read it about 5 times, including when it was in draft – perhaps was not written with this future in mind.
Are we waving the principles in it for isolation as the de-facto way to do games workloads? One game, one account. Or one game, one account per environment. … because “It is known, Khaleesi”? How many decisions are we making because “It is known”?
How, in a microservices world where we care a lot about centralizing some applications for cost but not centralizing others, can we look at those structures and say “This is the way”? I see a world where some games infrastructure shares spaces deeply (like many hypercasual games hosted in one cluster and streamed to players, each as a session on it, with common feature sets moved to multi-tenant clusters as microservices) – yet this whitepaper is the authority on best practices because it existed before our industry had a real voice in Kubernetes.
What if it’s not the way? What if it is not known?
Cloud standards need to take a step back, get on the balcony, and look at the big picture with the games community and with the Kubernetes community. “This is where we are. But this is where people want to go.” Games infrastructure will be more shared spaces, not less, both for businesses but also for players who want to host their own experiences.
Streaming is Still Expensive: Games Containerization Requires Specialty Skills in Both Games and Kubernetes
Often when people discussed streaming in the last 5 years, it was “It will never happen. It’s too expensive. This is why it costs Microsoft so much money to run Game Pass cloud streaming. That’s why Stadia failed.” Etc., etc. – insert your own story.
Believe me.
I know.
Innovation came at the cost of eating it yourself so players didn’t have to – but the internet, purchasing habits, and places of play are evolving yet again.
I am not going to talk in a public blog about my opinions on streaming subscription models, but what I can say is that to get costs down you have to start thinking really deeply about the payload – how it is split up on the backend and on the client-delivery side versus what people want to spend. Too often, though, we think about this from the perspective of web and console, not Smart TVs.
Our industry must care strategically about what we are delivering to players, not only in play and fun, but in performance and size – the game updates, the client updates for the streaming client (like support for high density screens), the partnerships that enable it. We now have to care who owns the streaming client as a product and the Smart TV as a platform.
As we speak, my husband is playing Starfield via Game Pass cloud gaming on a Smart TV, without the console.
To support experiences like that, reversing into the backend, we have to think about how we can split up the GPU. For example, “With GPU sharing, you can share one or more physical GPUs between multiple NICE DCV virtual sessions” – otherwise it is extremely expensive.
We have to containerize what we can in these workloads so we don’t have one session per one box. This is all possible, but talent expertise is lacking – another expense that often goes unaccounted for.
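The cost case for sharing is simple arithmetic. Here is a back-of-envelope sketch; the hourly instance price and the sessions-per-GPU count are made-up assumptions, not real cloud pricing.

```python
# Back-of-envelope cost per streamed session, with and without GPU
# sharing. The hourly price and session counts are made-up assumptions
# for illustration only -- not real cloud pricing.

gpu_instance_per_hour = 1.20   # hypothetical $/hour for one GPU box

def cost_per_session(sessions_per_gpu: int) -> float:
    return gpu_instance_per_hour / sessions_per_gpu

dedicated = cost_per_session(1)   # one session per one box
shared    = cost_per_session(4)   # e.g. 4 containerized sessions sharing a GPU

print(f"dedicated: ${dedicated:.2f}/session-hour")
print(f"shared:    ${shared:.2f}/session-hour")
```

Even with invented numbers, the shape of the curve is the point: every additional session you can safely pack onto a GPU divides the per-player cost, which is why splitting the GPU and the payload matters so much.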
This is why I was pretty excited by Justin Garrison’s example repo for Game Streaming on Kubernetes with Moonlight. Equally, I was excited by Alec’s DMs that he had figured out a way to use Docker on his NAS to drop movie downloads into a container and index them with Plex. This is why Sushant Kapare’s blog on deploying Tetris using ArgoCD on EKS is amazing to me, and so is this one using a DevSecOps pipeline to deploy 2048 on Docker with Jenkins. I’m seeing immensely talented people growing and exploring our industry and space.
To really advance instead of retreat, we have to press the gas on talent and everything that comes with that – training, specialized blogs for containerization and microservices, and hiring for innovation.
We need to work backwards from: What if now we don’t care about game binaries going to the client – we only care about the streaming client going to the client?
What changes?
Everything.