In Defense of Less Clusters: Part 3 – The Inverse Conway Maneuver, Team Namespaces, & Business Scalability

NGC 346 Star Cluster

“Thunderstruck,” a piano version…by Tommee Profitt, William Joseph 🎵

If a company runs Kubernetes as a Platform Engineering team, customers will ask for their own clusters, per project.

They will goal on it.
They will hire on it.

And they will hit scaling problems that can’t be solved by adding more people.

It is known.

Multi-tenancy has a bad rap.

Lots of teams throw headcount at what are actually process problems, when what they really need is fewer people, fewer approvers, less infrastructure. Less fear of production.

It’s easy to fall into the dangerous ops and business trap of:
Add a project, add headcount, add some clusters. Keep going. Until you die. 😊

What the Data Actually Says About Adding Headcount

In Accelerate, Chapter 5, in a section titled “A Loosely Coupled Architecture Enables Scaling,” the authors state that “…while adding developers to a team may increase overall productivity, individual developer productivity will in fact decrease due to communication and integration overheads. However, when looking at the number of deploys per day per developer for respondents who deploy at least once per day…as the number of developers (on a team) increases we found: low performers deploy with decreasing frequency, medium performers deploy at a constant frequency, and high performers deploy at a significantly increasing frequency” (Forsgren, 65)[1]. In the book’s findings, high performers deploy more often and generally have low change failure rates because of it. The faster they moved, the safer they became. The important takeaway: the metric that lets teams absorb more products, applications, and infrastructure into their ownership portfolio is deployment frequency, not headcount – and deployment frequency improves by removing the unnecessary communication and approvals that get in teams’ way.

Customers often approach with the belief that they need (1) a new cluster per (2) integrated environment and (3) new accounts for all of those environments – because of security, isolation, or compliance (or all three), because they want to “isolate their team,” because they are “worried about noisy neighbors,” or because they “do not know Kubernetes and do not want to hurt other people while learning.” I appreciate all of those considerations – they come from the right place, they are empathetic, and they promote caution in spirit. But they unfortunately promote a sense of caution that is absent from the research – research that says moving faster is actually what keeps us safer.

They also happen to come from a place of not knowing what Kubernetes can do – and that’s okay too! It is important that we acknowledge to each other what we each do not know so the right decisions are made.

All of us have to start somewhere. You’ve got to be Gandalf the Grey before you become Gandalf the White[12].

The unfortunate truth about asking for more clusters is that the cost benefits and knowledge gains are lost when teams do this, because it increases the operational burden on Platform Engineering teams. Kubernetes shines most in multi-tenant contexts – if teams adopt it correctly, application engineers won’t hurt each other or suffer noisy neighbor problems. But this requires trial and error, learning, and a willingness to try, to move faster, and to break and test incrementally in dev until teams learn.

In Accelerate, the authors (Forsgren, Humble, Kim) argue that “The goal is for your architecture to support the ability of teams to get their work done – from design through to deployment – without requiring high-bandwidth communication between teams.” (Forsgren, 63).

Specifically, Accelerate references the Inverse Conway Maneuver: instead of designing architecture and communication patterns that match an organization’s existing team structure today (Conway argued in 1968 that companies copy their organizational structure into their architecture – i.e., this very problem of cluster(s) per team and project), turn it on its head. Think about the architecture that would be most efficient for your organization…then organize your teams around it[2].

I wrote about how teams today are not using node pools enough in Part One, “In Defense of Less Clusters,” and later about Cluster Sprawl in Part 2, reviewing a Reddit thread where the internet argued about how many clusters companies actually need. The most upvoted comment was “1,” with a reply of “I was worried I was doing something wrong. Glad this is the top comment.”

Truthfully, what companies and teams are not using enough are namespaces, priority classes, and resource quotas. Thank you to Reddit, my teams today, and those in the past who have all taught me that.

What Are Namespaces, Resource Quotas, and PriorityClasses?

From the Kubernetes documentation, “Namespaces are intended for use in environments with many users spread across multiple teams, or projects…Namespaces are a way to divide cluster resources between multiple users (via resource quota).” Beyond this, teams can use namespaces to isolate environments (for example, testing and staging) or node pools depending on what they are trying to test – application benchmarking vs. compute host performance benchmarking, for example, are different testing problems.
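A minimal sketch of the idea, with hypothetical team and environment names: a platform team stamps out one namespace per team per integrated environment, all inside the same shared cluster.

```yaml
# One namespace per team per environment, inside one shared cluster.
# "team-rocket" and "dev" are hypothetical names.
apiVersion: v1
kind: Namespace
metadata:
  name: team-rocket-dev
  labels:
    team: team-rocket
    environment: dev
```

The labels make it easy to target these namespaces later with quotas, network policies, or cost reporting.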

Resource quotas are used “When several users or teams share a cluster with a fixed number of nodes, there is a concern that one team could use more than its fair share of resources…A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace…If quota is enabled in a namespace for compute resources like cpu and memory, users must specify requests or limits for those values; otherwise, the quota system may reject pod creation.”
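Continuing the hypothetical team-rocket-dev namespace from the sketch above (the numbers here are illustrative, not a recommendation):

```yaml
# Caps aggregate consumption for one tenant's namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-rocket-dev-quota
  namespace: team-rocket-dev
spec:
  hard:
    requests.cpu: "20"       # total CPU all pods may request
    requests.memory: 64Gi    # total memory all pods may request
    limits.cpu: "40"
    limits.memory: 128Gi
    pods: "200"              # cap on pod objects in the namespace
```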

For Priority Classes, “Pods can have priority. Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible…A PriorityClass is a non-namespaced object that defines a mapping from a priority class name to the integer value of the priority.” In spirit, teams can control which services they consider most important to keep alive in a crunch.
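Sketched out with the same hypothetical names: the PriorityClass is defined once, cluster-wide, and workloads opt in by name.

```yaml
# Non-namespaced priority definition; higher value wins under pressure.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: team-rocket-critical
value: 1000000
globalDefault: false
description: "Session-facing services that should outlive batch work in a crunch."
---
# A pod opts in via priorityClassName; its requests/limits also satisfy
# the quota above. Pod name and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: matchmaker
  namespace: team-rocket-dev
spec:
  priorityClassName: team-rocket-critical
  containers:
    - name: matchmaker
      image: registry.example.com/matchmaker:1.2.3
      resources:
        requests:
          cpu: 500m
          memory: 512Mi
        limits:
          cpu: "1"
          memory: 1Gi
```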

It’s on Platform Engineering teams to enforce the use of these constructs and make this all simpler, encouraging multi-tenancy whenever a new cluster is asked for. Deeply question why a team would need a new cluster – if they don’t know these constructs, then they simply do not know yet. Learning, and taking the time to teach, is caring.
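One way to make it simpler, again as a sketch under the same hypothetical names: pair each quota with a LimitRange, so pods that forget to declare requests and limits get sane defaults instead of being rejected by the quota system.

```yaml
# Defaults applied to containers that don't declare their own
# requests/limits, so the ResourceQuota above doesn't reject them.
apiVersion: v1
kind: LimitRange
metadata:
  name: team-rocket-dev-defaults
  namespace: team-rocket-dev
spec:
  limits:
    - type: Container
      default:               # becomes the container's limits when unspecified
        cpu: 500m
        memory: 512Mi
      defaultRequest:        # becomes the container's requests when unspecified
        cpu: 250m
        memory: 256Mi
```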

The answer given for adding a new cluster for one project is almost never good enough to justify the sprawl organizations eventually accumulate running Kubernetes – and many organizations that used to tout their hundreds of clusters have backtracked on the idea. Games and applications come in all different shapes and sizes, styles of play, and backends. Ask yourself – would you rather move faster or…not?

Unless you’re AWS, GCP, Microsoft Azure, or a training entity that absolutely has to spin up many tiny, baby cluster environments, teams do not have their problems. Cloud providers have already built and adopted continuous delivery and deployment systems that address what happens at the scale of “10,000s of clusters.” They had to go through the culture shift first – the one that requires removing process, as mentioned above – and they still struggle with it because of proprietary tech debt.

Chances are an enterprise team has not built the same systems, because doing so is a multi-million-dollar investment that starts with understanding what Accelerate proposes: that the goal of efficiency – of continuous delivery (and deployment) – is for teams to:

  • “Make large-scale changes to the design of their system without the permission of somebody outside of the team
  • Make large-scale changes to the design of their system without depending on other teams to make changes in their systems or creating significant work for other teams
  • Complete their work without communicating and coordinating with people outside their team” (Forsgren, 62).

These principles all require teams to remove approvers from everywhere that isn’t a code repository (i.e., Git) peer review (PR) or a finance/headcount ask. They also require a willingness to let engineers deploy on demand, so they can learn as fast as possible, daily, until their architecture no longer forces constant communication through extra approvals and manual barriers.

They require us to try to break the rules, re-think what is normal, believe that what we assume is safe might not be. Teams have to let go, so they can learn.

Accelerate isn’t a technical book. It is a book about deleting process so teams can build better. Isolating inside the cluster itself – more multi-tenancy with loosely coupled microservices, instead of the “Conway” method of asking for new everything every time a team starts a new project – is one fantastic way to remove process. It’s a way to build faster and safer every time, because there are fewer unknowns across the board.

When to Ask for More Clusters

The real reason to ask for more clusters is not security or isolation (both of which are also achievable within Kubernetes), but needing low-latency, close-to-the-end-user deployments – which can only exist by placing clusters closer to those end users.

Many workloads do not require this today, even in games. The projects in games that do need more clusters deployed closer to end users are competitive esports, where real money is on the line in a way that latency can be directly attributed to a player loss; session-based battle games, where region-based fleets matter; and streaming use cases. At that point companies also have to evaluate exactly how close they need to be depending on what they are running (AWS Outposts, for example, comes into play if the lowest possible latency is a factor).

Another time to ask for more clusters is when you already have multi-tenant clusters or big workloads and you truly are hitting your limits. Kubernetes v1.30 itself is designed to support up to 110 pods per node, 5,000 nodes per cluster, 150,000 total pods, and 300,000 total containers. You can also hit IP limitations that neither custom networking nor a rework of your existing VPC, subnets, and the resources within them can solve – which may mean that at truly large scale you need to revisit a whole lot more than “getting a new cluster” would actually fix.

This all said, it is rare for teams to hit these kinds of limits – though many interesting companies have written about doing so (OpenAI, for example, wrote about scaling to 7,500 nodes in 2021). If you are hitting these scale problems, you’ve already been through the cultural challenges of this entire blog 😊.

Because teams don’t get to scalability – true scalability – without friendship, sharing, and removing the blockers that stand in their way, especially the extra communication steps, infrastructure, and change barriers that slow down deploying to production.

[1] Forsgren N, Humble J, and Kim G (2018). “Chapter 5: Architecture.” Accelerate: Building and Scaling High Performing Technology Organizations. Portland: IT Revolution. pp. 62–65.

[2] “Inverse Conway Maneuver.” Thoughtworks. 2015 Jan 28.

Header image credit: NASA, combining data from the James Webb Space Telescope, Chandra, Hubble, Spitzer, XMM-Newton, and ESO telescopes in May 2023. More images can be found on Flickr. This is a closeup of the NGC 346 star cluster. Specifically: “This image: NGC 346. Here, thousands of specks of light blanket the blackness of space…Between these gas plumes, centered near the top of the image, the star cluster is densely packed with specks of white, blue, and purple light.”

[12] This is the twelfth clue to the puzzle.