
Technical Learning Curves: Not All Have Equal Value

Tech folks love to learn. This is a good thing. Climbing up steep learning curves is an absolute requirement for success in IT, DevOps, software development, and allied fields.

Of course, that’s true in other parts of the organization, as well. Business, Sales, and Marketing leaders don’t reach the C-suite unless they’re constantly learning. Once there, their impact strongly reflects their willingness to keep on digging deep. 

But there are differences between Tech and Managerial approaches to learning.

In my experience, managerial learning tends to focus on business-relevant issues. Leaders study how to communicate differentiating benefits better and how to build category dominance. They read about how to quantify business progress and improve business processes. The best surround themselves with smart, trusted counselors inside and outside the orgs they serve, and depend on these coaches to keep them focused and to challenge them with new ideas and directions for learning.

Even when executives dip into theory, they tend to concentrate on what’s business-relevant: that is, unpacking business culture, or understanding macro-scale phenomena that influence business growth and survival. It was a CEO, for example, who first pointed me toward Nassim Nicholas Taleb’s explorations of disruption and resilience (“The Black Swan,” “Antifragile,” and so on), and Daniel Kahneman’s work on modes of thinking (notably “Thinking, Fast and Slow”) – all material I’ve used, over the past decade, to build better clouds and applications.

Technical learning should be results-focused, too. And sometimes it is. In fact, what’s arguably the most powerful mechanism for results-focus, category creation, business growth, and wealth creation of the late 20th/early 21st centuries emerged from tech: “commoditize your complements.” That’s shorthand for two things:

  • “Focus on what’s truly business-relevant and open up to let others do everything else.”

  • “Help make the things that you don’t do, and upon which you depend, as cheap as possible.”

The actual technique is much older; as old as the idea of standardized parts and supply chains (which dates back to the dawn of mechanized manufacturing). More recent poster children for “commoditize your complements” were IBM and Microsoft, whose open hardware architecture and non-exclusive OS licenses ushered in modern computing, with its vast ecosystem of competing software and hardware makers. 

Software engineer and Stack Overflow co-founder Joel Spolsky’s famous 2002 blog post popularized the phrase, and Spolsky also used it to explain phenomena like “Why is IBM paying people to contribute to open source projects?” Answer: because IBM was, at the time, repositioning as an IT consultancy, so software components were its complements. Ergo, IBM invested in helping other folks build and provide that software at low or no cost.

These days, you see “commoditize your complements” at work everywhere in tech – not least in the flood of “rent, don’t own” SaaS solutions and cloud technologies we all use every day.

But failure modes remain

And yet … tech orgs often still fall into failure modes like “not invented here” syndrome, where engineers feel they can’t trust “outsiders” to deliver parts of a product or service.

This can lead to misallocation of time, effort, headspace, and headcount – as engineers study and train, and orgs hire and tool up to design, build, and manage parts of a solution stack (like OpenStack and/or Kubernetes) that are technically required for many use-cases, but (in all their details) not business-essential.

Especially with respect to complex cloud platforms, the line between “technically required” and “business-essential” can seem blurry, even (maybe especially) to sophisticated experts.

Good and bad habits of thought

There’s a persistent – and mostly wholesome – idea that engineers should be builders. And even when engineers don’t intend to build, they often figure that knowledge sufficient to build and manage is required before you can trust a solution. Tech folks also tend to assume – and not without reason – that generic platform knowledge is profitably transportable from opportunity to opportunity. So why not learn all you can?

That reasoning is mostly healthy. But it’s currently leading a lot of tech folks to climb the wrong learning curves – and to waste a lot of their organizations’ time and money in the process.

Let’s take OpenStack and Kubernetes. Both open source. Both complicated. Complicated enough that tech folks should be thinking hard about exactly what, and how much, to learn about them. Consider:

  • Production clusters of OpenStack and Kubernetes (and arguably, dev/test/staging clusters as well) evolve as deliberately architected systems, often tuned to specific use-cases.

  • They require sophisticated operations tooling, observability/metrics, and highly trained specialists – not just SREs and operators but networking, storage, compute, facilities, observability, security, and other experts – to build and run.

  • A single, sizeable cluster in production use can require several hundred technical interventions per year (updates, scaling, modifications, fault remediation) – equivalent to an average of 7-10 full-time trained administrators, plus additional specialized skills as circumstances require.

  • Many organizations require multiple clusters. Most need dev/test/staging/production or some subset of these, maintained in parity with one another. Some need standardized clusters deployed across many locations – in datacenters, or increasingly, out at the network edge, where the workloads need to be.

All this represents a ton of learning curves for any tech organization that wants to “do it yourself” with OpenStack, Kubernetes, and/or OpenStack on Kubernetes (containerized control plane). And the biggest part of this learning is arguably the least important, because a lot of it isn’t relevant to your specific business requirements. Sure – a consulting/hosting business that engineers and maintains clouds for a big, diverse portfolio of demanding customers needs to know everything.

But you don’t. What you need to do, instead, is treat these platforms like black boxes, engineered for commoditization. That’s what they are: tools providing APIs and stable, dependable functionality that your applications can consume, without breaking or needing to change, over business-meaningful spans of time.

Building cloud-native applications is your job. That’s what lets you deliver value, and win as a business. So this is the learning curve you should climb. Bonus: this is the learning curve where the knowledge you gain is actually transportable. Platform code changes fast. But APIs are supposed to last forever-ish – particularly if you work with partners who understand and anticipate breaking changes, deprecations, and other challenges, and help you work around them.
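
As a concrete sketch of what treating the platform as a black box can look like, here is roughly how an application team might drive Kubernetes through its stable apps/v1 API using the official Python client. This is a minimal, illustrative example: the cluster, names, and image below are assumptions, not a prescription. The point is simply that your code touches only the published API, while everything behind it stays on someone else’s learning curve.

# Minimal, illustrative sketch: consume Kubernetes through its stable apps/v1 API.
# Assumes a reachable cluster, a local kubeconfig, and the official Python client
# ("pip install kubernetes"). All names and images here are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()        # read cluster credentials from ~/.kube/config
apps_v1 = client.AppsV1Api()     # the stable apps/v1 API surface

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="example-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "example-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "example-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# The application-side contract ends here: one call against a published API.
# Scheduling, scaling, healing, and upgrades stay on the platform's side of the box.
apps_v1.create_namespaced_deployment(namespace="default", body=deployment)

The same division of labor holds for OpenStack: your tooling talks to the Nova, Neutron, or Cinder APIs, and how the cloud beneath them is built and operated is the provider’s problem, not yours.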

Ironically, physical-world businesses are now discovering that “commoditize your complements” has limits (supply-chain issues, anyone?). But in software, the idea that you should “let someone else learn and build everything that isn’t differentiating” is more true, important, and achievable than ever.
