Like Swings of the Pendulum … So are Changing Tech Tides

The technology industry is witnessing yet another paradigm shift, as enterprises rethink their cloud strategies, data gravity reshapes infrastructure decisions, and AI fuels demand for specialized computing. The cloud-first narrative is evolving—what was once old is new again.
Like any good soap opera, the tech industry can be full of drama, especially given the recent AI hype, DeepSeek-quakes, Musk government renovations, power-hungry extrapolations, and so much more. However, one thing is certain in tech-land: the pendulum always swings back. Or, perhaps more accurately, we see (seemingly) old tech rise as new once again, and sometimes it not only rears its head but becomes a true disruptive force. Recently, my esteemed colleague and fellow Clouderati, David Linthicum, wrote an interesting piece called “The cloud giants stumble” that I thought highlighted the most recent pendulum swing, but unfortunately, it didn’t go quite as deep as I hoped. First, I suggest you read his piece if you haven’t already. Then let’s dig into it!
Dave talks about the recent slowing of public cloud growth, mentioning mounting costs and egress fees; however, I think he really hits the nail on the head when he notes that “enterprises are becoming more sophisticated in their approach to cloud computing … [and that the] ‘lift and shift’ approach, once touted as a quick path to cloud adoption, has proven to be more complex and expensive than initially projected.” He also mentions the challenges of data sovereignty, privacy, and compliance; niche cloud providers; the rise of edge computing; and the need for specialized hardware for AI workloads.
I want to examine most of these key points, which I would summarize as:
Enterprise IT cloud sophistication
Data gravity & sovereignty
Edge computing
Artificial intelligence & specialty hardware
In the very earliest days of the shift to cloud, I was continually stunned at what I found when talking to enterprise customers. The average enterprise had what I would call a highly dysfunctional IT organization, unable to deliver services in a timely manner or at a reasonable cost. I recall consulting with Kaiser Permanente in 2009 and finding out that a simple, small-configuration 2 RU rackmount server would be cross-charged to departments for as much as $10,000, a 4x markup. Years later, in 2016, while at EMC, I was again involved in a high-level conversation with Kaiser Permanente, where they thanked senior EMC leaders for the VCE vBlock because it had significantly reduced their cost to deploy infrastructure. I was flabbergasted, obviously, given how notoriously expensive a vBlock was: a major enterprise like Kaiser Permanente could not deliver simple infrastructure at a reasonable price internally. In this way, it was somewhat inevitable that public clouds would see adoption beyond the “base workload,” even for legacy workloads.
Enterprise IT has Changed
Back in 2013, I included an image making exactly this point in a presentation deck. “Own the base and rent the spike” was a saying all of the Clouderati were very familiar with, because it was apparent to anyone with eyes that if you had a functional IT team, you could deploy infrastructure at a cost somewhat competitive with the public clouds. I think Dave is right that IT teams’ cloud sophistication has increased, but I would go further and say that, in general, enterprise IT teams’ plain old IT and architecture skills have become more sophisticated too. Open source is more widely used. There are many orchestration systems running internally now. IT teams have been using public and private clouds for 10+ years and have leveled up. Of course they will be able to supply infrastructure at a more reasonable cost than they have in the past. It was inevitable that they would gain these skills, which reduces dependence on public clouds and makes “owning the base” more feasible. They have also learned that not everything should be on the public cloud.
We can really see this trend with the advent of Platform Engineering, which I would characterize as the enterprise’s attempt to turn DevOps and SRE principles into something a normal enterprise can consume and deliver to internal developers. Many of the ideals that DevOps and SRE aspired to were always going to be deeply challenged by institutional inertia. Breaking down silos? It can’t happen easily. Platform Engineering, however, has been a winning play: the idea that you can build an internal platform used across development teams, providing “Golden Paths” (the spiritual successor to the “Golden Images” of the VMware days of yore) that help developers deliver quickly while staying “within the guard rails.” Enterprises are already delivering internal developer platforms (IDPs) that manage workloads across internal and external infrastructure, reducing dependence on public clouds and providing a general-purpose cloud abstraction layer that all but obviates the public cloud API endpoints themselves. This trend is real and here to stay.
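To make that concrete, here is a minimal sketch, in Python, of what a golden path looks like from a developer’s seat. Everything in it is invented for illustration (the GoldenPath, PublicCloudBackend, and OnPremBackend names are hypothetical, not from any real IDP product): the developer calls one paved-road API, and the platform’s guardrails decide whether the workload lands on a public cloud or on internal infrastructure.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical sketch of an internal developer platform (IDP) abstraction.
# None of these names come from a real product; they illustrate the idea that
# developers target one "golden path" API while the platform team decides
# where the workload actually runs.

class Backend(Protocol):
    def deploy(self, service: str, image: str, replicas: int) -> str: ...

@dataclass
class PublicCloudBackend:
    region: str
    def deploy(self, service: str, image: str, replicas: int) -> str:
        # A real platform would call a cloud provider API or IaC tooling here.
        return f"{service} -> public cloud ({self.region}), {replicas}x {image}"

@dataclass
class OnPremBackend:
    cluster: str
    def deploy(self, service: str, image: str, replicas: int) -> str:
        # A real platform would talk to an internal Kubernetes cluster here.
        return f"{service} -> on-prem ({self.cluster}), {replicas}x {image}"

@dataclass
class GoldenPath:
    """One paved road: sane defaults, guardrails, and placement policy."""
    backends: dict[str, Backend]

    def deploy(self, service: str, image: str, replicas: int = 2,
               data_classification: str = "internal") -> str:
        # Guardrail: regulated data never leaves internal infrastructure.
        target = "on-prem" if data_classification == "regulated" else "public"
        return self.backends[target].deploy(service, image, replicas)

if __name__ == "__main__":
    path = GoldenPath(backends={
        "public": PublicCloudBackend(region="us-east-1"),
        "on-prem": OnPremBackend(cluster="dc1-k8s"),
    })
    print(path.deploy("billing-api", "billing:1.4.2"))
    print(path.deploy("claims-ml", "claims:0.9.0", data_classification="regulated"))
```

The developer never touches a cloud API endpoint directly; placement is a platform policy, which is exactly the kind of abstraction that loosens the grip of any single public cloud.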
Data Sovereignty? Try Data Gravity.
Data has a gravity all of its own, as the esteemed Dave McCrory, another famous Clouderati for whatever that’s worth, originally opined. Sitting behind those enterprise firewalls is close to 100 zettabytes of data, according to Seagate and Forrester, which represents roughly half of the known data in the datasphere. That 45-50% share has stayed roughly constant for decades and is unlikely to change. Enterprises throw away 98% of the data they generate every year. Yet much of this data is not only proprietary (e.g. medical radiology images, actuarial tables, fraud and bank transfer data), but can be used to train AI models that provide competitive advantage as we move into the coming Intelligence Era. The data is not moving. The processing needs to come to it, not vice versa. Data gravity is real and it won’t go away.
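Some back-of-the-envelope arithmetic shows why the processing comes to the data rather than the other way around. The numbers below are assumptions for illustration (the egress rate, dataset size, and link speed are not quotes from any provider), but the shape of the result is the point:

```python
# Back-of-the-envelope data-gravity math with illustrative, assumed numbers.

dataset_pb = 10                       # assumed proprietary training corpus, in petabytes
egress_per_gb = 0.05                  # assumed $/GB to move data off-premises
link_gbps = 10                        # assumed dedicated 10 Gbps link

dataset_gb = dataset_pb * 1_000_000   # decimal PB -> GB
egress_cost = dataset_gb * egress_per_gb

dataset_bits = dataset_gb * 1e9 * 8   # GB -> bits
transfer_days = dataset_bits / (link_gbps * 1e9) / 86_400

print(f"Egress cost to move {dataset_pb} PB out: ${egress_cost:,.0f}")
print(f"Transfer time at {link_gbps} Gbps, fully saturated: {transfer_days:.0f} days")
# Roughly $500,000 and ~93 days; shipping the compute to the data sidesteps both.
```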
The Edge, The Edge, my Kingdom for the Edge
Related to this, much of that 98% of data being thrown away is generated at the edge: security cameras, oil and gas sensors, home appliances and TVs, LoRaWAN devices, and so much more. The IoT world is growing exponentially, generating incredible amounts of data, and it is poised to hit the afterburners with the advent of AI. Most importantly, public clouds don’t play at the far edge where most of these devices live. They have no place in it. They don’t even really play at the near edge for the most part, because their business models require delivering economies of scale through aggregation, while edge computing is inherently distributed. ARM, RISC-V, and other low-power devices are everywhere, growing exponentially, and suffering from extreme distribution.
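Here is a tiny sketch of why so much of that edge data never leaves the device: raw readings are summarized (or filtered for anomalies) locally, and only a compact aggregate goes upstream. The sensor, window size, and fields are all invented for illustration:

```python
import random
import statistics

# Minimal sketch of edge-side aggregation: raw samples stay on the device,
# only a small summary is shipped upstream. All values here are illustrative.

def read_sensor() -> float:
    """Stand-in for a real sensor read (e.g. vibration or temperature)."""
    return random.gauss(mu=50.0, sigma=2.0)

def summarize_window(samples: list[float]) -> dict:
    return {
        "count": len(samples),
        "mean": statistics.fmean(samples),
        "max": max(samples),
        "min": min(samples),
    }

if __name__ == "__main__":
    window = [read_sensor() for _ in range(10_000)]   # raw data never leaves the device
    summary = summarize_window(window)                # only this goes upstream
    print(f"raw samples: {len(window)}, shipped fields: {len(summary)}")
```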
Supercomputing Is Now Mainstream
Another turn of the pendulum and supercomputing is back. Supercomputing? Yes, supercomputing. High Performance Computing (HPC) has been a thing for a while, but always in a niche. With the oncoming AI Apocalypse and the rise of the Intelligence Era, it has come into its own. Classic HPC systems were built on homogeneous, scale-out, whitebox x86 servers combined with ultra-low-latency, high-end networking (usually InfiniBand from Mellanox, now part of NVIDIA). That is because MPI workloads spread massive datasets across the RAM of many different boxes and rely on the interconnect to make them behave like one single memory space. NVIDIA GPUs have the same problem, which is why acquiring Mellanox was one of NVIDIA’s more brilliant moves. Just as virtualization, which predated VMware by decades, and containers, which predated Docker and Kubernetes by decades, have had their time to shine, now it’s HPC’s turn. It’s baaaaack!
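For the curious, here is roughly what that classic pattern looks like in practice, as a minimal sketch assuming mpi4py, NumPy, and an MPI runtime are available. Each rank works on its own shard in its own node’s RAM, and the cross-node reduction is where a low-latency fabric like InfiniBand earns its keep:

```python
# Minimal MPI sketch (assumes mpi4py, NumPy, and an MPI runtime are installed;
# run with something like `mpirun -n 4 python reduce_example.py`).

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank generates (or loads) only its own shard of the dataset,
# held entirely in that node's local memory.
local_chunk = np.random.rand(1_000_000)

# Partial result computed locally, no network involved yet.
local_sum = local_chunk.sum()

# The reduction crosses the network; on an InfiniBand-class fabric this is
# cheap, on commodity networking it becomes the bottleneck.
total = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks, global sum = {total:.2f}")
```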
Now, however, the model is quite a bit different. Specialized GPUs need to be interconnected by ultra-low-latency, high-end, specialized networking, and everything needs to be water-cooled because you can have as much as 40 kW (40,000 watts) in a single rack, roughly 4x what a rack drew in the heyday of the Silicon Valley datacenter boom. You might call it supercomputing on steroids. Regardless, we have taken a hard turn back from generic, whitebox x86 servers designed to operate at hyperscale, toward special-purpose GPUs, LPUs, and NPUs designed for the very specific vector processing tasks that AI needs. Generic, homogeneous hardware is once again giving way to heterogeneous, specialized hardware, and that isn’t going to change any time soon.
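The rack power math is simple but worth spelling out. The per-node figures below are assumptions for illustration, not specs for any particular system:

```python
# Rough rack power math with assumed figures (illustrative, not vendor specs).

gpu_servers_per_rack = 8        # assumed dense GPU nodes per rack
kw_per_gpu_server = 5.0         # assumed draw per GPU node, in kW
legacy_rack_kw = 10.0           # rough draw of a classic x86 rack of that era

ai_rack_kw = gpu_servers_per_rack * kw_per_gpu_server
print(f"AI rack: {ai_rack_kw:.0f} kW vs legacy rack: {legacy_rack_kw:.0f} kW "
      f"({ai_rack_kw / legacy_rack_kw:.0f}x)")
# At ~40 kW per rack, air cooling no longer keeps up, hence the return of liquid cooling.
```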
So what does it all mean? Dave’s article led with the idea that the cloud giants were “stumbling,” but I think the outcomes of all of this disruption are quite a bit more nuanced. Here is what we know:
Enterprise IT is changing and embracing techniques like Platform Engineering & MLOps
Private enterprise data is necessary for training new AI models to compete effectively
Enterprises need to deploy and manage apps from the core to the edge, across public or private clouds, and really across infrastructure of any kind
AI is changing everything
It looks like the future is a mix of the new and the old, with an ever-increasing number of applications and services, all widely distributed, attached to public and private data sources, and none of it getting easier. We need to look at public cloud and private cloud as one part of the equation (the utility infrastructure we consume), view abstractions like Kubernetes and Platform Engineering as another part (the orchestration layer that manages that utility), and adopt new AI solutions on top to help us deliver, manage, and understand what it is we are all building together.
Then, when we’re done with all of that, we’ll see where the pendulum and changing tech tides take us next.