Compute is the New Oil: Insights from RAISE Summit 2025
At the RAISE Summit in 2025, Mirantis CTO Shaun O’Meara joined executives from Crusoe, Lambda, Nebius, and Sesterce to discuss the urgency and complexity of infrastructure challenges around AI, and how AI is shaping infrastructure’s future.
Watch the recording below to hear practical insights on how organizations can meet AI infrastructure challenges and where the industry is heading next, or continue reading for an overview of the panel discussion.
Why Traditional Cloud Can’t Keep Up
The conversation opened by recognizing that the traditional cloud model, built on generalized, multi-tenant CPU servers, is no longer adequate for modern AI workloads. These workloads require highly specialized infrastructure that incorporates powerful GPUs, high-speed networking, and tightly integrated system design. Unlike CPU servers, GPU servers often cannot be efficiently time-shared, and they carry much higher costs. As a result, infrastructure must be purpose-built, with new deployment models that prioritize performance, scale, and availability.
To meet this challenge, infrastructure providers are adopting vertically integrated strategies. Advanced virtualization and orchestration solutions now allow massive GPU clusters to be divided into efficient sub-clusters, giving organizations more flexibility in accessing compute without overcommitting resources. These innovations help expand access to high-performance infrastructure while maintaining performance and reliability.
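To make the sub-cluster idea concrete, here is a minimal sketch in Python (with hypothetical names; no real orchestrator is this simple): a fixed GPU pool is carved into dedicated sub-clusters per tenant, and any request that would overcommit the pool is rejected rather than queued or oversubscribed.

```python
# Hypothetical illustration of sub-cluster allocation against a fixed GPU pool.
# Real orchestration platforms add scheduling, topology, and failure handling.
from dataclasses import dataclass, field


@dataclass
class GpuCluster:
    total_gpus: int
    allocations: dict = field(default_factory=dict)  # tenant -> GPU count

    @property
    def free_gpus(self) -> int:
        return self.total_gpus - sum(self.allocations.values())

    def carve_subcluster(self, tenant: str, gpus: int) -> bool:
        """Reserve a dedicated sub-cluster; refuse rather than overcommit."""
        if gpus > self.free_gpus:
            return False
        self.allocations[tenant] = self.allocations.get(tenant, 0) + gpus
        return True

    def release(self, tenant: str) -> None:
        """Return a tenant's GPUs to the shared pool."""
        self.allocations.pop(tenant, None)


cluster = GpuCluster(total_gpus=512)
cluster.carve_subcluster("training-team", 256)    # accepted
cluster.carve_subcluster("inference-team", 128)   # accepted
print(cluster.free_gpus)                          # 128 GPUs left
print(cluster.carve_subcluster("lab", 256))       # False: would overcommit
```

The key design choice mirrored here is that capacity is reserved up front and never oversubscribed, which is what lets tenants treat a sub-cluster as dedicated hardware.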
Reducing Complexity and Increasing Usability
Modern AI infrastructure involves far more than just installing hardware. Managing GPU allocation, storage integration, and multi-tenant isolation presents significant technical complexity. Providers are investing in software that abstracts these layers in order to enable faster setup, improve security, and simplify user experience.
Usability is becoming a critical factor in adoption. While early adopters may have the expertise to work with low-level tools, broader enterprise audiences expect intuitive interfaces, seamless APIs, and integration with their existing workflows. Speed of deployment remains vital, but simplicity and support are now equally important for long-term success.
Sovereign Infrastructure Gains Momentum
Sovereign infrastructure is rapidly becoming a hot topic, especially in regions like Europe where local compute capacity remains limited compared to global leaders. Building domestic data centers, leveraging local energy sources, and ensuring data locality are becoming key pillars of infrastructure strategy.
This movement is about more than compliance; it is a matter of digital independence and economic resilience. Organizations are increasingly aware of the risks associated with relying solely on foreign cloud providers, especially when it comes to AI workloads that process sensitive or proprietary data. Sovereign infrastructure aims to close this gap while maintaining competitive performance and quality.
Working Toward Widespread Enterprise Adoption
While much of the current demand is driven by AI-native startups and research labs, the next wave of growth will come from enterprise adoption. Yet many enterprises find the current landscape confusing. Offerings vary widely in pricing models, technical requirements, and service levels.
To address this, infrastructure providers are beginning to shift from offering raw compute toward delivering complete solutions. Managed inference services, orchestration platforms, and developer-friendly APIs are becoming essential tools. At the same time, collaborations are helping providers move toward enterprise-ready platforms that reduce infrastructure complexity.
Looking Ahead
The RAISE Summit panel on "Compute is the New Oil" offered a timely look at how infrastructure is emerging as a critical foundation for innovation and growth. Today, infrastructure-as-a-service dominates due to overwhelming demand; however, longer-term value may lie in building more complete solutions that include platform and software services.
There is no single path forward. Some providers will focus on infrastructure scale and specialization, while others move toward managed service and enterprise platforms. What remains clear is that infrastructure matters more than ever.
To support this shift in AI infrastructure, Mirantis delivers k0rdent: a declarative, Kubernetes-native platform for scalable, multi-tenant AI. It automates orchestration, enforces policy, and brings consistency across clouds, edge, and data centers.
To learn more about k0rdent and see how you can build your own scalable and sovereign AI Factory, download the Mirantis AI Factory Reference Architecture.
