Optimized Scalable Storage Enabled through Kubernetes
Ceph is a storage back end for cloud environments: a distributed object store and file system designed to provide excellent performance, reliability, and scalability.
Why Ceph Storage?
While your developers need a cloud storage solution, IT/Ops needs a unified pool of storage that can be scaled up simply by adding storage server nodes. Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability. It is an open source, scale-out storage project that meets these requirements for both block and object storage in OpenStack. Ceph has built-in self-healing and can withstand underlying hardware failures, and additional capacity can be added whenever required.
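Because Ceph replicates data across nodes for resilience, usable capacity is raw capacity divided by the replication factor, and it grows linearly as nodes are added. A rough back-of-envelope sketch (three-way replication is a common Ceph default; the node sizes here are hypothetical):

```python
# Rough usable-capacity estimate for a replicated Ceph pool.
# Assumes 3-way replication (a common default); node sizes are hypothetical.

def usable_capacity_tb(node_sizes_tb, replication_factor=3):
    """Raw capacity summed across all nodes, divided by the replication factor."""
    return sum(node_sizes_tb) / replication_factor

# Four storage nodes of 48 TB raw each, 3x replication:
print(usable_capacity_tb([48, 48, 48, 48]))  # 64.0

# Scale-out: adding a fifth 48 TB node grows usable capacity linearly.
print(usable_capacity_tb([48] * 5))  # 80.0
```

Erasure-coded pools change the math (usable capacity is raw times k/(k+m)), but the scale-out property is the same: more nodes, more capacity.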
Mirantis Flow offers Ceph as a primary solution for all storage types, including image storage, ephemeral storage, persistent block storage, and object storage.
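To illustrate how Ceph persistent block storage is typically consumed from Kubernetes, here is a sketch of a StorageClass and PersistentVolumeClaim backed by the ceph-csi RBD driver. The `clusterID`, pool, and secret names are placeholders; an actual deployment supplies values from its own cluster configuration.

```yaml
# Sketch: exposing a Ceph RBD pool to Kubernetes via the ceph-csi driver.
# clusterID, pool, and secret names are placeholders for illustration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <ceph-cluster-id>
  pool: kubernetes
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ceph-rbd
```

Once the claim is bound, any pod that mounts `app-data` gets a Ceph-backed block volume, and `allowVolumeExpansion` lets the claim grow without re-provisioning.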
Scalable Cloud Native Storage
Scale-up storage no longer works; clouds require storage that scales out simply by adding nodes.
Self-Healing & Resilience in Software
In a cloud, resilience moves from the hardware layer to the software layer.
Automated Storage
Storage in a cloud environment needs to be automated, both for IT/Ops and for developers.
Open Source
Open source is a must in cloud environments. Open source ensures velocity, access to continuous innovation, and no lock-in.
With Ceph storage, you won’t be forced into vertical scaling. Mirantis’ experts can show you how to add storage nodes and seamlessly increase capacity while ensuring reliability.
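As a sketch of what adding a storage node looks like, the commands below assume a cluster managed with the upstream cephadm orchestrator (deployment-specific tooling such as DriveTrain automates equivalent steps); the hostname and address are hypothetical.

```shell
# Sketch: scaling out a cephadm-managed Ceph cluster by one node.
# Hostname and IP are hypothetical; run from a host with admin keys.

# Register the new storage server with the orchestrator.
ceph orch host add node4 10.0.0.14

# Create OSDs on every unused disk the orchestrator discovers.
ceph orch apply osd --all-available-devices

# Watch data rebalance onto the new OSDs, then confirm cluster health.
ceph osd df
ceph -s
```

Ceph rebalances existing data onto the new OSDs automatically, so capacity grows without downtime or manual data migration.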
Automatic Scaling with Mirantis Cloud
DriveTrain’s model-driven architecture includes formulas to implement and scale fully integrated Ceph clusters and the desired interfaces within the MCP open cloud.
Enhance Your Storage Observability With StackLight
Keep tabs on the performance and utilization of Ceph storage across your workloads from within Mirantis StackLight OSS. The DevOps Portal within StackLight provides real-time metrics on how your users are employing the Ceph clusters.