Breaking records in scaling Mount Everest is like modern data storage
New world record, most times to summit Mount Everest
Stories of people achieving greatness inspire us to be better. On May 17, 2023, Kami Rita Sherpa scaled Mount Everest for the 27th time. Climbing Everest once is more than most of us could ever dream of achieving. Kami has been doing this since 1992, and he is now in his early 50s. Amazing.
Another, Pasang Dawa Sherpa, is at Kami’s heels to beat the record, with 26 climbs notched on his rope belt. Where one has gone, others will follow.
This reminds me of the IMAX documentary Everest from 1998. Narrated by Qui-Gon Jinn – I mean Liam Neeson – we are transported to the summit in a series of breathtaking views that have you clutching your theater seat. My most vivid memory is of crossing the makeshift ladder-turned-bridge wedged across a deadly gap in the path to the top.
Then (at least as my memory replays it) they cut to scenes from “the making of,” with smiling Sherpas taking care of the campsite and lifting the heavy movie equipment that imprints 70mm film for the IMAX experience. That’s when you realize who is doing the hardest work on the journey up the mountain and back.
What that has to do with data
Believe it or not, that reminds me of technology for managing data storage. In particular, I think of Rook and Ceph. Rook is open source software that brings Ceph-managed data storage into a Kubernetes cluster. Together, they form one big bundle of software that handles everything about keeping your data safe, resilient to corruption, and available wherever you need it.
Allow me to explain.
At the lowest level, we store data on physical media. Spinning magnetic disks are still the most common, although solid-state drives (SSDs) are faster, and adoption is rising as their cost drops. Non-Volatile Memory Express (NVMe) is the up-and-coming interface that connects flash storage directly over PCIe, shedding the bottlenecks of older disk interfaces and beating SATA-attached SSDs in speed. For high-volume, long-term storage, magnetic tape is still in common use as well.
Flash drives replaced floppy disks as handy, portable storage, and optical discs (CDs, DVDs, and Blu-ray) were great before network speeds made them unnecessary. In the future, we will certainly find ways to store even more data, more quickly. For this discussion, we can set aside legacy and future storage media.
Ceph does the heavy lifting
Ceph steps in to give you control over your data. Ceph handles object, block, and file storage, using the Reliable Autonomic Distributed Object Store (RADOS) to manage data replication, fault tolerance, placement, and load balancing across the cluster. Basically, it acts like a sherpa, making sure that data is packed well, items are accessible, the load is spread evenly, and nothing gets lost.
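In a Rook-managed cluster, that replication and placement policy is something you declare. Here is a minimal sketch based on the canonical Rook examples (the pool name `replicapool` is just the example default): three copies of every object, spread across distinct hosts.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool        # example name from the Rook docs
  namespace: rook-ceph
spec:
  failureDomain: host      # place replicas on different hosts
  replicated:
    size: 3                # keep three copies of every object
```

Ceph then enforces the policy on its own: lose a disk or a host, and the cluster re-replicates until three copies exist again.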
For larger expeditions, a climbing party might need two or three sherpas. Likewise, Ceph scales horizontally. Simply attach more storage nodes, and Ceph will rebalance data across the available space.
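With Rook, “attach more storage nodes” can be exactly that. Below is a trimmed sketch of a CephCluster resource (the Ceph version and other required fields are omitted for brevity): with `useAllNodes` and `useAllDevices` set, adding a node with empty disks to the Kubernetes cluster is enough for Rook to create new storage daemons and for Ceph to rebalance onto them.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3               # three monitors for quorum
  storage:
    useAllNodes: true      # every cluster node may contribute storage
    useAllDevices: true    # consume any empty, unformatted device found
```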
One important function of Ceph is self-healing. It can detect issues with the data and make corrections while problems are still recoverable. Through a process called scrubbing, Ceph spot-checks small portions of data against the copies of that same data held elsewhere in the cluster, and when it finds differences, it corrects the deviations. Light scrubbing runs frequently in the background and compares object sizes and metadata; deep scrubbing runs on a longer cycle and reads the data itself, comparing checksums.
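The scrub cadence is tunable. In a Rook cluster, Ceph settings can be supplied through the standard `rook-config-override` ConfigMap; the interval values below are illustrative examples, not recommendations.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override   # Rook's standard override mechanism
  namespace: rook-ceph
data:
  config: |
    [osd]
    osd_scrub_min_interval = 86400     # light scrub at most once a day (seconds)
    osd_deep_scrub_interval = 604800   # deep scrub roughly weekly
```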
You might say that Ceph meditates to find inner peace.
Rook leads the expedition
At the next level up, we have Rook. Think of Rook as the expedition coordinator that gives Kubernetes containers access to the persistent storage that Ceph manages. Applications running in Kubernetes wake up ready to climb, but first they have to find their data. Rook integrates with Kubernetes to connect them to that data through persistent volumes (PVs) and persistent volume claims (PVCs).
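From the application’s point of view, that connection is just an ordinary claim. A minimal sketch, assuming a StorageClass named `rook-ceph-block` as in the standard Rook examples (the claim name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: expedition-data               # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce                   # one node mounts the block device at a time
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block   # StorageClass from the Rook examples
```

A Pod then mounts the claim like any other volume; it never needs to know that Ceph is underneath.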
Rook offers all of the benefits of Ceph – self-healing, auto-recovery, horizontal scalability – without making you worry about so many of the details. Plus, Rook adds dynamic volume provisioning for making data resources available on demand.
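Dynamic provisioning hinges on a StorageClass that points at Rook’s Ceph CSI driver. This sketch is trimmed from the canonical Rook block-storage example; a real deployment also needs the CSI secret parameters that the Rook documentation spells out.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com   # Rook's RADOS Block Device CSI driver
parameters:
  clusterID: rook-ceph                    # namespace of the Rook cluster
  pool: replicapool                       # Ceph pool backing the volumes
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
```

With this in place, every new PVC prompts Ceph to carve out a fresh block image on demand.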
This is the perfect solution for scaling your data. Ceph clusters can run anywhere and scale horizontally. Kubernetes containers can run anywhere and scale horizontally. Organizations with the largest data volumes are breaking records with Rook and Ceph.
For instance, a scientific research group in the UK manages 60 petabytes of data on Ceph and has plans to double that any day (drives are racked and ready). Check out one of my favorite talks at the recent Cephalocon in Amsterdam.
As long as people inhabit the Earth, we will keep scaling the highest peaks. For a safer option, we can climb using VR. Imagine the data required to reproduce the complete experience of climbing Everest for real, without the risk of death from falling or exposure.
Even then, only the humble Sherpa gets the life-affirming satisfaction of achievement that comes from taking such a risk, repeatedly, throughout a lifetime. Respect.