Simple OSD deployment scenario for a Rook Ceph cluster

In a Rook Ceph cluster, storage is provided by Ceph Object Storage Daemons (OSDs). An OSD stores data; handles data replication, recovery, and rebalancing; and provides monitoring information to Ceph Monitors and Managers by checking other Ceph OSD daemons for a heartbeat. At least three Ceph OSDs are normally required for redundancy and high availability.
In this series, we’ll talk about different ways of specifying OSD deployment topology based on your environment and use case, and help you select the right configuration when deploying your cluster.
To begin with, we’ll discuss simple ways of specifying the storage configuration details, which will guide you in deploying your OSDs.
How to configure the OSDs?
For simple deployments where we just want to utilize the available storage devices or HDDs (for backup or archival needs, among others), the following configurations can be referenced:
- To use all the available devices on the nodes in the cluster, here’s an example configuration:
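A minimal sketch of the `storage` section of a Rook `CephCluster` custom resource (the `rook-ceph` name and namespace shown here are the conventional defaults, not required values):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    useAllNodes: true    # consider every node in the cluster for OSDs
    useAllDevices: true  # consume every raw, empty device Rook discovers
```

With `useAllNodes: true` and `useAllDevices: true`, Rook creates an OSD on every eligible raw device it finds on every node.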
For further information, please refer to the Rook documentation.
- To use specific devices on all the nodes in the cluster, for example the block devices `sda`, `sdc`, and `sdd`, the following configuration example can be referenced:
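A sketch using the `deviceFilter` setting, a regular expression that Rook applies to the short device names on every node (here `^sd[acd]` matches `sda`, `sdc`, and `sdd`; this assumes those names are consistent across hosts):

```yaml
spec:
  storage:
    useAllNodes: true
    useAllDevices: false     # only devices matching the filter are consumed
    deviceFilter: "^sd[acd]" # regex matched against names like sda, sdc, sdd
```

If device names differ from host to host, individual devices can instead be listed per node under the `nodes` section of the same `storage` spec.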
- To use the exact block devices on hosts connected to a PCI bus, and to allow individual devices and partitions to be selected for consumption by OSDs, here’s an example:
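A sketch using the `devicePathFilter` setting, which matches against persistent paths under `/dev/disk/by-path` (the PCI address shown is a hypothetical placeholder; partitions on a matching device are selected the same way):

```yaml
spec:
  storage:
    useAllNodes: true
    useAllDevices: false
    # match devices by their stable PCI bus path rather than by sdX name
    devicePathFilter: "^/dev/disk/by-path/pci-0000:05:00.0-.*"
```

Matching by path is useful when kernel device names such as `sda` are not stable across reboots.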
If you want to learn further about these configuration settings, please refer to the Storage Selection Settings documentation.
Thanks for reading! In an upcoming blog, we’ll discuss advanced OSD deployment topologies using various other configuration settings.