# Pool CRD
Rook allows creation and customization of storage pools through the custom resource definitions (CRDs). The following settings are available for pools.
## Sample
```yaml
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: ecpool
  namespace: rook
spec:
  # For a replicated pool, uncomment these settings and remove the erasureCoded section.
  # replicated:
  #   size: 3
  erasureCoded:
    dataChunks: 2
    codingChunks: 1
  crushRoot: default
```
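The sample above defines an erasure-coded pool. As a minimal sketch, a replicated pool using only the settings documented below would look like the following; the pool name `replicapool` is illustrative, not a required value:

```yaml
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  # Any valid resource name may be used; "replicapool" is just an example.
  name: replicapool
  namespace: rook
spec:
  replicated:
    # Keep three copies of each object. Do not combine with erasureCoded settings.
    size: 3
```

Assuming the manifest is saved to a file such as `pool.yaml`, it can be created with `kubectl create -f pool.yaml` and removed again with `kubectl delete -f pool.yaml`.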
## Pool Settings
### Metadata
- `name`: The name of the pool to create.
- `namespace`: The namespace of the Rook cluster where the pool is created.
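The pool created in Ceph typically carries the same name as the custom resource, so once the sample above has been applied it should appear as `ecpool` in the output of `ceph osd lspools` run from the Rook toolbox (an illustrative cross-check, not a required step).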
### Spec
- `replicated`: Settings for a replicated pool. If specified, `erasureCoded` settings must not be specified.
  - `size`: The number of copies of the data in the pool.
- `erasureCoded`: Settings for an erasure-coded pool. If specified, `replicated` settings must not be specified.
  - `dataChunks`: Number of data chunks per object in an erasure coded storage pool.
  - `codingChunks`: Number of coding chunks per object in an erasure coded storage pool.
- `failureDomain`: The failure domain across which the replicas or chunks of data will be spread. Possible values are `osd` or `host`, with the default of `host`. For example, if you have replication of size `3` and the failure domain is `host`, all three copies of the data will be placed on OSDs found on unique hosts; in that case you are guaranteed to tolerate the failure of two hosts. If the failure domain were `osd`, you would be able to tolerate the loss of two devices. Similarly for erasure coding, the data and coding chunks are spread across the requested failure domain, as shown in the sketch after this list.
- `crushRoot`: The root in the CRUSH map to be used by the pool. If left empty or unspecified, the default root will be used. Creating a CRUSH hierarchy for the OSDs currently requires running the Ceph tools from the Rook toolbox, as described here.
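To illustrate how `failureDomain` combines with erasure coding, the sketch below pins the sample pool to the `osd` failure domain. With `dataChunks: 2` and `codingChunks: 1`, each object is split into 2 + 1 = 3 chunks stored on distinct OSDs, and the pool can tolerate the loss of up to `codingChunks` (here, one) of those OSDs without losing data. The field values are taken from the settings above; nothing beyond them is assumed.

```yaml
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: ecpool
  namespace: rook
spec:
  # Spread the 3 chunks (2 data + 1 coding) across individual OSDs rather than hosts.
  failureDomain: osd
  erasureCoded:
    dataChunks: 2
    codingChunks: 1
```

In general, an erasure-coded pool needs at least `dataChunks + codingChunks` OSDs (or hosts, when the failure domain is `host`) so that every chunk can be placed in a distinct failure domain.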