Documentation/CRDs/Cluster/ceph-cluster-crd.md (+4 -4)
````diff
@@ -26,7 +26,7 @@ Settings can be specified at the global level to apply to the cluster as a whole
 * `external`:
   * `enable`: if `true`, the cluster will not be managed by Rook but via an external entity. This mode is intended to connect to an existing cluster. In this case, Rook will only consume the external cluster. However, Rook will be able to deploy various daemons in Kubernetes such as object gateways, mds and nfs if an image is provided and will refuse otherwise. If this setting is enabled **all** the other options will be ignored except `cephVersion.image` and `dataDirHostPath`. See [external cluster configuration](external-cluster/external-cluster.md). If `cephVersion.image` is left blank, Rook will refuse the creation of extra CRs like object, file and nfs.
 * `cephVersion`: The version information for launching the ceph daemons.
-  * `image`: The image used for running the ceph daemons. For example, `quay.io/ceph/ceph:v18.2.4`. For more details read the [container images section](#ceph-container-images).
+  * `image`: The image used for running the ceph daemons. For example, `quay.io/ceph/ceph:v19.2.0`. For more details read the [container images section](#ceph-container-images).
    For the latest ceph images, see the [Ceph DockerHub](https://hub.docker.com/r/ceph/ceph/tags/).
    To ensure a consistent version of the image is running across all nodes in the cluster, it is recommended to use a very specific image version.
    Tags also exist that would give the latest version, but they are only recommended for test environments. For example, the tag `v19` will be updated each time a new Squid build is released.
````
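For context, pinning the image looks like this in a `CephCluster` spec. A minimal sketch, not part of the diff: the metadata names and `dataDirHostPath` value are the conventional defaults from the Rook examples, not anything mandated by this change.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph        # conventional example name, not required
  namespace: rook-ceph
spec:
  cephVersion:
    # Pin an exact release tag; floating tags like `v19` are only for test environments.
    image: quay.io/ceph/ceph:v19.2.0
  dataDirHostPath: /var/lib/rook
```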
Documentation/CRDs/Object-Storage/ceph-object-store-crd.md (+16 -1)
````diff
@@ -138,11 +138,26 @@ The following options can be configured in the `keystone`-section:
 * `tokenCacheSize`: specifies the maximum number of entries in each Keystone token cache.
 * `url`: The url of the Keystone API endpoint to use.
 
-The protocols section is divided into two parts:
+### Protocols Settings
 
+The protocols section is divided into three parts:
+
+- `enableAPIs` - the list of APIs to be enabled in the RGW instance. If no values are set, all APIs will be enabled. Possible values: `s3, s3website, swift, swift_auth, admin, sts, iam, notifications`. Represents the RGW [rgw_enable_apis](https://docs.ceph.com/en/reef/radosgw/config-ref/#confval-rgw_enable_apis) config parameter.
 - a section to configure S3
 - a section to configure swift
 
+```yaml
+spec:
+  [...]
+  protocols:
+    enableAPIs: []
+    swift:
+      # a section to configure swift
+    s3:
+      # a section to configure s3
+  [...]
+```
+
 #### protocols/S3 settings
 
 In the `s3` section of the `protocols` section the following options can be configured:
````
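As a usage illustration (not part of the diff): restricting a store to a subset of APIs with the new field might look like the sketch below. The store name and gateway settings are invented for the example; the field name and possible values come from the docs above.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store          # illustrative name
  namespace: rook-ceph
spec:
  gateway:
    port: 80
    instances: 1
  protocols:
    # Serve only S3 and STS; swift, admin, notifications, etc. stay disabled.
    enableAPIs: ["s3", "sts"]
```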
Documentation/CRDs/Shared-Filesystem/ceph-filesystem-crd.md (+7 -1)
````diff
@@ -111,8 +111,10 @@ Also see an example in the [`storageclass-ec.yaml`](https://github.com/rook/rook
 The pools allow all of the settings defined in the Pool CRD spec. For more details, see the [Pool CRD](../Block-Storage/ceph-block-pool-crd.md) settings. In the example above, there must be at least three hosts (size 3) and at least eight devices (6 data + 2 coding chunks) in the cluster.
 
 * `metadataPool`: The settings used to create the filesystem metadata pool. Must use replication.
+  * `name`: (optional) Override the default generated name of the metadata pool.
 * `dataPools`: The settings to create the filesystem data pools. Optionally (and we highly recommend), a pool name can be specified with the `name` field to override the default generated name; see more below. If multiple pools are specified, Rook will add the pools to the filesystem. Assigning users or files to a pool is left as an exercise for the reader with the [CephFS documentation](http://docs.ceph.com/docs/master/cephfs/file-layouts/). The data pools can use replication or erasure coding. If erasure coding pools are specified, the cluster must be running with bluestore enabled on the OSDs.
-  * `name`: (optional, and highly recommended) Override the default generated name of the pool. The final pool name will consist of the filesystem name and pool name, e.g., `<fsName>-<poolName>`. We highly recommend to specify `name` to prevent issues that can arise from modifying the spec in a way that causes Rook to lose the original pool ordering.
+  * `name`: (optional, and highly recommended) Override the default generated name of the pool. We highly recommend specifying `name` to prevent issues that can arise from modifying the spec in a way that causes Rook to lose the original pool ordering.
+* `preservePoolNames`: Preserve pool names as specified.
 * `preserveFilesystemOnDelete`: If it is set to 'true' the filesystem will remain when the
   CephFilesystem resource is deleted. This is a security measure to avoid loss of data if the
   CephFilesystem resource is deleted accidentally. The default value is 'false'. This option
@@ -121,6 +123,10 @@ The pools allow all of the settings defined in the Pool CRD spec. For more detai
 `preserveFilesystemOnDelete`. For backwards compatibility and upgradeability, if this is set to
 'true', Rook will treat `preserveFilesystemOnDelete` as being set to 'true'.
 
+### Generated Pool Names
+
+Both `metadataPool` and `dataPools` support defining names as required. The final pool name will consist of the filesystem name and pool name, e.g., `<fsName>-<poolName>`, or `<fsName>-metadata` for the `metadataPool`. For more granular control, you may set `preservePoolNames` to `true` to disable name generation; in that case all pool names are used exactly as given.
+
 ## Metadata Server Settings
 
 The metadata server settings correspond to the MDS daemon settings.
````
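To make the naming behavior concrete, here is a hedged sketch of a filesystem with named pools; the filesystem name, replication sizes, and MDS settings are invented for illustration, and the name expansion follows the `<fsName>-<poolName>` rule quoted above.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    name: metadata       # final pool name: myfs-metadata
    replicated:
      size: 3
  dataPools:
    - name: data0        # final pool name: myfs-data0; naming keeps pool ordering stable across spec edits
      replicated:
        size: 3
  # Set to true to use the pool names above exactly as given (no <fsName>- prefix).
  preservePoolNames: false
  metadataServer:
    activeCount: 1
```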
Documentation/CRDs/specification.md

````diff
@@ … @@
+If no value is provided, all APIs will be enabled: s3, s3website, swift, swift_auth, admin, sts, iam, notifications.
+If enabled APIs are set, all remaining APIs will be disabled.
+This option overrides the S3.Enabled value.</p>
+</td>
+</tr>
+<tr>
+<td>
 <code>s3</code><br/>
 <em>
 <a href="#ceph.rook.io/v1.S3Spec">
@@ -11901,7 +11949,8 @@ bool
 </td>
 <td>
 <em>(Optional)</em>
-<p>Whether to enable S3. This defaults to true (even if protocols.s3 is not present in the CRD). This maintains backwards compatibility – by default S3 is enabled.</p>
+<p>Deprecated: use protocols.enableAPIs instead.
+Whether to enable S3. This defaults to true (even if protocols.s3 is not present in the CRD). This maintains backwards compatibility – by default S3 is enabled.</p>
````
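In practice, the deprecation means a spec that previously disabled S3 with the per-protocol boolean can express the same intent through `enableAPIs`. A hedged sketch of the two forms: the `enabled` key is my reading of the existing `protocols.s3` option described above, and the equivalence shown is inferred from the field descriptions, not text from the PR.

```yaml
# Deprecated form: keep S3 off via the per-protocol flag.
protocols:
  s3:
    enabled: false

# Preferred form: enumerate the APIs to serve; anything omitted is disabled.
protocols:
  enableAPIs: ["swift", "swift_auth"]
```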
Documentation/Helm-Charts/operator-chart.md

````diff
@@ … @@
-|`csi.cephcsi.tag`| Ceph CSI image tag |`"v3.12.3"`|
+|`csi.cephcsi.tag`| Ceph CSI image tag |`"v3.13.0"`|
 |`csi.cephfsLivenessMetricsPort`| CSI CephFS driver metrics port |`9081`|
 |`csi.cephfsPodLabels`| Labels to add to the CSI CephFS Deployments and DaemonSets Pods |`nil`|
 |`csi.clusterName`| Cluster name identifier to set as metadata on the CephFS subvolume and RBD images. This will be useful in cases like for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster |`nil`|
````
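If the CSI image needs to be pinned rather than following the chart default, the dotted parameter maps to nested values in the usual Helm way. A minimal sketch of a values override, assuming only the `csi.cephcsi.tag` key from the table above; the file name is illustrative.

```yaml
# values.override.yaml: pin the Ceph CSI image tag
csi:
  cephcsi:
    tag: "v3.13.0"
```

Applied with something like `helm upgrade rook-ceph rook-release/rook-ceph -f values.override.yaml`, this leaves the rest of the chart defaults intact.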
````diff
@@ … @@
 which includes a full list of supported encryption configurations.
-More details on encrypting CephFS PVCs can be found [here](https://github.com/ceph/ceph-csi/blob/v3.12.3/docs/deploy-cephfs.md#cephfs-volume-encryption).
-A sample KMS configmap can be found [here](https://github.com/ceph/ceph-csi/blob/v3.12.3/examples/kms/vault/kms-config.yaml).
+More details on encrypting CephFS PVCs can be found [here](https://github.com/ceph/ceph-csi/blob/v3.13.0/docs/deploy-cephfs.md#cephfs-volume-encryption).
+A sample KMS configmap can be found [here](https://github.com/ceph/ceph-csi/blob/v3.13.0/examples/kms/vault/kms-config.yaml).
 
 !!! note
     Not all KMS are compatible with fscrypt. Generally, KMS that either store secrets to use directly (like Vault)
````