Commit b20a25a

Author: Ceph Jenkins
Merge commit '72ddccaa2e0f3fdb6b1085603a004e52c1a39f6c' into sync_us--master
Signed-off-by: Ceph Jenkins <ceph-jenkins@redhat.com>
2 parents: c9f6743 + 72ddcca

78 files changed: +1085 -396 lines


.github/workflows/canary-integration-suite.yml

+1 -1

@@ -31,5 +31,5 @@ jobs:
   canary-tests:
     uses: ./.github/workflows/canary-integration-test.yml
     with:
-      ceph_images: '["quay.io/ceph/ceph:v18"]'
+      ceph_images: '["quay.io/ceph/ceph:v19"]'
     secrets: inherit

.github/workflows/canary-integration-test.yml

+1 -1

@@ -5,7 +5,7 @@ on:
   inputs:
     ceph_images:
       description: "JSON list of Ceph images for creating Ceph cluster"
-      default: '["quay.io/ceph/ceph:v18"]'
+      default: '["quay.io/ceph/ceph:v19"]'
       type: string

 defaults:

.github/workflows/codespell.yaml

+1 -1

@@ -57,4 +57,4 @@ jobs:
         with:
           fetch-depth: 0
       - name: misspell
-        uses: reviewdog/action-misspell@ef8b22c1cca06c8d306fc6be302c3dab0f6ca12f # v1.23.0
+        uses: reviewdog/action-misspell@18ffb61effb93b47e332f185216be7e49592e7e1 # v1.26.1

.github/workflows/daily-nightly-jobs.yml

+1 -1

@@ -356,5 +356,5 @@ jobs:
     if: github.repository == 'rook/rook'
     uses: ./.github/workflows/canary-integration-test.yml
     with:
-      ceph_images: '["quay.io/ceph/ceph:v18", "quay.io/ceph/daemon-base:latest-main-devel", "quay.io/ceph/daemon-base:latest-reef-devel", "quay.io/ceph/daemon-base:latest-squid-devel"]'
+      ceph_images: '["quay.io/ceph/ceph:v18", "quay.io/ceph/ceph:v19", "quay.ceph.io/ceph-ci/ceph:main", "quay.ceph.io/ceph-ci/ceph:reef", "quay.ceph.io/ceph-ci/ceph:squid"]'
     secrets: inherit

.github/workflows/scorecards.yml

+1 -1

@@ -64,6 +64,6 @@ jobs:
       # Upload the results to GitHub's code scanning dashboard (optional).
       # Commenting out will disable upload of results to your repo's Code Scanning dashboard
       - name: "Upload to code-scanning"
-        uses: github/codeql-action/upload-sarif@f09c1c0a94de965c15400f5634aa42fac8fb8f88 # v3.27.5
+        uses: github/codeql-action/upload-sarif@aa578102511db1f4524ed59b8cc2bae4f6e88195 # v3.27.6
         with:
           sarif_file: results.sarif

.mergify.yml

+20 -21

@@ -207,27 +207,26 @@ pull_request_rules:
       - "check-success=crds-gen"
      - "check-success=docs-check"
      - "check-success=pylint"
-     - "check-success=canary-tests / canary (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / raw-disk-with-object (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / two-osds-in-device (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / osd-with-metadata-partition-device (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / osd-with-metadata-device (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / encryption (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / lvm (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / pvc (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / pvc-db (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / pvc-db-wal (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / encryption-pvc (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / encryption-pvc-db (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / encryption-pvc-db-wal (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / encryption-pvc-kms-vault-token-auth (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / encryption-pvc-kms-vault-k8s-auth (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / lvm-pvc (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / multi-cluster-mirroring (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / rgw-multisite-testing (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / encryption-pvc-kms-ibm-kp (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / multus-public-and-cluster (quay.io/ceph/ceph:v18)"
-     - "check-success=canary-tests / csi-hostnetwork-disabled (quay.io/ceph/ceph:v18)"
+     - "check-success=canary-tests / canary (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / raw-disk-with-object (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / two-osds-in-device (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / osd-with-metadata-partition-device (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / osd-with-metadata-device (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / encryption (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / lvm (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / pvc (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / pvc-db (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / pvc-db-wal (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / encryption-pvc (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / encryption-pvc-db (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / encryption-pvc-db-wal (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / encryption-pvc-kms-vault-token-auth (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / encryption-pvc-kms-vault-k8s-auth (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / lvm-pvc (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / multi-cluster-mirroring (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / rgw-multisite-testing (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / encryption-pvc-kms-ibm-kp (quay.io/ceph/ceph:v19)"
+     - "check-success=canary-tests / multus-public-and-cluster (quay.io/ceph/ceph:v19)"
      - "check-success=TestCephSmokeSuite (v1.27.16)"
      - "check-success=TestCephSmokeSuite (v1.31.0)"
      - "check-success=TestCephHelmSuite (v1.27.16)"

CODE-OWNERS

+1

@@ -14,3 +14,4 @@ approvers:
 reviewers:
   - Madhu-1
   - parth-gr
+  - arttor

Documentation/CRDs/Cluster/ceph-cluster-crd.md

+4 -4

@@ -26,7 +26,7 @@ Settings can be specified at the global level to apply to the cluster as a whole
 * `external`:
     * `enable`: if `true`, the cluster will not be managed by Rook but via an external entity. This mode is intended to connect to an existing cluster. In this case, Rook will only consume the external cluster. However, Rook will be able to deploy various daemons in Kubernetes such as object gateways, mds and nfs if an image is provided and will refuse otherwise. If this setting is enabled **all** the other options will be ignored except `cephVersion.image` and `dataDirHostPath`. See [external cluster configuration](external-cluster/external-cluster.md). If `cephVersion.image` is left blank, Rook will refuse the creation of extra CRs like object, file and nfs.
 * `cephVersion`: The version information for launching the ceph daemons.
-    * `image`: The image used for running the ceph daemons. For example, `quay.io/ceph/ceph:v18.2.4`. For more details read the [container images section](#ceph-container-images).
+    * `image`: The image used for running the ceph daemons. For example, `quay.io/ceph/ceph:v19.2.0`. For more details read the [container images section](#ceph-container-images).
       For the latest ceph images, see the [Ceph DockerHub](https://hub.docker.com/r/ceph/ceph/tags/).
       To ensure a consistent version of the image is running across all nodes in the cluster, it is recommended to use a very specific image version.
       Tags also exist that would give the latest version, but they are only recommended for test environments. For example, the tag `v19` will be updated each time a new Squid build is released.

@@ -431,7 +431,7 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: quay.io/ceph/ceph:v18.2.4
+    image: quay.io/ceph/ceph:v19.2.0
   dataDirHostPath: /var/lib/rook
   mon:
     count: 3

@@ -538,7 +538,7 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: quay.io/ceph/ceph:v18.2.4
+    image: quay.io/ceph/ceph:v19.2.0
   dataDirHostPath: /var/lib/rook
   mon:
     count: 3

@@ -668,7 +668,7 @@ kubectl -n rook-ceph get CephCluster -o yaml
   deviceClasses:
     - name: hdd
   version:
-    image: quay.io/ceph/ceph:v18.2.4
+    image: quay.io/ceph/ceph:v19.2.0
     version: 16.2.6-0
   conditions:
     - lastHeartbeatTime: "2021-03-02T21:22:11Z"
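To restate the image guidance above as a sketch (illustrative values): pin an exact release for production clusters and reserve floating tags such as `v19` for test environments.

```yaml
spec:
  cephVersion:
    # Exact release, so every node runs the same Ceph build
    image: quay.io/ceph/ceph:v19.2.0
    # Floating alternative for test clusters only:
    # image: quay.io/ceph/ceph:v19
    allowUnsupported: false
```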

Documentation/CRDs/Cluster/host-cluster.md

+3 -3

@@ -22,7 +22,7 @@ metadata:
 spec:
   cephVersion:
     # see the "Cluster Settings" section below for more details on which image of ceph to run
-    image: quay.io/ceph/ceph:v18.2.4
+    image: quay.io/ceph/ceph:v19.2.0
   dataDirHostPath: /var/lib/rook
   mon:
     count: 3

@@ -49,7 +49,7 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: quay.io/ceph/ceph:v18.2.4
+    image: quay.io/ceph/ceph:v19.2.0
   dataDirHostPath: /var/lib/rook
   mon:
     count: 3

@@ -101,7 +101,7 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: quay.io/ceph/ceph:v18.2.4
+    image: quay.io/ceph/ceph:v19.2.0
   dataDirHostPath: /var/lib/rook
   mon:
     count: 3

Documentation/CRDs/Cluster/pvc-cluster.md

+3 -3

@@ -18,7 +18,7 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: quay.io/ceph/ceph:v18.2.4
+    image: quay.io/ceph/ceph:v19.2.0
   dataDirHostPath: /var/lib/rook
   mon:
     count: 3

@@ -72,7 +72,7 @@ spec:
           requests:
             storage: 10Gi
   cephVersion:
-    image: quay.io/ceph/ceph:v18.2.4
+    image: quay.io/ceph/ceph:v19.2.0
     allowUnsupported: false
   dashboard:
     enabled: true

@@ -128,7 +128,7 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: quay.io/ceph/ceph:v18.2.4
+    image: quay.io/ceph/ceph:v19.2.0
   dataDirHostPath: /var/lib/rook
   mon:
     count: 3

Documentation/CRDs/Cluster/stretch-cluster.md

+1 -1

@@ -34,7 +34,7 @@ spec:
       - name: b
       - name: c
   cephVersion:
-    image: quay.io/ceph/ceph:v18.2.4
+    image: quay.io/ceph/ceph:v19.2.0
     allowUnsupported: true
   # Either storageClassDeviceSets or the storage section can be specified for creating OSDs.
   # This example uses all devices for simplicity.

Documentation/CRDs/Object-Storage/ceph-object-store-crd.md

+16 -1

@@ -138,11 +138,26 @@ The following options can be configured in the `keystone`-section:
 * `tokenCacheSize`: specifies the maximum number of entries in each Keystone token cache.
 * `url`: The url of the Keystone API endpoint to use.

-The protocols section is divided into two parts:
+### Protocols Settings

+The protocols section is divided into three parts:
+
+- `enableAPIs` - the list of APIs to be enabled in the RGW instance. If no values are set, all APIs will be enabled. Possible values: `s3, s3website, swift, swift_auth, admin, sts, iam, notifications`. Represents the RGW [rgw_enable_apis](https://docs.ceph.com/en/reef/radosgw/config-ref/#confval-rgw_enable_apis) config parameter.
 - a section to configure S3
 - a section to configure swift

+```yaml
+spec:
+  [...]
+  protocols:
+    enableAPIs: []
+    swift:
+      # a section to configure swift
+    s3:
+      # a section to configure s3
+  [...]
+```
+
 #### protocols/S3 settings

 In the `s3` section of the `protocols` section the following options can be configured:
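As an illustration of the new `enableAPIs` field, a minimal sketch of a store that serves only the S3 and admin APIs (the store name and gateway settings are illustrative, and the pool configuration is omitted):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  gateway:
    port: 80
    instances: 1
  protocols:
    # APIs not listed here (swift, swift_auth, sts, iam, notifications, ...) are disabled
    enableAPIs:
      - s3
      - admin
```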

Documentation/CRDs/Shared-Filesystem/ceph-filesystem-crd.md

+7 -1

@@ -111,8 +111,10 @@ Also see an example in the [`storageclass-ec.yaml`](https://github.com/rook/rook
 The pools allow all of the settings defined in the Pool CRD spec. For more details, see the [Pool CRD](../Block-Storage/ceph-block-pool-crd.md) settings. In the example above, there must be at least three hosts (size 3) and at least eight devices (6 data + 2 coding chunks) in the cluster.

 * `metadataPool`: The settings used to create the filesystem metadata pool. Must use replication.
+    * `name`: (optional) Override the default generated name of the metadata pool.
 * `dataPools`: The settings to create the filesystem data pools. Optionally (and we highly recommend), a pool name can be specified with the `name` field to override the default generated name; see more below. If multiple pools are specified, Rook will add the pools to the filesystem. Assigning users or files to a pool is left as an exercise for the reader with the [CephFS documentation](http://docs.ceph.com/docs/master/cephfs/file-layouts/). The data pools can use replication or erasure coding. If erasure coding pools are specified, the cluster must be running with bluestore enabled on the OSDs.
-    * `name`: (optional, and highly recommended) Override the default generated name of the pool. The final pool name will consist of the filesystem name and pool name, e.g., `<fsName>-<poolName>`. We highly recommend to specify `name` to prevent issues that can arise from modifying the spec in a way that causes Rook to lose the original pool ordering.
+    * `name`: (optional, and highly recommended) Override the default generated name of the pool. We highly recommend specifying `name` to prevent issues that can arise from modifying the spec in a way that causes Rook to lose the original pool ordering.
+* `preservePoolNames`: Preserve pool names as specified.
 * `preserveFilesystemOnDelete`: If it is set to 'true' the filesystem will remain when the
   CephFilesystem resource is deleted. This is a security measure to avoid loss of data if the
   CephFilesystem resource is deleted accidentally. The default value is 'false'. This option

@@ -121,6 +123,10 @@ The pools allow all of the settings defined in the Pool CRD spec. For more detai
   `preserveFilesystemOnDelete`. For backwards compatibility and upgradeability, if this is set to
   'true', Rook will treat `preserveFilesystemOnDelete` as being set to 'true'.

+### Generated Pool Names
+
+Both `metadataPool` and `dataPools` support defining names as required. The final pool name will consist of the filesystem name and pool name, e.g., `<fsName>-<poolName>`, or `<fsName>-metadata` for the `metadataPool`. For more granular control you may want to set `preservePoolNames` to `true` in the filesystem spec to disable generation of names; in that case all pool names are used exactly as given.
+
 ## Metadata Server Settings

 The metadata server settings correspond to the MDS daemon settings.
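To illustrate the pool naming options above, a sketch of a filesystem that fixes its pool names (the filesystem name, pool names, and replication sizes are illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    name: metadata        # would otherwise be generated as myfs-metadata
    replicated:
      size: 3
  dataPools:
    - name: data0         # would otherwise be generated as myfs-data0
      replicated:
        size: 3
  # Assumption per the FilesystemSpec reference below: use the pool names exactly as given
  preservePoolNames: true
  metadataServer:
    activeCount: 1
    activeStandby: true
```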

Documentation/CRDs/specification.md

+55 -6

@@ -1127,8 +1127,8 @@ FilesystemSpec
 <td>
 <code>metadataPool</code><br/>
 <em>
-<a href="#ceph.rook.io/v1.PoolSpec">
-PoolSpec
+<a href="#ceph.rook.io/v1.NamedPoolSpec">
+NamedPoolSpec
 </a>
 </em>
 </td>

@@ -1151,6 +1151,18 @@ PoolSpec
 </tr>
 <tr>
 <td>
+<code>preservePoolNames</code><br/>
+<em>
+bool
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Preserve pool names as specified</p>
+</td>
+</tr>
+<tr>
+<td>
 <code>preservePoolsOnDelete</code><br/>
 <em>
 bool

@@ -6582,8 +6594,8 @@ FilesystemSnapshotScheduleStatusRetention
 <td>
 <code>metadataPool</code><br/>
 <em>
-<a href="#ceph.rook.io/v1.PoolSpec">
-PoolSpec
+<a href="#ceph.rook.io/v1.NamedPoolSpec">
+NamedPoolSpec
 </a>
 </em>
 </td>

@@ -6606,6 +6618,18 @@ PoolSpec
 </tr>
 <tr>
 <td>
+<code>preservePoolNames</code><br/>
+<em>
+bool
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Preserve pool names as specified</p>
+</td>
+</tr>
+<tr>
+<td>
 <code>preservePoolsOnDelete</code><br/>
 <em>
 bool

@@ -9830,6 +9854,13 @@ If spec.sharedPools are also empty, then RGW pools (spec.dataPool and spec.metad
 </tr>
 </tbody>
 </table>
+<h3 id="ceph.rook.io/v1.ObjectStoreAPI">ObjectStoreAPI
+(<code>string</code> alias)</h3>
+<p>
+(<em>Appears on:</em><a href="#ceph.rook.io/v1.ProtocolSpec">ProtocolSpec</a>)
+</p>
+<div>
+</div>
 <h3 id="ceph.rook.io/v1.ObjectStoreHostingSpec">ObjectStoreHostingSpec
 </h3>
 <p>

@@ -11147,7 +11178,7 @@ This list allows defining additional StorageClasses on top of default STANDARD s
 <h3 id="ceph.rook.io/v1.PoolSpec">PoolSpec
 </h3>
 <p>
-(<em>Appears on:</em><a href="#ceph.rook.io/v1.FilesystemSpec">FilesystemSpec</a>, <a href="#ceph.rook.io/v1.NamedBlockPoolSpec">NamedBlockPoolSpec</a>, <a href="#ceph.rook.io/v1.NamedPoolSpec">NamedPoolSpec</a>, <a href="#ceph.rook.io/v1.ObjectStoreSpec">ObjectStoreSpec</a>, <a href="#ceph.rook.io/v1.ObjectZoneSpec">ObjectZoneSpec</a>)
+(<em>Appears on:</em><a href="#ceph.rook.io/v1.NamedBlockPoolSpec">NamedBlockPoolSpec</a>, <a href="#ceph.rook.io/v1.NamedPoolSpec">NamedPoolSpec</a>, <a href="#ceph.rook.io/v1.ObjectStoreSpec">ObjectStoreSpec</a>, <a href="#ceph.rook.io/v1.ObjectZoneSpec">ObjectZoneSpec</a>)
 </p>
 <div>
 <p>PoolSpec represents the spec of ceph pool</p>

@@ -11398,6 +11429,23 @@ alive or ready to receive traffic.</p>
 <tbody>
 <tr>
 <td>
+<code>enableAPIs</code><br/>
+<em>
+<a href="#ceph.rook.io/v1.ObjectStoreAPI">
+[]ObjectStoreAPI
+</a>
+</em>
+</td>
+<td>
+<em>(Optional)</em>
+<p>Represents RGW &lsquo;rgw_enable_apis&rsquo; config option. See: <a href="https://docs.ceph.com/en/reef/radosgw/config-ref/#confval-rgw_enable_apis">https://docs.ceph.com/en/reef/radosgw/config-ref/#confval-rgw_enable_apis</a>
+If no value provided then all APIs will be enabled: s3, s3website, swift, swift_auth, admin, sts, iam, notifications
+If enabled APIs are set, all remaining APIs will be disabled.
+This option overrides S3.Enabled value.</p>
+</td>
+</tr>
+<tr>
+<td>
 <code>s3</code><br/>
 <em>
 <a href="#ceph.rook.io/v1.S3Spec">

@@ -11901,7 +11949,8 @@
 </td>
 <td>
 <em>(Optional)</em>
-<p>Whether to enable S3. This defaults to true (even if protocols.s3 is not present in the CRD). This maintains backwards compatibility – by default S3 is enabled.</p>
+<p>Deprecated: use protocol.enableAPIs instead.
+Whether to enable S3. This defaults to true (even if protocols.s3 is not present in the CRD). This maintains backwards compatibility – by default S3 is enabled.</p>
 </td>
 </tr>
 <tr>

Documentation/Helm-Charts/operator-chart.md

+1 -1

@@ -60,7 +60,7 @@ The following table lists the configurable parameters of the rook-operator chart
 | `csi.cephFSPluginUpdateStrategy` | CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate | `RollingUpdate` |
 | `csi.cephFSPluginUpdateStrategyMaxUnavailable` | A maxUnavailable parameter of CSI cephFS plugin daemonset update strategy. | `1` |
 | `csi.cephcsi.repository` | Ceph CSI image repository | `"quay.io/cephcsi/cephcsi"` |
-| `csi.cephcsi.tag` | Ceph CSI image tag | `"v3.12.3"` |
+| `csi.cephcsi.tag` | Ceph CSI image tag | `"v3.13.0"` |
 | `csi.cephfsLivenessMetricsPort` | CSI CephFS driver metrics port | `9081` |
 | `csi.cephfsPodLabels` | Labels to add to the CSI CephFS Deployments and DaemonSets Pods | `nil` |
 | `csi.clusterName` | Cluster name identifier to set as metadata on the CephFS subvolume and RBD images. This will be useful in cases like for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster | `nil` |
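For example, the Ceph CSI image can be pinned from a Helm values override (a sketch; it assumes the chart repo was added as `rook-release`):

```yaml
# values.yaml
csi:
  cephcsi:
    repository: quay.io/cephcsi/cephcsi
    tag: v3.13.0
```

Applied with something like `helm upgrade --install rook-ceph rook-release/rook-ceph --namespace rook-ceph -f values.yaml`.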

Documentation/Storage-Configuration/Block-Storage-RBD/block-storage.md

+4 -4

@@ -93,10 +93,10 @@ reclaimPolicy: Delete
 allowVolumeExpansion: true
 ```

-If you've deployed the Rook operator in a namespace other than "rook-ceph",
-change the prefix in the provisioner to match the namespace
-you used. For example, if the Rook operator is running in the namespace "my-namespace" the
-provisioner value should be "my-namespace.rbd.csi.ceph.com".
+If you've deployed the Rook operator in a namespace other than `rook-ceph`,
+change the prefix in the provisioner to match the namespace you used. For
+example, if the Rook operator is running in the namespace `my-namespace` the
+provisioner value should be `my-namespace.rbd.csi.ceph.com`.

 Create the storage class.
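A minimal sketch of that renaming (the pool and clusterID values are illustrative, and the CSI secret parameters shown in the full example above are omitted for brevity):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# Prefix matches the operator namespace, "my-namespace" in this sketch
provisioner: my-namespace.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace of the CephCluster, assumed here to also be "my-namespace"
  clusterID: my-namespace
  pool: replicapool
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
```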

Documentation/Storage-Configuration/Ceph-CSI/ceph-csi-drivers.md

+3 -3

@@ -217,10 +217,10 @@ CSI-Addons supports the following operations:

 Ceph-CSI supports encrypting PersistentVolumeClaims (PVCs) for both RBD and CephFS.
 This can be achieved using LUKS for RBD and fscrypt for CephFS. More details on encrypting RBD PVCs can be found
-[here](https://github.com/ceph/ceph-csi/blob/v3.12.3/docs/deploy-rbd.md#encryption-for-rbd-volumes),
+[here](https://github.com/ceph/ceph-csi/blob/v3.13.0/docs/deploy-rbd.md#encryption-for-rbd-volumes),
 which includes a full list of supported encryption configurations.
-More details on encrypting CephFS PVCs can be found [here](https://github.com/ceph/ceph-csi/blob/v3.12.3/docs/deploy-cephfs.md#cephfs-volume-encryption).
-A sample KMS configmap can be found [here](https://github.com/ceph/ceph-csi/blob/v3.12.3/examples/kms/vault/kms-config.yaml).
+More details on encrypting CephFS PVCs can be found [here](https://github.com/ceph/ceph-csi/blob/v3.13.0/docs/deploy-cephfs.md#cephfs-volume-encryption).
+A sample KMS configmap can be found [here](https://github.com/ceph/ceph-csi/blob/v3.13.0/examples/kms/vault/kms-config.yaml).

 !!! note
     Not all KMS are compatible with fscrypt. Generally, KMS that either store secrets to use directly (like Vault)
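For RBD, encryption is requested through StorageClass parameters; a minimal sketch (it assumes a KMS entry named `my-vault-kms` exists in the ceph-csi KMS ConfigMap and omits the usual CSI secret parameters):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block-encrypted
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  # Provision every volume from this class as a LUKS-encrypted RBD image
  encrypted: "true"
  # Hypothetical ID referencing an entry in the ceph-csi KMS ConfigMap
  encryptionKMSID: my-vault-kms
reclaimPolicy: Delete
```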
