diff --git a/.github/workflows/vale.yml b/.github/workflows/vale.yml
new file mode 100644
index 00000000..05918ece
--- /dev/null
+++ b/.github/workflows/vale.yml
@@ -0,0 +1,14 @@
+---
+name: Linting
+on: [push]
+
+jobs:
+ style:
+ runs-on: ubuntu-latest
+ steps:
+ - name: Checkout
+ uses: actions/checkout@master
+ - name: Style
+ uses: xcp-ng/vale-styles@v0.1
+ with:
+ files: docs/*
diff --git a/.vale.ini b/.vale.ini
new file mode 100644
index 00000000..44272be8
--- /dev/null
+++ b/.vale.ini
@@ -0,0 +1,7 @@
+StylesPath = .github/styles/
+MinAlertLevel = suggestion
+
+# Only Markdown files.
+[*.{md}]
+# List of styles to load.
+BasedOnStyles = gitlab, vates
diff --git a/docs/README.md b/docs/README.md
index 59b8fd09..ac4bd8bd 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -32,5 +32,5 @@ About the project itself, please see the [project page](project.md).
:::tip
-No flamewar! There are no miracle solutions, but only solutions adapted to your usage. We truly respect all other virtualization platforms!
+No flame war! There are no miracle solutions, but only solutions adapted to your usage. We truly respect all other virtualization platforms!
:::
diff --git a/docs/answerfile.md b/docs/answerfile.md
index cb87a957..bce6268f 100644
--- a/docs/answerfile.md
+++ b/docs/answerfile.md
@@ -89,7 +89,7 @@ Where type is one of:
`source` defines the location of the installation repository or a Supplemental Pack. There may be multiple 'source' elements.
-`driver-source` defines the source of a Supplemental Pack containing device drivers to be loaded by the installer and included after installation of the main repository. It can be
+`driver-source` defines the source of a Supplemental Pack containing device drivers to be loaded by the installer and included after installation of the main repository.
Repository formats:
@@ -328,4 +328,4 @@ Your answer file can also be used to upgrade your machines. Here is an example:
```
-As you can see, `mode` is set on `upgrade`. Be sure to target the right disk to search for previous existing installations (here `sda`). Do NOT specify `primary-disk` and `guest-disk`!
\ No newline at end of file
+As you can see, `mode` is set on `upgrade`. Be sure to target the right disk to search for previous existing installations (here `sda`). Do NOT specify `primary-disk` and `guest-disk`!
diff --git a/docs/architecture.md b/docs/architecture.md
index 8e6f3e27..ad3fd231 100644
--- a/docs/architecture.md
+++ b/docs/architecture.md
@@ -234,7 +234,7 @@ References and interesting links:
### Xen Grant table
-The grant table is a mechanism to share memory between domains: it's essentially used in this part to share data between a PV driver of a `DomU` and the `Dom0`. Each domain has its own grant table and it can give an access to its memory pages to another domain using Write/Read permissions. Each entry of the table are identified by a `grant reference`, it's a simple integer which indexes into the grant table.
+The grant table is a mechanism to share memory between domains: it's essentially used in this part to share data between a PV driver of a `DomU` and the `Dom0`. Each domain has its own grant table and it can give access to its memory pages to another domain using Write/Read permissions. Each entry of the table is identified by a `grant reference`, a simple integer which indexes into the grant table.
Normally the grant table is used in kernel space, but there is a `/dev/xen/gntdev` device used to map granted pages in user space. It's useful to implement Xen backends in userspace for qemu and tapdisk: we can write and read in the blkif ring with this helper.
diff --git a/docs/cli_reference.md b/docs/cli_reference.md
index e07838f2..752e7ac2 100644
--- a/docs/cli_reference.md
+++ b/docs/cli_reference.md
@@ -1,6 +1,6 @@
# xe CLI reference
-The xe CLI can be used locally on any XCP-ng host, it's installed along with it. However, it's poolwide only. If you want a CLI or an API to control multiple pools at once, we strongly advise to use [Xen Orchestra CLI](https://xen-orchestra.com/docs/architecture.html#xo-cli-cli).
+The xe CLI can be used locally on any XCP-ng host, it's installed along with it. However, it's pool-wide only. If you want a CLI or an API to control multiple pools at once, we strongly advise to use [Xen Orchestra CLI](https://xen-orchestra.com/docs/architecture.html#xo-cli-cli).
## Getting help with xe commands
@@ -71,11 +71,11 @@ xe vm-list -user username -password password -server hostname
Shorthand syntax is also available for remote connection arguments:
-* -u user name
-* -pw password
-* -pwf password file
-* -p port
-* -s server
+* `-u user name`
+* `-pw password`
+* `-pwf password file`
+* `-p port`
+* `-s server`
Example: On a remote XCP-ng server:
@@ -113,61 +113,61 @@ The CLI commands can be split in two halves. Low-level commands are concerned wi
The low-level commands are:
-* class-list
+* `class-list`
-* class-param-get
+* `class-param-get`
-* class-param-set
+* `class-param-set`
-* class-param-list
+* `class-param-list`
-* class-param-add
+* `class-param-add`
-* class-param-remove
+* `class-param-remove`
-* class-param-clear
+* `class-param-clear`
Where class is one of:
-* bond
+* `bond`
-* console
+* `console`
-* host
+* `host`
-* host-crashdump
+* `host-crashdump`
-* host-cpu
+* `host-cpu`
-* network
+* `network`
-* patch
+* `patch`
-* pbd
+* `pbd`
-* pif
+* `pif`
-* pool
+* `pool`
-* sm
+* `sm`
-* sr
+* `sr`
-* task
+* `task`
-* template
+* `template`
-* vbd
+* `vbd`
-* vdi
+* `vdi`
-* vif
+* `vif`
-* vlan
+* `vlan`
-* vm
+* `vm`
-Not every value of class has the full set of class-param-action commands. Some values of class have a smaller set of commands.
+Not every value of `class` has the full set of `class-param-action` commands. Some values of `class` have a smaller set of commands.
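+For instance, taking `vm` as the class, the pattern expands to commands such as the following (the UUID is a placeholder):
+
+```
+# List all VM objects
+xe vm-list
+# List every parameter of one VM (low-level command, a UUID is required)
+xe vm-param-list uuid=<vm-uuid>
+```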
## Parameter types
@@ -204,7 +204,7 @@ In previous releases, the hyphen character (-) was used to specify map parameter
## Low-level parameter commands
-There are several commands for operating on parameters of objects: class-param-get, class-param-set, class-param-add, class-param-remove, class-param-clear, and class-param-list. Each of these commands takes a uuid parameter to specify the particular object. Since these commands are considered low-level commands, they must use the `UUID` and not the VM name label.
+There are several commands for operating on parameters of objects: `class-param-get`, `class-param-set`, `class-param-add`, `class-param-remove`, `class-param-clear`, and `class-param-list`. Each of these commands takes a `uuid` parameter to specify the particular object. Since these commands are considered low-level commands, they must use the `UUID` and not the VM name label.
* `class-param-list uuid=uuid`
@@ -212,7 +212,7 @@ Lists all of the parameters and their associated values. Unlike the class-list c
* `class-param-get uuid=uuid param-name=parameter param-key=key`
-Returns the value of a particular parameter. For a map parameter, specifying the param-key gets the value associated with that key in the map. If param-key is not specified or if the parameter is a set, the command returns a string representation of the set or map.
+Returns the value of a particular parameter. For a map parameter, specifying the `param-key` gets the value associated with that key in the map. If `param-key` is not specified or if the parameter is a set, the command returns a string representation of the set or map.
* `class-param-set uuid=uuid param=value`
@@ -220,7 +220,7 @@ Sets the value of one or more parameters.
* `class-param-add uuid=uuid param-name=parameter key=value param-key=key`
-Adds to either a map or a set parameter. For a map parameter, add key/value pairs by using the key=value syntax. If the parameter is a set, add keys with the param-key=key syntax.
+Adds to either a map or a set parameter. For a map parameter, add key/value pairs by using the `key=value` syntax. If the parameter is a set, add keys with the `param-key=key` syntax.
* `class-param-remove uuid=uuid param-name=parameter param-key=key`
@@ -237,7 +237,7 @@ The class-list command lists the objects of type class. By default, this type of
* It can filter the objects so that it only outputs a subset
* The parameters that are printed can be modified.
-To change the parameters that are printed, specify the argument params as a comma-separated list of the required parameters. For example:
+To change the parameters that are printed, specify the argument `params` as a comma-separated list of the required parameters. For example:
```
xe vm-list params=name-label,other-config
@@ -255,7 +255,7 @@ The list command doesn’t show some parameters that are expensive to calculate.
allowed-VBD-devices (SRO):
```
-To obtain these fields, use either the command class-param-list or class-param-get
+To obtain these fields, use either the command `class-param-list` or `class-param-get`.
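+For example, a sketch fetching one of these expensive fields for a VM (the UUID is a placeholder):
+
+```
+xe vm-param-get uuid=<vm-uuid> param-name=allowed-VBD-devices
+```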
To filter the list, the CLI matches parameter values with those values specified on the command-line, only printing objects that match all of the specified constraints. For example:
@@ -314,7 +314,7 @@ Appliance commands have the following parameters:
|Parameter Name|Description|Type|
|:-------------|:----------|:---|
-|`uuid`|The appliance uuid|Required|
+|`uuid`|The appliance UUID|Required|
|`name-description`|The appliance description|Optional|
|`paused`| |Optional|
|`force`|Force shutdown|Optional|
@@ -479,7 +479,7 @@ CDs have the following parameters:
|`sr-uuid`|The unique identifier/object reference for the SR this CD is part of|Read only|
|`sr-name-label`|The name for the SR this CD is part of|Read only|
|`vbd-uuids`|A list of the unique identifiers for the VBDs on VMs that connect to this CD|Read only set parameter|
-|`crashdump-uuids`|Not used on CDs. Because crashdumps cannot be written to CDs|Read only set parameter|
+|`crashdump-uuids`|Not used on CDs. Because crash dumps cannot be written to CDs|Read only set parameter|
|`virtual-size`|Size of the CD as it appears to VMs (in bytes)|Read only|
|`physical-utilisation`|Amount of physical space that the CD image takes up on the SR (in bytes)|Read only|
|`type`|Set to User for CDs|Read only|
@@ -492,7 +492,7 @@ CDs have the following parameters:
|`location`|The path on which the device is mounted|Read only|
|`managed`|Value is `true` if the device is managed|Read only|
|`xenstore-data`|Data to be inserted into the `xenstore` tree|Read only map parameter|
-|`sm-config`|Names and descriptions of storage manager device config keys|Read only map parameter|
+|`sm-config`|Names and descriptions of storage manager device configuration keys|Read only map parameter|
|`is-a-snapshot`|Value is `true` if this template is a CD snapshot|Read only|
|`snapshot_of`|The UUID of the CD that this template is a snapshot of|Read only|
|`snapshots`|The UUIDs of any snapshots that have been taken of this CD|Read only|
@@ -506,7 +506,7 @@ cd-list [params=param1,param2,...] [parameter=parameter_value]
List the CDs and ISOs (CD image files) on the XCP-ng server or pool, filtering on the optional argument `params`.
-If the optional argument `params` is used, the value of params is a string containing a list of parameters of this object that you want to display. Alternatively, you can use the keyword `all` to show all parameters. When `params` is not used, the returned list shows a default subset of all available parameters.
+If the optional argument `params` is used, the value of `params` is a string containing a list of parameters of this object that you want to display. Alternatively, you can use the keyword `all` to show all parameters. When `params` is not used, the returned list shows a default subset of all available parameters.
Optional arguments can be any number of the [CD parameters](#cd-parameters) listed at the beginning of this section.
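+For example, to show only the name and UUID of each CD (a minimal sketch):
+
+```
+xe cd-list params=name-label,uuid
+```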
@@ -780,7 +780,7 @@ xe event-wait class=vm uuid=$VM start-time=/=$(xe vm-list uuid=$VM params=start-
Blocks other commands until a VM with UUID *\$VM* reboots. The command uses the value of `start-time` to decide when the VM reboots.
-The class name can be any of the [event classes](#event-classes) listed at the beginning of this section. The parameters can be any of the parameters listed in the CLI command *class*-param-list.
+The class name can be any of the [event classes](#event-classes) listed at the beginning of this section. The parameters can be any of the parameters listed in the CLI command `${class}-param-list`.
### GPU commands
@@ -1019,8 +1019,8 @@ Crash dumps on XCP-ng servers have the following parameters:
|Parameter Name|Description|Type|
|:-------------|:----------|:---|
-|`uuid`|The unique identifier/object reference for the crashdump|Read only|
-|`host`|XCP-ng server the crashdump corresponds to|Read only|
+|`uuid`|The unique identifier/object reference for the crash dump|Read only|
+|`host`|XCP-ng server the crash dump corresponds to|Read only|
|`timestamp`|Timestamp of the date and time that the crashdump occurred, in the form `yyyymmdd-hhmmss-ABC`, where *ABC* is the timezone indicator, for example, GMT|Read only|
|`size`|Size of the crashdump, in bytes|Read only|
@@ -1120,7 +1120,7 @@ Upload a crashdump to the Support FTP site or other location. If optional parame
host-declare-dead uuid=host_uuid
```
-Declare that the the host is dead without contacting it explicitly.
+Declare that the host is dead without contacting it explicitly.
:::warning
This call is dangerous and can cause data loss if the host is not actually dead.
@@ -1454,7 +1454,7 @@ Change the host name of the XCP-ng server specified by `host-uuid`. This command
host-set-power-on-mode host=host_uuid power-on-mode={"" | "wake-on-lan" | "iLO" | "DRAC" | "custom"} [ power-on-config:power_on_ip=ip-address power-on-config:power_on_user=user power-on-config:power_on_password_secret=secret-uuid ]
```
-Use to enable the *Host Power On* function on XCP-ng hosts that are compatible with remote power solutions. When using the `host-set-power-on` command, you must specify the type of power management solution on the host (that is, the power-on-mode). Then specify configuration options using the power-on-config argument and its associated key-value pairs.
+Use to enable the *Host Power On* function on XCP-ng hosts that are compatible with remote power solutions. When using the `host-set-power-on` command, you must specify the type of power management solution on the host (that is, the `power-on-mode`). Then specify configuration options using the `power-on-config` argument and its associated key-value pairs.
To use the secrets feature to store your password, specify the key `"power_on_password_secret"`. For more information, see [Secrets](#secrets).
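+As a sketch, enabling Wake-on-LAN on a host, following the syntax above (the UUID is a placeholder):
+
+```
+xe host-set-power-on-mode host=<host-uuid> power-on-mode=wake-on-lan
+```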
@@ -1589,7 +1589,7 @@ Reopen all loggers (use this for rotating files).
log-set-output output=output [key=key] [level=level]
```
-Set all loggers to the specified output (nil, stderr, string, file:*file name*, syslog:*something*).
+Set all loggers to the specified output (`nil`, `stderr`, `string`, `file:*file name*`, `syslog:*something*`).
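+For example, following the syntax above, a sketch routing all logger output to a file (the path is just an illustration):
+
+```
+xe log-set-output output=file:/var/log/xapi-debug.log
+```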
### Message commands
@@ -1605,7 +1605,7 @@ The message objects can be listed with the standard object listing command (`xe
|`name`|The unique name of the message|Read only|
|`priority`|The message priority. Higher numbers indicate greater priority|Read only|
|`class`|The message class, for example VM.|Read only|
-|`obj-uuid`|The uuid of the affected object.|Read only|
+|`obj-uuid`|The UUID of the affected object.|Read only|
|`timestamp`|The time that the message was generated.|Read only|
|`body`|The message content.|Read only|
@@ -1841,7 +1841,7 @@ PBDs have the following parameters:
|`device-config`|Extra configuration information that is provided to the SR-backend-driver of a host|Read only map parameter|
|`currently-attached`|True if the SR is attached on this host, False otherwise|Read only|
|`host-uuid`|UUID of the physical machine on which the PBD is available|Read only|
-|`host`|The host field is deprecated. Use host\_uuid instead.|Read only|
+|`host`|The host field is deprecated. Use `host_uuid` instead.|Read only|
|`other-config`|Extra configuration information.|Read/write map parameter|
#### `pbd-create`
@@ -1852,7 +1852,7 @@ pbd-create host-uuid=uuid_of_host sr-uuid=uuid_of_sr [device-config:key=correspo
Create a PBD on your XCP-ng server. The read-only `device-config` parameter can only be set on creation.
-To add a mapping from ‘path’ to ‘/tmp’, the command line should contain the argument `device-config:path=/tmp`
+To add a mapping from `path` to `/tmp`, the command line should contain the argument `device-config:path=/tmp`
For a full list of supported device-config key/value pairs on each SR type, see [Storage](./storage.md).
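+Putting this together, a sketch creating such a PBD (UUIDs are placeholders):
+
+```
+xe pbd-create host-uuid=<host-uuid> sr-uuid=<sr-uuid> device-config:path=/tmp
+```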
@@ -2408,7 +2408,7 @@ Forget a PVS server.
#### `pvs-server-introduce`
```
-pvs-server-introduce addresses=adresses first-port=first_port last-port=last_port pvs-site-uuid=pvs_site_uuid
+pvs-server-introduce addresses=addresses first-port=first_port last-port=last_port pvs-site-uuid=pvs_site_uuid
```
Introduce new PVS server.
@@ -2510,7 +2510,7 @@ Force the VM power state to halted in the management toolstack database only. Th
snapshot-revert [uuid=uuid] [snapshot-uuid=snapshot_uuid]
```
-Revert an existing VM to a previous checkpointed or snapshot state.
+Revert an existing VM to a previous checkpoint or snapshot state.
#### `snapshot-uninstall`
@@ -2542,8 +2542,8 @@ SRs have the following parameters:
|`physical-utilisation`|Physical space currently utilized on this SR, in bytes. For thin provisioned disk formats, physical utilization may be less than virtual allocation|Read only|
|`physical-size`|Total physical size of the SR, in bytes|Read only|
|`type`|Type of the SR, used to specify the SR back-end driver to use|Read only|
-|`introduced-by`|The drtask (if any) which introduced the SR|Read only|
-|`content-type`|The type of the SR’s content. Used to distinguish ISO libraries from other SRs. For storage repositories that store a library of ISOs, the content-type must be set to iso. In other cases, we recommend that you set this parameter either to empty, or the string user.|Read only|
+|`introduced-by`|The disaster recovery task (if any) which introduced the SR|Read only|
+|`content-type`|The type of the SR’s content. Used to distinguish ISO libraries from other SRs. For storage repositories that store a library of ISOs, the `content-type` must be set to `iso`. In other cases, we recommend that you set this parameter either to empty, or the string `user`.|Read only|
|`shared`|True if this SR can be shared between multiple XCP-ng servers; False otherwise|Read/write|
|`other-config`|List of key/value pairs that specify extra configuration parameters for the SR|Read/write map parameter|
|`host`|The storage repository host name|Read only|
@@ -2657,7 +2657,7 @@ The exact `device-config` parameters differ depending on the device `type`. For
sr-probe-ext type=type [host-uuid=host_uuid] [device-config:=config] [sm-config:-sm_config]
```
-Perform a storage probe. The device-config parameters can be specified by for example device-config:devs=/dev/sdb1. Unlike sr-probe, this command returns results in the same human-readable format for every SR type.
+Perform a storage probe. The `device-config` parameters can be specified as, for example, `device-config:devs=/dev/sdb1`. Unlike `sr-probe`, this command returns results in the same human-readable format for every SR type.
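+For instance, a sketch probing a local device (the `lvm` SR type here is an assumption):
+
+```
+xe sr-probe-ext type=lvm device-config:devs=/dev/sdb1
+```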
#### `sr-scan`
@@ -2831,7 +2831,7 @@ xe template-param-set uuid= vCPUs-params:mask=1,2,3
```
A VM created from this template runs on physical CPUs 1, 2, and 3 only.
-You can also tune the vCPU priority (xen scheduling) with the cap and weight parameters. For example:
+You can also tune the vCPU priority (Xen scheduling) with the `cap` and `weight` parameters. For example:
```
xe template-param-set uuid= VCPUs-params:weight=512 xe template-param-set uuid= VCPUs-params:cap=100
@@ -3148,13 +3148,13 @@ VDIs have the following parameters:
|`sr-name-label`|Name of the containing storage repository|Read only|
|`location`|Location information|Read only|
|`managed`|True if the VDI is managed|Read only|
-|`xenstore-data`|Data to be inserted into the xenstore tree (/local/domain/0/backend/ vbd/*domid*/*device-id*/smdata) after the VDI is attached. The SM back-ends usually set this field on `vdi_attach`.|Read only map parameter|
+|`xenstore-data`|Data to be inserted into the `xenstore` tree (`/local/domain/0/backend/vbd/*domid*/*device-id*/smdata`) after the VDI is attached. The SM back-ends usually set this field on `vdi_attach`.|Read only map parameter|
|`sm-config`|SM dependent data|Read only map parameter|
|`is-a-snapshot`|True if this VDI is a VM storage snapshot|Read only|
|`snapshot_of`|The UUID of the storage this VDI is a snapshot of|Read only|
|`snapshots`|The UUIDs of all snapshots of this VDI|Read only|
|`snapshot_time`|The timestamp of the snapshot operation that created this VDI|Read only|
-|`metadata-of-pool`|The uuid of the pool which created this metadata VDI|Read only|
+|`metadata-of-pool`|The UUID of the pool which created this metadata VDI|Read only|
|`metadata-latest`|Flag indicating whether the VDI contains the latest known metadata for this pool|Read only|
|`cbt-enabled`|Flag indicating whether changed block tracking is enabled for the VDI|Read/write|
@@ -3499,7 +3499,7 @@ Commands for controlling VMs and their attributes.
#### VM selectors
-Several of the commands listed here have a common mechanism for selecting one or more VMs on which to perform the operation. The simplest way is by supplying the argument `vm=name_or_uuid`. An easy way to get the uuid of an actual VM is to, for example, execute `xe vm-list power-state=running`. (Get the full list of fields that can be matched by using the command `xe vm-list params=all`. ) For example, specifying `power-state=halted` selects VMs whose `power-state` parameter is equal to `halted`. Where multiple VMs are matching, specify the option `--multiple` to perform the operation. The full list of parameters that can be matched is described at the beginning of this section.
+Several of the commands listed here have a common mechanism for selecting one or more VMs on which to perform the operation. The simplest way is by supplying the argument `vm=name_or_uuid`. An easy way to get the `uuid` of an actual VM is to, for example, execute `xe vm-list power-state=running`. (Get the full list of fields that can be matched by using the command `xe vm-list params=all`.) For example, specifying `power-state=halted` selects VMs whose `power-state` parameter is equal to `halted`. Where multiple VMs match, specify the option `--multiple` to perform the operation. The full list of parameters that can be matched is described at the beginning of this section.
The VM objects can be listed with the standard object listing command (`xe vm-list`), and the parameters manipulated with the standard parameter commands. For more information, see [Low-level parameter commands](#low-level-parameter-commands)
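+As an illustration of the selector mechanism described above (a sketch):
+
+```
+# Select every halted VM and start them all
+xe vm-start power-state=halted --multiple
+```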
@@ -3536,7 +3536,7 @@ You can tune a vCPU’s pinning with
xe vm-param-set uuid= VCPUs-params:mask=1,2,3
```
-The selected VM then runs on physical CPUs 1, 2, and 3 only. You can also tune the vCPU priority (xen scheduling) with the cap and weight parameters. For example:
+The selected VM then runs on physical CPUs 1, 2, and 3 only. You can also tune the vCPU priority (Xen scheduling) with the `cap` and `weight` parameters. For example:
```
xe vm-param-set uuid= VCPUs-params:weight=512 xe vm-param-set uuid= VCPUs-params:cap=100
@@ -3608,7 +3608,7 @@ xe vm-param-get uuid= param-name=platform param-key=acpi_laptop_slate
- `possible-hosts` potential hosts of this VM read only
- `dom-id` (read only) domain ID (if available, -1 otherwise)
- `recommendations` (read only) XML specification of recommended values and ranges for properties of this VM
-- `xenstore-data` (read/write map parameter) data to be inserted into the xenstore tree (/local/domain/*domid*/vm-data) after the VM is created
+- `xenstore-data` (read/write map parameter) data to be inserted into the `xenstore` tree (`/local/domain/*domid*/vm-data`) after the VM is created
- `is-a-snapshot` (read only) True if this VM is a snapshot
- `snapshot_of` (read only) the UUID of the VM that this snapshot is of
- `snapshots` (read only) the UUIDs of all snapshots of this VM
@@ -3635,7 +3635,7 @@ Tests whether storage is available to recover this VM.
vm-call-plugin vm-uuid=vm_uuid plugin=plugin fn=function [args:key=value]
```
-Calls the function within the plug-in on the given VM with optional arguments (args:key=value). To pass a "value" string with special characters in it (for example new line), an alternative syntax args:key:file=local\_file can be used in place, where the content of local\_file will be retrieved and assigned to "key" as a whole.
+Calls the function within the plug-in on the given VM with optional arguments (`args:key=value`). To pass a `value` string with special characters in it (for example, a new line), an alternative syntax `args:key:file=local_file` can be used in place, where the content of `local_file` will be retrieved and assigned to `key` as a whole.
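+A hedged sketch, with a hypothetical plugin name and function, passing a multi-line value read from a local file:
+
+```
+xe vm-call-plugin vm-uuid=<vm-uuid> plugin=my-plugin fn=my_function args:key:file=/tmp/value.txt
+```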
#### `vm-cd-add`
@@ -3763,7 +3763,7 @@ vm-crashdump-list [vm-selector=vm selector value...]
List crashdumps associated with the specified VMs.
-When you use the optional argument `params`, the value of params is a string containing a list of parameters of this object that you want to display. Alternatively, you can use the keyword `all` to show all parameters. If `params` is not used, the returned list shows a default subset of all available parameters.
+When you use the optional argument `params`, the value of `params` is a string containing a list of parameters of this object that you want to display. Alternatively, you can use the keyword `all` to show all parameters. If `params` is not used, the returned list shows a default subset of all available parameters.
The VM or VMs on which this operation is performed are selected using the standard selection mechanism. For more information, see [VM selectors](#vm-selectors). Optional arguments can be any number of the [VM parameters](#vm-parameters) listed at the beginning of this section.
@@ -4170,7 +4170,7 @@ We advise to use Xen Orchestra instead of this method. See [Xen Orchestra rollin
Commands for controlling VM scheduled snapshots and their attributes.
-The vmss objects can be listed with the standard object listing command (`xe vmss-list`), and the parameters manipulated with the standard parameter commands. For more information, see [Low-level parameter commands](#low-level-parameter-commands)
+The `vmss` objects can be listed with the standard object listing command (`xe vmss-list`), and the parameters manipulated with the standard parameter commands. For more information, see [Low-level parameter commands](#low-level-parameter-commands)
#### `vmss-create`
diff --git a/docs/cloud.md b/docs/cloud.md
index 71adbbb6..eda92320 100644
--- a/docs/cloud.md
+++ b/docs/cloud.md
@@ -23,23 +23,23 @@ The self-service feature allows users to create new VMs within a **limited amoun
## CloudStack
-At the outset this writeup is an outcome of this XCP-ng forum [discussion](https://xcp-ng.org/forum/topic/1109/xcp-ng-issues-with-cloudstack-4-11-2-with-iscsi-sr/10). Basically, setting up XCP-Ng using XCP-ng Center is very straightforward but to overlay Cloudstack and get them all to work in unison is the tricky part. To provide more background , consider a 2 node XCP-ng 7.6.0 pool setup with iSCSI target running on a different host with all necessary traffic segregation principles applied (guest, storage and management).
+At the outset this writeup is an outcome of this XCP-ng forum [discussion](https://xcp-ng.org/forum/topic/1109/xcp-ng-issues-with-cloudstack-4-11-2-with-iscsi-sr/10). Basically, setting up XCP-ng using XCP-ng Center is very straightforward, but to overlay CloudStack and get them all to work in unison is the tricky part. To provide more background, consider a 2-node XCP-ng 7.6.0 pool setup with an iSCSI target running on a different host with all necessary traffic segregation principles applied (guest, storage and management).
### Installation Steps (with tips and tricks)
-1. Follow along the Cloudstack Management Server installation [steps](http://docs.cloudstack.apache.org/en/4.11.2.0/installguide/hypervisor/xenserver.html#system-requirements-for-xenserver-hosts).
+1. Follow along the CloudStack Management Server installation [steps](http://docs.cloudstack.apache.org/en/4.11.2.0/installguide/hypervisor/xenserver.html#system-requirements-for-xenserver-hosts).
-> Tip #1: if you need iSCSI , when you login to Cloudstack Management UI avoid the "Basic setup" and choose the option "I have used CloudStack before" (the button that is less obvious) since with basic for some reasons forces you into NFS.
-But don't proceed with configuring your Cloudstack Management Server just yet.
+> Tip #1: if you need iSCSI, when you log in to the CloudStack Management UI, avoid the "Basic setup" and choose the option "I have used CloudStack before" (the button that is less obvious), since the basic setup forces you into NFS.
+But don't proceed with configuring your CloudStack Management Server just yet.
-2. If you have not setup your iSCSI storage on the XCP-ng pool . Please proceed to do it and ensure that they list in XCP-ng Center or Xen Orchestra and ensure everything looks good. So as listed in the Cloudstack Installation guide, we will be using the "Presetup" option to setup Primary ISCSI storage on Cloudstack.
+2. If you have not yet set up your iSCSI storage on the XCP-ng pool, please proceed to do it and ensure that the SRs are listed in XCP-ng Center or Xen Orchestra and that everything looks good. As listed in the CloudStack installation guide, we will be using the "Presetup" option to set up primary iSCSI storage on CloudStack.
3. Now SSH into the CloudStack Management host and go to the folder (`/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/`). This folder contains several useful scripts that come in handy; pick the one named `setup_heartbeat_sr.sh`, copy it over to the Xen pool master host, and ensure you have executable rights on the script file.
4. Before you run the script, run the `lvscan` command, which scans for the presence of LVM on your pool and produces some output. If you had made any undesirable edits to the `/etc/lvm/lvm.conf` file, this step will most likely fail. If it does, make sure you restore `lvm.conf` to its original state.
-5. Now execute the `setup_heartbeat_sr.sh` with the UUID of the iSCSI SR that you had setup in Step #2. Internally does lvcreate with a bunch of params . Which basically creates a hb-volume (heartbeat volume) for the SR.
+5. Now execute `setup_heartbeat_sr.sh` with the UUID of the iSCSI SR that you set up in step 2, as sketched below. Internally it runs `lvcreate` with a bunch of parameters, which basically creates an hb-volume (heartbeat volume) for the SR.
6. After this succeeds, proceed to setting up your CloudStack Management host and your infrastructure; at the point of adding your primary storage, use the "Presetup" option. You'll see that it works without issues.
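+A hedged sketch of step 5 above, run on the pool master (the SR UUID is a placeholder; the exact invocation may differ):
+
+```
+chmod +x setup_heartbeat_sr.sh
+./setup_heartbeat_sr.sh <iscsi-sr-uuid>
+```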
-> Note: At the time of writing this page: Cloudstack 4.12 and XCP-Ng 7.6.0 where the latest versions of the respective software.
+> Note: At the time of writing this page, CloudStack 4.12 and XCP-ng 7.6.0 were the latest versions of the respective software.
diff --git a/docs/compute.md b/docs/compute.md
index af1c0fc7..a4be9572 100644
--- a/docs/compute.md
+++ b/docs/compute.md
@@ -37,7 +37,7 @@ and an error in `/var/log/xen/hypervisor.log`
[2020-08-22 10:09:03] (XEN) [ 297.542136] d[IO]: assign (0000:08:00.0) failed (-1)
```
-This indicates that your device is using [RMRR](https://access.redhat.com/sites/default/files/attachments/rmrr-wp1.pdf). Intel [IOMMU does not allow DMA to these devices](https://www.kernel.org/doc/Documentation/Intel-IOMMU.txt) and therefore PCI passthrough is not supported.
+This indicates that your device is using [RMRR](https://access.redhat.com/sites/default/files/attachments/rmrr-wp1.pdf). Intel [IOMMU does not allow DMA to these devices](https://www.kernel.org/doc/Documentation/Intel-IOMMU.txt) and therefore PCI passthrough is not supported.
### 1. Find your device's ID ([B/D/F](https://en.wikipedia.org/wiki/PCI_configuration_space#BDF)) on the PCI bus using one of the following methods:
@@ -182,7 +182,7 @@ Due to a proprietary piece of code in XenServer, XCP-ng doesn't have (yet) suppo
### MxGPU (AMD vGPU)
-AMD GPU are trivial using industry standard.
+AMD GPUs are trivial to set up, as MxGPU relies on the industry-standard SR-IOV mechanism.
Version 2.0 of the mxgpu iso should work on any 8.X version of XCP-ng
1. Enable SR-IOV in the server's BIOS
@@ -196,7 +196,7 @@ Version 2.0 of the mxgpu iso should work on any 8.X version of XCP-ng
`xe-install-supplemental-pack mxgpu-2.0.0.amd.iso`
6. Reboot the XCP-ng
-7. Assign an MxGPU to the VM from the VM properties page. Go to the GPU section. From the Drop down choose how big of a slice of the GPU you want on the VM and click OK
+7. Assign an MxGPU to the VM from the VM properties page. Go to the GPU section. From the drop-down, choose how big a slice of the GPU you want for the VM and click OK.
Start the VM and log into the guest OS and load the appropriate guest driver from AMD's Drivers & Support page.
diff --git a/docs/develprocess.md b/docs/develprocess.md
index 58afcc4f..bcbe23ca 100644
--- a/docs/develprocess.md
+++ b/docs/develprocess.md
@@ -7,9 +7,9 @@ In this document, we will try to give you an overview of the development process
XCP-ng is a collection of components, that put together create a complete turnkey virtualization solution that you can install to bare-metal servers. Those components are packaged in the [RPM](http://rpm.org) format.
As usual in the Free Software world, we stand *on the shoulders of giants*:
-* **CentOS**: many RPM packages come from the [CentOS](https://www.centos.org/) Linux distribution, which in turn is based on Red Hat's [RHEL](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux), mostly based itself on the work of the [Fedora](https://getfedora.org/) project, all based on the work of all the developers who wrote the [FLOSS](https://en.wikipedia.org/wiki/Free/Libre_Open_Source_Software) software that is packaged in those Linux distributions. Examples: glibc, GNU coreutils, openssh, crontabs, iptables, openssl and many, many more.
+* **CentOS**: many RPM packages come from the [CentOS](https://www.centos.org/) Linux distribution, which in turn is based on Red Hat's [RHEL](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux), mostly based itself on the work of the [Fedora](https://getfedora.org/) project, all based on the work of all the developers who wrote the [FLOSS](https://en.wikipedia.org/wiki/Free/Libre_Open_Source_Software) software that is packaged in those Linux distributions. Examples: `glibc`, `GNU coreutils`, `openssh`, `crontabs`, `iptables`, `openssl` and many, many more.
* **EPEL**: a few packages come from [EPEL](https://fedoraproject.org/wiki/EPEL).
-* **XenServer**: most packages that make XCP-ng what it is have been rebuilt from source RPMs released by the [XenServer](https://xenserver.org/) project, with or without modifications. This includes xen, a patched Linux kernel, the Xen API, and many others. This also includes redistributable drivers or tools from third party vendors.
+* **XenServer**: most packages that make XCP-ng what it is have been rebuilt from source RPMs released by the [XenServer](https://xenserver.org/) project, with or without modifications. This includes Xen, a patched Linux kernel, the Xen API, and many others. This also includes redistributable drivers or tools from third party vendors.
* **XCP-ng**: the remaining packages are additions (or replacements of closed-source components) to the original XenServer distribution.
## Release process overview
@@ -81,7 +81,7 @@ Here are the usual steps. We will expand on them afterwards:
* **Develop**: happens on a software git repository as in any software project. Example: . Skip if we are not the upstream developer for that software and are not contributing to it yet.
* **Release**: decide that your software is good to be released as part of XCP-ng, either as an update to an existing release of XCP-ng or in the next release. Create a tag in the git repository. Example: . Skip if we are not the upstream developer for that software.
* **Packaging**
- * **Create or update RPM specs** and commit them to appropriate repository in the ['xcp-ng-rpms' github organization](https://github.com/xcp-ng-rpms/). Example: .
+ * **Create or update RPM specs** and commit them to the appropriate repository in the [xcp-ng-rpms GitHub organization](https://github.com/xcp-ng-rpms/). Example: .
* **Add or update patches** to be applied above the upstream source tarball to that same repository.
* **Submit build** to the build system ([koji](https://koji.xcp-ng.org/)).
* **Publish the build** to the appropriate RPM repository (`testing` for stable releases, `base` for development release of XCP-ng)
@@ -262,7 +262,7 @@ V8.x (packages)
v8.x-updates (builds)
v8.x-testing (builds)
```
-* `V8.x` is associated to all the packages used in XCP-ng 8.x, either as installed packages on servers or as build dependencies in Koji. Notice the capslock V which is a convention I'll try to follow to identify tags that are specifically associated to *packages*, not *builds*.
+* `V8.x` is associated to all the packages used in XCP-ng 8.x, either as installed packages on servers or as build dependencies in Koji. Notice the capitalized "V" which is a convention I'll try to follow to identify tags that are specifically associated to *packages*, not *builds*.
* `v8.x-base` inherits `V8.x` so we were able to associate it to all the builds in base XCP-ng 8.x. The `base` RPM repository for 8.x is exported from this tag.
* `v8.x-updates` inherits `v8.x-base` which means it contains all builds from `v8.x-base` plus builds specifically tagged `v8.x-updates`. Those are exported to the `updates` RPM repository for 8.x.
* `v8.x-testing` inherits `v8.x-updates` so it contains all builds from `v8.x-base`, all builds from `v8.x-updates` and builds specifically tagged `v8.x-testing`. Why? As we will see below with build targets, this allows any released update to be taken into account when pulling dependencies for building packages in `v8.x-testing`. Builds specifically tagged `v8.x-testing` are exported to the `testing` RPM repository for 8.x.
@@ -415,7 +415,7 @@ You probably want to bring modifications to the RPM definitions before you rebui
* From within the container:
* Enter the directory containing the sources for the RPM you had cloned earlier from . Example: `cd /data/git/xen`.
* Install the build dependencies in the container: `sudo yum-builddep SPECS/*.spec -y`.
- * Build the RPM: `rpmbuild -ba SPECS/*.spec --define "_topdir $(pwd)"`. This `_topdir` strange thing is necessary to make rpmbuild accept to work in the current directory rather than in its default working directory, `~/rpmbuild`.
+ * Build the RPM: `rpmbuild -ba SPECS/*.spec --define "_topdir $(pwd)"`. This strange `_topdir` definition is necessary to make `rpmbuild` work in the current directory rather than in its default working directory, `~/rpmbuild` (the full sequence is recapped after this list).
* When the build completes, new directories are created: `RPMS/` and `SRPMS/`, that contain the build results. In a container started with the appropriate `-v` switch, the build results will be instantly available outside the container too.
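+Putting these steps together, a typical in-container session looks like this (using the `xen` example above):
+
+```
+cd /data/git/xen
+sudo yum-builddep SPECS/*.spec -y
+rpmbuild -ba SPECS/*.spec --define "_topdir $(pwd)"
+```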
:::tip
@@ -441,7 +441,7 @@ We can't cover every situation here, so we will address a simple case: add patch
Then follow the same steps as before to build the RPM.
### An XCP-ng host as a build environment
-You can also turn any XCP-ng host (preferrably installed in a VM. Don't sacrifice a physical host for that) into a build environment: all the tools and build dependencies are available from the default RPM repositories for XCP-ng, or from CentOS and EPEL repositories.
+You can also turn any XCP-ng host (preferably installed in a VM. Don't sacrifice a physical host for that) into a build environment: all the tools and build dependencies are available from the default RPM repositories for XCP-ng, or from CentOS and EPEL repositories.
You won't benefit from the convenience scripts from [xcp-ng-build-env](https://github.com/xcp-ng/xcp-ng-build-env) though.
@@ -450,15 +450,15 @@ You won't benefit from the convenience scripts from [xcp-ng-build-env](https://g
This document explains how to locally build the [XAPI](https://github.com/xcp-ng/xen-api).
Here are the steps:
-- First, set up a build env:
+- First, set up a build environment:
- Install the following packages: `dlm-devel` `gmp` `gmp-devel` `libffi-devel` `openssl-devel` `pciutils-devel` `systemd-devel` `xen-devel` `xxhash-devel`.
- - Install [`opam`](https://opam.ocaml.org/doc/Install.html) to set up a build env.
+ - Install [`opam`](https://opam.ocaml.org/doc/Install.html) to set up a build environment.
- Run `opam init`.
- - Run `opam switch create toolstack 4.08.1`, this sets up an opam `switch` which is a virtual ocaml env.
- - Run `opam repo add xs-opam https://github.com/xapi-project/xs-opam.git`, this adds the [`xs-opam` repo](https://github.com/xapi-project/xs-opam.git) to your env.
- - Run `opam repo remove default`, this removes the the default repo from your env as we only want the `xs-opam` one.
+ - Run `opam switch create toolstack 4.08.1`, this sets up an `opam` `switch` which is a virtual OCaml environment.
+ - Run `opam repo add xs-opam https://github.com/xapi-project/xs-opam.git`, this adds the [`xs-opam` repo](https://github.com/xapi-project/xs-opam.git) to your environment.
+ - Run `opam repo remove default`, this removes the default repo from your environment as we only want the `xs-opam` one.
- Run `opam depext -vv -y xs-toolstack`, this installs the dependencies needed to build `xs-toolstack`.
- - Run `opam install xs-toolstack -y`, this installs the toolstack to build the xapi in your env.
+ - Run `opam install xs-toolstack -y`, this installs the toolstack to build XAPI in your environment.
- Build the XAPI:
- Go to the dir where your `xen-api` code base is.
@@ -516,7 +516,7 @@ Only supported modules are in this case.
The modules whose name does not come from XenServer RPMs follow this base naming scheme:
-`{module-name}-module`. Example: `ceph.ko` => `ceph-module`.
+`{module-name}-module`. Example: `ceph.ko` => `ceph-module`.
If the RPM contains several modules (to be avoided), then find an unambiguous name and add the `modules` suffix:
@@ -675,7 +675,7 @@ RPMs that provide modules for an alternate kernel must follow these conventions:
* The remaining part of the naming convention is the same as that of packages that provide modules for the main supported kernel:
* `{inherited-name-from-XS}-kernel{MAJOR.MINOR}`
* `{name}-module-kernel{MAJOR.MINOR}`
- * "kmod" packages
+ * `kmod` packages
* Modules are installed in `/lib/modules/{kernel_version}/updates` or `/lib/modules/{kernel_version}/extra` whether they are updates for built-in modules (if that situation happens) or additional packages.
* `Requires` the appropriate alternate kernel package.
@@ -737,7 +737,7 @@ If you want to use commands in the installer's filesystem context, as root:
```
chroot install/
```
-To use `yum` or `rpm`, you'll also need to mount `urandom` in your chrooted dir.
+To use `yum` or `rpm`, you'll also need to mount `urandom` inside your chroot.
From outside the chroot run:
```
touch install/dev/urandom
@@ -750,7 +750,7 @@ For example, you can list all RPMs present in that "system":
rpm -qa | sort
```
-Exit chroot with `exit` or Ctrl + D.
+Exit chroot with `exit` or `Ctrl + D`.
#### Alter the filesystem
@@ -762,7 +762,7 @@ To modify the installed RPMs on a host see [change the list of installed RPMs](c
:::
Example use cases:
-* Update drivers: replace an existing driver module (*.ko) with yours, or, if you have built a RPM with that driver, install it. For example, you could rebuild a patched `qlogic-qla2xxx` RPM package and install it instead of the one that is included by default. Note that this will *not* install the newer driver on the final installed XCP-ng. We're only in the context of the system that runs during the installation phase, here.
+* Update drivers: replace an existing driver module (`*.ko`) with yours, or, if you have built a RPM with that driver, install it. For example, you could rebuild a patched `qlogic-qla2xxx` RPM package and install it instead of the one that is included by default. Note that this will *not* install the newer driver on the final installed XCP-ng. We're only in the context of the system that runs during the installation phase, here.
* Modify the installer itself to fix a bug or add new features (see below)
#### Modify the installer code itself
@@ -794,7 +794,7 @@ Read [the usual warnings about the installation of third party RPMs on XCP-ng.](
To achieve this:
* Change the RPMs in the `Packages/` directory. If you add new packages, be careful about dependencies, else they'll fail to install and the whole installation process will fail.
-* If you need to add new RPMs not just replace existing ones, they need to be pulled by another existing RPM as dependencies. If there's none suitable, you can add the dependency to the [xcp-ng-deps RPM](https://github.com/xcp-ng-rpms/xcp-ng-deps).
+* If you need to add new RPMs, not just replace existing ones, they need to be pulled in by another existing RPM as dependencies. If there's none suitable, you can add the dependency to the [`xcp-ng-deps` RPM](https://github.com/xcp-ng-rpms/xcp-ng-deps).
* Update `repodata/`
```
rm repodata/ -rf
@@ -830,7 +830,7 @@ Give priority to tests on actual hardware, but if you don't have any hardware av
- verify installation
- verify connectivity with your interfaces
-- verify connectivity to Shared Storages
+- verify connectivity to Shared Storage
- verify creation of a new Linux VM (install guest tools)
- verify creation of a new Windows VM (install guest tools)
- verify basic VM functionality (start, reboot, suspend, shutdown)
@@ -851,7 +851,7 @@ Give priority to tests on actual hardware, but if you don't have any hardware av
### Live migration tests
-Live migration needs to be tested, with or without storage motion (ie. moving the VM disk data to another storage repository). It is both a very important feature and something that can break in subtle ways, especially across different versions of XenServer or XCP-ng.
+Live migration needs to be tested, with or without storage motion (i.e., moving the VM disk data to another storage repository). It is both a very important feature and something that can break in subtle ways, especially across different versions of XenServer or XCP-ng.
**TODO: create (and link to) a page dedicated to live migration and known issues, gotchas or incompatibilities, especially across different releases and/or during pool upgrade.**
@@ -948,7 +948,7 @@ and
- compare speed of interfaces in the old and in the new release
- (add more here...)
-### Example Storage Performance Tests Using fio
+### Example Storage Performance Tests Using `fio`
#### Random write test for IOP/s, i.e. lots of small files
@@ -977,10 +977,10 @@ sync;fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
### VM Export / Import
-* Export using ZSTD compression
-* Import using ZSTD compression
-* Export using gzip compression
-* Import using gzip compression
+* Export using `zstd` compression
+* Import using `zstd` compression
+* Export using `gzip` compression
+* Import using `gzip` compression
### Guest tools and drivers
diff --git a/docs/ecosystem.md b/docs/ecosystem.md
index c13450ad..8dbacc90 100644
--- a/docs/ecosystem.md
+++ b/docs/ecosystem.md
@@ -65,7 +65,7 @@ We are integrating storage solution available directly from Xen Orchestra (XOSAN
### LINBIT
-LINBIT is a software clustering company specialized in data replication – including persistent block storage. The heart of LINBIT’s open-source technology is DRBD®. DRBD allows block storage between servers to be replicated asynchronously or synchronously without sacrificing performance or reliability. LINBIT has led the way in High Availability since 2001 and developed the solution LINSTOR.
+LINBIT is a software clustering company specialized in data replication – including persistent block storage. The heart of LINBIT’s open-source technology is DRBD®. DRBD allows block storage between servers to be replicated asynchronously or synchronously without sacrificing performance or reliability. LINBIT has led the way in High Availability since 2001 and developed the solution LINSTOR.

@@ -75,7 +75,7 @@ With this alliance, we are aiming to offer DRBD support inside the XCP-ng hyperv
* LINBIT High Availability
* LINBIT Disaster Recovery
-Making XCP-ng compatible with LINBIT's solutions will allow us to provide new solutions for users that are looking for performance and reliability with DRBD in their infrastructure.
+Making XCP-ng compatible with LINBIT's solutions will allow us to provide new solutions for users that are looking for performance and reliability with DRBD in their infrastructure.
There is still a lot of work to do in order to package DRBD in the XCP-ng kernel, make LINBIT's solutions compatible with XCP-ng, and finally provide an easy way to stay up-to-date.
diff --git a/docs/guests.md b/docs/guests.md
index 18f6b1bb..93a51aae 100644
--- a/docs/guests.md
+++ b/docs/guests.md
@@ -128,8 +128,8 @@ Depending on the situation, just update from your distribution's online reposito
FreeBSD is a 30-year-old operating system used widely to run all sorts of systems and has served as the basis for a number of operating systems, including MacOS, pfSense, and FreeNAS. The Xen kernel modules are built and distributed in the GENERIC kernel, so if you haven't customised or recompiled your kernel, the drivers will be present.
To communicate with the hypervisor, you need to install two [ports](https://www.freebsd.org/ports/):
-* [sysutils/xe-guest-utilities](https://www.freshports.org/sysutils/xe-guest-utilities/)
-* [sysutils/xen-guest-tools](https://www.freshports.org/sysutils/xen-guest-tools/)
+* [sysutils/xe-guest-utilities](https://www.freshports.org/sysutils/xe-guest-utilities/)
+* [sysutils/xen-guest-tools](https://www.freshports.org/sysutils/xen-guest-tools/)
The `install.sh` script on the guest tools ISO does not yet support FreeBSD, so there is no point in mounting the guest tools ISO on a FreeBSD VM.
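+Assuming the standard `pkg` tool and mirroring the rc script name used in the pfSense guide below, installation might look like this (a sketch; the `xenguest_enable` knob is an assumption):
+
+```
+pkg install xen-guest-tools xe-guest-utilities
+sysrc xenguest_enable=YES
+service xenguest start
+```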
@@ -165,33 +165,33 @@ To install it on versions 11 or higher, until version 12.0-U1 of TrueNAS that in
```bash
# sed -i '' 's/enabled: yes/enabled: no/' /usr/local/etc/pkg/repos/local.conf
```
-
+
2. Create a temporary directory and move into it:
```bash
# mkdir /tmp/repo
# cd /tmp/repo
```
-
+
3. Fetch the required packages. A directory **All** will be created and you will find the packages with their current versions under there:
- ```bash
+ ```bash
# pkg fetch -o /tmp/repo/ xen-guest-tools
# pkg fetch -o /tmp/repo/ xe-guest-utilities
```
-
-4. Add the downloaded packages, without their dependencies:
+
+4. Add the downloaded packages, without their dependencies:
```bash
# pkg add -M All/xen-guest-tools-4.14.0.txz
# pkg add -M All/xe-guest-utilities-6.2.0_3.txz
```
The versions reported here are just the current versions and they may be different in your installation.
-
+
5. Revert the repos to their original settings to avoid surprises down the road. The second command should be run only if you disabled the local repo in step 1:
```bash
# sed -i '' 's/enabled: yes/enabled: no/' /usr/local/etc/pkg/repos/FreeBSD.conf
# sed -i '' 's/enabled: no/enabled: yes/' /usr/local/etc/pkg/repos/local.conf
```
A restart of the VM will also reset these files to their original settings.
-
+
6. Once the package is installed, you need to tell FreeNAS to start the `xe-daemon` process when starting:
1. Go to _Tasks -> Init/Shutdown Script_
2. Create a new task with the following settings:
diff --git a/docs/guides.md b/docs/guides.md
index 3d80aea1..1fac874f 100644
--- a/docs/guides.md
+++ b/docs/guides.md
@@ -15,7 +15,7 @@ pfSense and OPNsense do work great in a VM, but there are a few extra steps that
There are 2 ways of doing that, either using the CLI (pfSense or OPNsense) or the Web UI (pfSense).
-Option 1 via console/ssh:
+Option 1 via console/SSH:
Now that you have the VM running, we need to install guest utilities and tell them to run on boot. SSH (or other CLI method) to the VM and perform the following:
```
@@ -25,22 +25,22 @@ ln -s /usr/local/etc/rc.d/xenguest /usr/local/etc/rc.d/xenguest.sh
service xenguest start
```
-Option 2 is via webgui (only for pfSense):
-Open management page under http(s)://your-configured-ip and go to:
-*System -> Firmware -> Plugins*
-Scroll down to **os-xen** and let the gui do the steps needed. Next: Reboot the system to have the guest started (installer doesn't do that):
+Option 2 is via web GUI (only for pfSense):
+Open the management page under `http(s)://your-configured-ip` and go to:
+*System -> Firmware -> Plugins*
+Scroll down to `os-xen` and let the GUI do the steps needed. Next, reboot the system to have the guest tools started (the installer doesn't do that):
*Power -> Reboot*
Guest Tools are now installed and running, and will automatically run on every boot of the VM.
### 3. Disable TX Checksum Offload
-Now is the most important step: we must disable tx checksum offload on the virtual xen interfaces of the VM. This is because network traffic between VMs in a hypervisor is not populated with a typical ethernet checksum, since they only traverse server memory and never leave over a physical cable. The majority of operating systems know to expect this when virtualized and handle ethernet frames with empty checksums without issue. However `pf` in FreeBSD does not handle them correctly and will drop them, leading to broken performance.
+Now is the most important step: we must disable TX checksum offload on the virtual Xen interfaces of the VM. This is because network traffic between VMs in a hypervisor is not populated with a typical Ethernet checksum, since the packets only traverse server memory and never leave over a physical cable. The majority of operating systems know to expect this when virtualized and handle Ethernet frames with empty checksums without issue. However `pf` in FreeBSD does not handle them correctly and will drop them, leading to broken performance.
-The solution is to simply turn off checksum-offload on the virtual xen interfaces for pfSense in the TX direction only (TX towards the VM itself). Then the packets will be checksummed like normal and `pf` will no longer complain.
+The solution is to simply turn off checksum-offload on the virtual Xen interfaces for pfSense in the TX direction only (TX towards the VM itself). Then the packets will be checksummed like normal and `pf` will no longer complain.
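+One way to apply this from the CLI is a `vif-param-set` call along these lines (a sketch; the VIF UUID is a placeholder, and the VIF typically needs to be re-plugged, or the VM restarted, for the change to take effect):
+
+```
+xe vif-param-set uuid=<vif-uuid> other-config:ethtool-tx="off"
+```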
:::tip
-Disabling checksum offloading is only necessary for virtual interfaces. When using [PCI Passthrough](https://github.com/xcp-ng/xcp/wiki/PCI-Passtrough) to provide a VM with direct access to physical or virtual (using [SR-IOV](https://en.wikipedia.org/wiki/Single-root_input/output_virtualization)) devices it is unnecessary to disable tx checksum offloading on any interfaces on those devices.
+Disabling checksum offloading is only necessary for virtual interfaces. When using [PCI Passthrough](https://github.com/xcp-ng/xcp/wiki/PCI-Passtrough) to provide a VM with direct access to physical or virtual (using [SR-IOV](https://en.wikipedia.org/wiki/Single-root_input/output_virtualization)) devices it is unnecessary to disable TX checksum offloading on any interfaces on those devices.
:::
:::warning
@@ -202,7 +202,7 @@ Once your host's network is set up, we'll look at configuring the XCP-ng virtual
* For the other virtual machine settings, some explanations :
* Dual CPU sockets for improving vCPU performance.
* **The virtual disk must be at least 60 GB in size to install XCP-ng !**
- * **LSI Logic SAS** controller is choosen to maximize at possible the compatibility and the performance. vNVMe
+ * **LSI Logic SAS** controller is chosen to maximize compatibility and performance as far as possible. vNVMe
controller works too, it can reduce CPU overhead and latency. **PVSCSI controller won't work**.
* **Unlike the PVSCSI controller, the VMXNET3 controller works with XCP-ng**. It will be useful if heavy network
loads are planned between different XCP-ng virtual machines (XOSAN)
@@ -224,10 +224,10 @@ Once your host's network is set up, we'll look at configuring the XCP-ng virtual
* An additional option is to be added to the virtual machine's .vmx file. You will also add the option to enable
promiscuous mode for the virtual machine.
- **hypervisor.cpuid.v0 = "FALSE"** : Addition to the checked CPU option on Workstation
- **ethernet0.noPromisc = "FALSE"** : Enable Promiscuous Mode
+ `hypervisor.cpuid.v0 = "FALSE"` : Addition to the checked CPU option on Workstation
+ `ethernet0.noPromisc = "FALSE"` : Enable Promiscuous Mode
- _**Be careful, 'ethernet0' is the name of the bridged network interface of my virtual machine, remember to check that it's the same name in your .vmx file (search the 'ethernet' string using your favorite text editor).**_
+ _**Be careful: `ethernet0` is the name of the bridged network interface of my virtual machine; remember to check that it's the same name in your `.vmx` file (search for the `ethernet` string using your favorite text editor).**_
* If you want to use the VMXNET3 card, this is possible. For this you must also modify the .vmx file of your XCP-ng virtual machine.
@@ -330,39 +330,39 @@ Finally, install/use XCP-ng !
## VLAN Trunking in a VM
-This document will describe how to configure a VLAN trunk port for use in a VM running on xcp-ng. The typical use case for this is you want to run your network's router as a VM and your network has multiple vlans.
+This document will describe how to configure a VLAN trunk port for use in a VM running on XCP-ng. The typical use case for this is running your network's router as a VM when your network has multiple VLANs.
-With some help from others in the [forums](https://xcp-ng.org/forum/topic/729/how-to-connect-vlan-trunk-to-vm/11), I was able to get a satisfactory solution implemented using [pfSense](https://pfsense.org) and so this document will discuss how to implement this solution using pfSense as your router. In theory, the same solution should apply to other router solutions, but it is untested. Feel free to update this document with your results.
+With some help from others in the [forums](https://xcp-ng.org/forum/topic/729/how-to-connect-vlan-trunk-to-vm/11), I was able to get a satisfactory solution implemented using [pfSense](https://pfsense.org) and so this document will discuss how to implement this solution using pfSense as your router. In theory, the same solution should apply to other router solutions, but it is untested. Feel free to update this document with your results.
### Two Approaches
-There are two approaches to vlans in xcp-ng. The first is to create a vif for each VLAN you want your router to route traffic for then attach the vif to your VM. The second is to pass through a trunk port from dom0 onto your router VM.
+There are two approaches to VLANs in XCP-ng. The first is to create a virtual interface for each VLAN you want your router to route traffic for, then attach the virtual interface to your VM. The second is to pass through a trunk port from dom0 onto your router VM.
#### Multiple VIFs
-By far, this is the easiest solution and perhaps the "officially supported" approach for xcp-ng. When you do this, dom0 handles all the VLAN tagging for you and each vif is just presented to your router VM as a separate virtual network interface. It's like you have a bunch of separate network cards installed on your router where each represents a different VLAN and is essentially attached to a VLAN access (untagged) port on your switch. There is nothing special for you to do, this _just works_. If you require 7 vifs or less for your router then this might be the easiest approach.
+By far, this is the easiest solution and perhaps the "officially supported" approach for XCP-ng. When you do this, dom0 handles all the VLAN tagging for you and each virtual interface is just presented to your router VM as a separate virtual network interface. It's like you have a bunch of separate network cards installed on your router where each represents a different VLAN and is essentially attached to a VLAN access (untagged) port on your switch. There is nothing special for you to do, this _just works_. If you require seven virtual interfaces or fewer for your router then this might be the easiest approach.
-The problem with this approach is when you have many vlans you want to configure on your router. If you read through the thread I linked to at the top of this page you'll notice the discussion about where I was unable to attach more than 7 vifs to my pfSense VM. XO nor XCP-ng Center allow you to attach more than seven. This appears to be some kind of limit somewhere in Xen. Other users have been able to attach more than 7 vifs via CLI, however when I tried to do this myself my pfSense VM became unresponsive once I added the 8th vif. More details on that problem are discussed in the thread.
+The problem with this approach is when you have many VLANs you want to configure on your router. If you read through the thread I linked to at the top of this page you'll notice the discussion about how I was unable to attach more than 7 virtual interfaces to my pfSense VM. Neither XO nor XCP-ng Center allows you to attach more than seven. This appears to be some kind of limit somewhere in Xen. Other users have been able to attach more than 7 virtual interfaces via CLI, however when I tried to do this myself my pfSense VM became unresponsive once I added the 8th virtual interface. More details on that problem are discussed in the thread.
-Another problem with this approach, perhaps only specific to pfSense, is that when you attach many vifs, you must disable tx offloading on each and every vif otherwise you'll have all kinds of problems. This was definitely a red flag for me. Initially I'm starting with 7 vlans and 9 networks total with short term requirements for at least another 3 vlans for sure and then who knows how many down the road. In this approach, every time you have to create a new VLAN by adding a vif to the VM, you will have to reboot the VM.
+Another problem with this approach, perhaps only specific to pfSense, is that when you attach many virtual interfaces, you must disable TX offloading on each and every virtual interface, otherwise you'll have all kinds of problems. This was definitely a red flag for me. Initially I'm starting with 7 VLANs and 9 networks total, with short-term requirements for at least another 3 VLANs for sure and then who knows how many down the road. In this approach, every time you have to create a new VLAN by adding a virtual interface to the VM, you will have to reboot the VM.
-Having to reboot my network's router every time I need to create a new VLAN is not ideal for the environment I'm working in; especially because in the current production environment running VMware, we do not need to reboot the router VM to create new vlans. (FWIW, I've come to xcp-ng as the IT department has asked me to investigate possibly replacing our VMware env with XCP-ng. I started my adventures with xcp-ng by diving in head first at home and replacing my home environment, previously ESXi, with xcp-ng. Now I'm in the early phases of working with xcp-ng in the test lab at work.)
+Having to reboot my network's router every time I need to create a new VLAN is not ideal for the environment I'm working in; especially because in the current production environment running VMware, we do not need to reboot the router VM to create new VLANs. (FWIW, I've come to XCP-ng as the IT department has asked me to investigate possibly replacing our VMware env with XCP-ng. I started my adventures with XCP-ng by diving in head first at home and replacing my home environment, previously ESXi, with XCP-ng. Now I'm in the early phases of working with XCP-ng in the test lab at work.)
-In conclusion, if you have seven or fewer vifs and you're fairly confident that you'll never need to exceed seven vifs then this approach is probably the path of least resistance. On the other hand, if you know you'll need more than seven or fairly certain you will some day. Or you're in an environment where you need to be able to create vlans on the fly then you'll probably want to proceed with the alternative below.
+In conclusion, if you have seven or fewer virtual interfaces and you're fairly confident that you'll never need to exceed seven virtual interfaces, then this approach is probably the path of least resistance. On the other hand, if you know you'll need more than seven (or are fairly certain you will some day), or you're in an environment where you need to be able to create VLANs on the fly, then you'll probably want to proceed with the alternative below.
This document is about the alternative approach, but a quick summary of how this solution works in xcp-ng:
-* Make sure the pif connected to your xcp-ng server is carrying all the required tagged vlans
-* Within XO or XCP Center, create multiple networks off of the pif, adding the VLAN tag as needed for each VLAN
-* For each VLAN you want your router to route for, add a vif for that specific VLAN to the VM
-* For pfSense, disable tx offloading for each vif added and reboot the VM. This [page](https://github.com/xcp-ng/xcp/wiki/pfSense-in-a-VM) will fully explain all of the config changes required when running pfSense in xcp-ng.
+* Make sure the physical interface connected to your XCP-ng server is carrying all the required tagged VLANs
+* Within XO or XCP-ng Center, create multiple networks off of the physical interface, adding the VLAN tag as needed for each VLAN
+* For each VLAN you want your router to route for, add a virtual interface for that specific VLAN to the VM
+* For pfSense, disable TX offloading for each virtual interface added and reboot the VM. This [page](https://github.com/xcp-ng/xcp/wiki/pfSense-in-a-VM) will fully explain all of the config changes required when running pfSense in XCP-ng.
### Adding VLAN Trunk to VM
-The alternative approach involves attaching the VLAN trunk port directly to your router VM, and handling the VLANs in pfSense directly. This has the biggest advantage of not requiring a VM reboot each time you need to setup a new VLAN. However note you will need to manually edit a configuration file in pfSense every time it is upgraded. The physical interface you are using to trunk VLANs into the pfSense VM should also not be the same physical interface that your xcp-ng management interface is on. This is because one of the steps required is setting the physical interface MTU to 1504, and this will potentially cause MTU mismatches if xen is using this same physical interface for management traffic (1504-byte sized packets being sent from the xen management interface to your MTU 1500 network).
+The alternative approach involves attaching the VLAN trunk port directly to your router VM, and handling the VLANs in pfSense directly. This has the biggest advantage of not requiring a VM reboot each time you need to set up a new VLAN. Note, however, that you will need to manually edit a configuration file in pfSense every time it is upgraded. The physical interface you are using to trunk VLANs into the pfSense VM should also not be the same physical interface that your XCP-ng management interface is on. This is because one of the steps required is setting the physical interface MTU to 1504, and this will potentially cause MTU mismatches if Xen is using this same physical interface for management traffic (1504-byte sized packets being sent from the Xen management interface to your MTU 1500 network).
-The problem we face with this solution is that, at least in pfSense, the xn driver used for the paravirtualization in FreeBSD does not support 802.1q tagging. So we have to account for this ourselves both in dom0 and in the pfSense VM. Once you're aware of this limitation, it actually isn't a big deal to get it all working but it just never occurred to me that a presumably relatively modern network driver would not support 802.1q.
+The problem we face with this solution is that, at least in pfSense, the `xn` driver used for paravirtualization in FreeBSD does not support 802.1q tagging. So we have to account for this ourselves both in dom0 and in the pfSense VM. Once you're aware of this limitation, it actually isn't a big deal to get it all working, but it just never occurred to me that a presumably relatively modern network driver would not support 802.1q.
-Anyway, the first step is to modify the MTU setting of the **pif** that is carrying your tagged vlans into the xcp-ng server from 1500 to 1504. The extra 4 bytes is, of course, the size of the VLAN tagging within each frame. **Warning:** You're going to have to detach or shutdown any VMs that are currently using this interface. For this example, let's say it's `eth1` that is the pif carrying all our tagged traffic.
+Anyway, the first step is to modify the MTU setting of the physical interface that is carrying your tagged VLANs into the XCP-ng server from 1500 to 1504. The extra 4 bytes is, of course, the size of the VLAN tagging within each frame. **Warning:** You're going to have to detach or shut down any VMs that are currently using this interface. For this example, let's say it's `eth1` that is the physical interface carrying all our tagged traffic.
1. List all your networks
@@ -376,13 +376,13 @@ xe network-param-set uuid=xxx MTU=1504
3. Reboot your XCP-ng host to apply the MTU change on the physical network cards
-Once this is done, attach a new vif to your pfSense VM and select `eth1` as the network. This will attach the VLAN trunk to pfSense. Boot up pfSense and disable tx offloading, etc. on the vif, reboot as necessary then login to pfSense.
+Once this is done, attach a new virtual interface to your pfSense VM and select `eth1` as the network. This will attach the VLAN trunk to pfSense. Boot up pfSense and disable TX offloading, etc. on the virtual interface, reboot as necessary, then log in to pfSense.
-Configure the interface within pfSense by also increasing the MTU value to 1504. Again, the xn driver does not support VLAN tagging, so we have to deal with it ourselves. **NOTE:** You only increase the MTU on the **parent interface** only in both xcp-ng **and** pfSense. The MTU for vlans will always be 1500.
+Configure the interface within pfSense by also increasing the MTU value to 1504. Again, the `xn` driver does not support VLAN tagging, so we have to deal with it ourselves. **NOTE:** You increase the MTU on the **parent interface** only, in both XCP-ng **and** pfSense. The MTU for VLANs will always be 1500.
-Finally, along the same lines, since the xn driver does not support 802.1q, pfSense will not allow you to create vlans on any interface using the xn driver. We have to modify pfSense to allow us to do this.
+Finally, along the same lines, since the `xn` driver does not support 802.1q, pfSense will not allow you to create VLANs on any interface using the `xn` driver. We have to modify pfSense to allow us to do this.
-From a shell in pfSense, edit `/etc/inc/interfaces.inc` and modify the `is_jumbo_capable` function at around line 6761. Edit it so it reads like so:
+From a shell in pfSense, edit `/etc/inc/interfaces.inc` and modify the `is_jumbo_capable` function at around line 6761. Edit it so it reads like so:
```
function is_jumbo_capable($iface) {
@@ -406,22 +406,22 @@ function is_jumbo_capable($iface) {
}
```
:::tip
-This modification is based on pfSense 2.4.4p1, ymmv. However, I copied this mod from [here](https://eliasmoraispereira.wordpress.com/2016/10/05/pfsense-virtualizacao-com-xenserver-criando-vlans/), which was based on pfSense 2.3.x, so this code doesn't change often.
+This modification is based on pfSense 2.4.4p1, your mileage may vary. However, I copied this mod from [here](https://eliasmoraispereira.wordpress.com/2016/10/05/pfsense-virtualizacao-com-xenserver-criando-vlans/), which was based on pfSense 2.3.x, so this code doesn't change often.
:::
Keep in mind that you will need to reapply this mod anytime you upgrade pfSense.
-That's it, you're good to go! Go to your interfaces > assignments in pfSense, select the VLANs tab and create your vlans. Everything should work as expected.
+That's it, you're good to go! Go to *Interfaces > Assignments* in pfSense, select the VLANs tab and create your VLANs. Everything should work as expected.
### Links/References
* [Forums: My initial question and discussion about VLAN trunk support](https://xcp-ng.org/forum/topic/729/how-to-connect-vlan-trunk-to-vm)
* [pfSense interface does not support VLANs](https://forum.netgate.com/topic/112359/xenserver-vlan-doesn-t-supporting-eth-device-for-vlan)
-* [pfSense: Adding VLAN support for Xen xn interfaces](https://eliasmoraispereira.wordpress.com/2016/10/05/pfsense-virtualizacao-com-xenserver-criando-vlans/)
+* [pfSense: Adding VLAN support for Xen `xn` interfaces](https://eliasmoraispereira.wordpress.com/2016/10/05/pfsense-virtualizacao-com-xenserver-criando-vlans/)
## TLS certificate for XCP-ng
-After installing XCP-ng, access to xapi via XCP-ng Center or XenOrchestra is protected by TLS with a [self-signed certificate](https://en.wikipedia.org/wiki/Self-signed_certificate) : this means that you have to either verify the certificate signature before allowing the connection (comparing against signature shown on the console of the server), either work on trust-on-first-use basis (i.e. assume that the first time you connect to the server, nobody is tampering with the connection).
+After installing XCP-ng, access to XAPI via XCP-ng Center or XenOrchestra is protected by TLS with a [self-signed certificate](https://en.wikipedia.org/wiki/Self-signed_certificate): this means that you have to either verify the certificate signature before allowing the connection (comparing it against the signature shown on the console of the server), or work on a trust-on-first-use basis (i.e. assume that the first time you connect to the server, nobody is tampering with the connection).
If you would like to replace this certificate by a valid one, either from an internal Certificate Authority or from a public one, you'll find here some indications on how to do that.
@@ -443,7 +443,7 @@ openssl req -new -key /etc/xensource/xapi-ssl.pem -subj '/CN=XCP-ng hypervisor/'
The certificate, intermediate certificates (if needed), certificate authority and private key are stored in `/etc/xensource/xapi-ssl.pem`, in that order. You have to replace all lines before `-----BEGIN RSA PRIVATE KEY-----` with the certificate and the chain you got from your provider, using your favorite editor (`nano` is present on XCP-ng by default).
-Then, you have to restart xapi :
+Then, you have to restart XAPI:
```
systemctl restart xapi
```
@@ -648,7 +648,7 @@ Sometimes what happens is that the system either does not find all of the parts
This can also happen to the `md127` boot array where it will show with only one of the two drives in place and running. If it does not start and run at all, we will fail to get a normal boot of the system and likely be tossed into an emergency shell instead of the normal boot process. This is usually not consistent and another reboot will start the system. This can even happen when the boot RAID is the only RAID array in the system but fortunately that rarely happens.
-So what can we do about this? Fortunately, we can give the system more information about what RAID arrays are in the system and specify that they should be started up at boot.
+So what can we do about this? Fortunately, we can give the system more information about what RAID arrays are in the system and specify that they should be started up at boot.
### Stabilizing the RAID Boot Configuration: The mdadm.conf File
@@ -685,7 +685,7 @@ mdadm --examine --scan >> /etc/mdadm.conf
```
And then edit the file to change the format of the array names from `/dev/md/0` to `/dev/md0` and remove the `name=` parameters from each line. This isn't strictly necessary but keeps the array names in the file consistent with what is reported in `/proc/mdstat` and `/proc/partitions` and avoids giving each array another name (in our case those names would be `localhost:127` and `XCP-ng:0`).
-So what do these lines do? The first line instructs the system to allow or attempt automatic assembly for all arrays defined in the file. The second specifies to report errors in the system by email to the root user. The third is a list of all drives in the system participating in RAID arrays. Not all drives need to be specified on a single DEVICE line. Drives can be split among multiple lines and we could even have one DEVICE line for each drive. The last two are descriptions of each array in the system.
+So what do these lines do? The first line instructs the system to allow or attempt automatic assembly for all arrays defined in the file. The second specifies to report errors in the system by email to the root user. The third is a list of all drives in the system participating in RAID arrays. Not all drives need to be specified on a single DEVICE line. Drives can be split among multiple lines and we could even have one DEVICE line for each drive. The last two are descriptions of each array in the system.
This file gives the system a description of what arrays are configured in the system and what drives are used to create them but doesn't specify what to do with them. The system should be able to use this information at boot for automatic assembly of the arrays. Booting with the `mdadm.conf` file in place is more reliable but still runs into same problems as before.
@@ -797,7 +797,7 @@ This type of problem is very difficult to diagnose and correct. It may be possib
### More and Different
-So what if we don't have or don't want a system that's identical to the example we just built in these instructions? Here are some of the possible and normal variations of software RAID under XCP-ng.
+So what if we don't have or don't want a system that's identical to the example we just built in these instructions? Here are some of the possible and normal variations of software RAID under XCP-ng.
#### No preexisting XCP-ng RAID 1
diff --git a/docs/guides.md.orig b/docs/guides.md.orig
new file mode 100644
index 00000000..0b44104c
--- /dev/null
+++ b/docs/guides.md.orig
@@ -0,0 +1,858 @@
+# Guides
+
+This section is grouping various guides regarding XCP-ng use cases.
+
+## pfSense / OPNsense VM
+
+pfSense and OPNsense do work great in a VM, but there are a few extra steps that need to be taken first.
+
+### 1. Create VM as normal
+
+* When creating the VM, choose the `other install media` VM template
+* Continue through the installer like normal
+
+### 2. Install Guest Utilities
+
+There are two ways of doing this: either using the CLI (pfSense or OPNsense) or the web UI (pfSense).
+
+Option 1 is via console/SSH:
+Now that you have the VM running, we need to install guest utilities and tell them to run on boot. SSH (or other CLI method) to the VM and perform the following:
+
+```
+pkg install xe-guest-utilities
+echo 'xenguest_enable="YES"' >> /etc/rc.conf.local
+ln -s /usr/local/etc/rc.d/xenguest /usr/local/etc/rc.d/xenguest.sh
+service xenguest start
+```
+
+Option 2 is via web GUI (only for pfSense):
+Open management page under `http(s)://your-configured-ip` and go to:
+*System -> Firmware -> Plugins*
+Scroll down to `os-xen` and let the GUI do the steps needed. Next, reboot the system so that the guest tools start (the installer doesn't do that):
+*Power -> Reboot*
+
+Guest Tools are now installed and running, and will automatically run on every boot of the VM.
+
+### 3. Disable TX Checksum Offload
+
+Now is the most important step: we must disable TX checksum offload on the virtual Xen interfaces of the VM. This is because network traffic between VMs in a hypervisor is not populated with a typical Ethernet checksum, since they only traverse server memory and never leave over a physical cable. The majority of operating systems know to expect this when virtualized and handle Ethernet frames with empty checksums without issue. However `pf` in FreeBSD does not handle them correctly and will drop them, leading to broken performance.
+
+The solution is to simply turn off checksum-offload on the virtual Xen interfaces for pfSense in the TX direction only (TX towards the VM itself). Then the packets will be checksummed like normal and `pf` will no longer complain.
+
+:::tip
+Disabling checksum offloading is only necessary for virtual interfaces. When using [PCI Passthrough](https://github.com/xcp-ng/xcp/wiki/PCI-Passtrough) to provide a VM with direct access to physical or virtual (using [SR-IOV](https://en.wikipedia.org/wiki/Single-root_input/output_virtualization)) devices it is unnecessary to disable TX checksum offloading on any interfaces on those devices.
+:::
+
+:::warning
+Many guides on the internet for pfSense in Xen VMs will tell you to uncheck checksum options in the pfSense web UI, or to also disable RX offload on the Xen side. These are not only unnecessary, but some of them will make performance worse.
+:::
+
+#### Using Xen Orchestra
+
+- Head to the "Network" tab of your VM: in the advanced settings (click the blue gear icon) for each adapter, disable TX checksumming.
+- Restart the VM.
+
+That's it!
+
+#### Using CLI
+
+SSH to dom0 on your XCP-ng hypervisor and run the following:
+
+First get the UUID of the VM to modify:
+
+```
+xe vm-list
+```
+Find your pfSense / OPNsense VM in the list, and copy the UUID. Now stick the UUID in the following command:
+
+```
+xe vif-list vm-uuid=08fcfc01-bda4-21b5-2262-741da6f5bfb0
+```
+
+This will list all the virtual interfaces assigned to the VM:
+
+```
+uuid ( RO) : 789358b4-54c8-87d3-bfb3-0b7721e4661b
+ vm-uuid ( RO): 57a27650-6dab-268e-1200-83ee17ee3a55
+ device ( RO): 1
+ network-uuid ( RO): 5422a65f-4ff0-0f8c-e8c3-a1e926934eed
+
+
+uuid ( RO) : a9380705-8da2-4bf7-bbb0-f167d8f0d645
+ vm-uuid ( RO): 57a27650-6dab-268e-1200-83ee17ee3a55
+ device ( RO): 0
+ network-uuid ( RO): 4f7e43ef-d28a-29bd-f933-68f5a8f36241
+```
+
+For each interface, you need to take the UUID (the one at the top labeled `uuid ( RO)`) and insert it in the `xe vif-param-set uuid=xxx other-config:ethtool-tx="off"` command. So for our two virtual interfaces the commands to run would look like this:
+
+```
+xe vif-param-set uuid=789358b4-54c8-87d3-bfb3-0b7721e4661b other-config:ethtool-tx="off"
+xe vif-param-set uuid=a9380705-8da2-4bf7-bbb0-f167d8f0d645 other-config:ethtool-tx="off"
+```
+
+That's it! For this to take effect you need to fully shut down the VM, then power it back on. Then you are good to go!
+
+:::tip
+If you ever add more virtual NICs to your VM, you will need to go back and do the same steps for these interfaces as well.
+:::
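+
+If the VM has several virtual interfaces, the two steps above can be scripted. A minimal sketch, assuming the VM's name-label is `pfSense` (a hypothetical name; adjust to yours):
+
+```
+VM_UUID=$(xe vm-list name-label="pfSense" --minimal)
+# --minimal returns a comma-separated list; loop over each VIF UUID
+for VIF in $(xe vif-list vm-uuid="$VM_UUID" --minimal | tr ',' ' '); do
+    xe vif-param-set uuid="$VIF" other-config:ethtool-tx="off"
+done
+```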
+
+## XCP-ng in a VM
+
+This page details how to install XCP-ng as a guest VM inside different hypervisors to test the solution before a bare-metal installation.
+
+:::warning
+This practice is not recommended for production; nested virtualization is only meant for tests/labs.
+:::
+
+Here is the list of hypervisors on which you can try XCP-ng:
+
+* [XCP-ng](https://github.com/xcp-ng/xcp/wiki/Testing-XCP-ng-in-Virtual-Machine-%28Nested-Virtualization%29/#nested-xcp-ng-using-xcp-ng)
+* [VMware ESXi & Workstation](https://github.com/xcp-ng/xcp/wiki/Testing-XCP-ng-in-Virtual-Machine-(Nested-Virtualization)#nested-xcp-ng-using-vmware-esxi-and-workstation)
+* [Hyper-V 2016](https://github.com/xcp-ng/xcp/wiki/Testing-XCP-ng-in-Virtual-Machine-(Nested-Virtualization)#nested-xcp-ng-using-microsoft-hyper-v-windows-10---windows-server-2016)
+* [QEMU/KVM](https://github.com/xcp-ng/xcp/wiki/Testing-XCP-ng-in-Virtual-Machine-(Nested-Virtualization)#nested-xcp-ng-using-qemukvm)
+* [VirtualBox](https://www.virtualbox.org) (nested virtualization is implemented only in v6.1.x and above)
+
+### Nested XCP-ng using XCP-ng
+
+* Create a new VM from the CentOS 7 template with a minimum of 2 vCPUs and 4GB RAM
+* Change the disk size to 100GB
+* Enable nested virtualization with the special command on the CLI: `xe vm-param-set uuid=<vm-uuid> platform:exp-nested-hvm=true`
+* The default Realtek NIC type may create stability issues for nested XCP-ng; change it to Intel e1000: `xe vm-param-set uuid=<vm-uuid> platform:nic_type="e1000"` (a combined sketch follows this list)
+* Install/use it like normal :-)
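+
+Putting the two `xe` commands together, a minimal sketch, assuming the nested VM's name-label is `xcp-ng-nested` (a hypothetical name; adjust to yours):
+
+```
+UUID=$(xe vm-list name-label="xcp-ng-nested" --minimal)
+xe vm-param-set uuid="$UUID" platform:exp-nested-hvm=true
+xe vm-param-set uuid="$UUID" platform:nic_type="e1000"
+```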
+
+### Nested XCP-ng using Xen
+
+It's a pretty standard HVM, but you need to use a `vif` of `ioemu` type. Check this configuration example:
+
+```
+builder='hvm'
+memory = 4096
+name = 'xcp-ng'
+vcpus=6
+pae=1
+acpi=1
+apic=1
+vif = [ 'mac=xx:xx:xx:xx:xx:xx,type=ioemu,bridge=virbr0' ]
+disk = [ 'file:/foo/bar/xcp-ng.img,hdc,w', 'file:/foo/bar/xcp-ng/xcp-ng-8.1.0-2.iso,hdb:cdrom,r' ]
+boot='dc'
+vnc=1
+serial='pty'
+tsc_mode='default'
+viridian=0
+usb=1
+usbdevice='tablet'
+gfx_passthru=0
+localtime=1
+xen_platform_pci=1
+pci_power_mgmt=1
+stdvga = 0
+hap=1
+nestedhvm=1
+on_poweroff = 'destroy'
+on_reboot = 'destroy'
+on_crash = 'destroy'
+```
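+
+Assuming the configuration above is saved as `xcp-ng.cfg` (a hypothetical path), you can then start the guest and attach to its serial console:
+
+```
+xl create xcp-ng.cfg
+xl console xcp-ng
+```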
+
+### Nested XCP-ng using VMware (ESXi and Workstation)
+
+_The following steps can be performed under VMware Workstation Pro; the settings will remain the same but the configuration will be slightly different. We will discuss this point at the end of this section about VMware._
+
+#### Networking settings
+
+The first step, and without a doubt the most important step, will be to modify the virtual network configuration of our ESXi host. Without this configuration, the network will not work for your virtual machines running on your nested XCP-ng.
+
+ * Start by going to the network settings of your ESXi host.
+
+ 
+
+ * Then select the **port group** on which your XCP-ng virtual machine will be connected. By default, this concerns the
+ vSwitch0 and the '**VM Network**' port group.
+
+ Click on the "Edit Settings" button to edit the parameters of this port group.
+
+ **Here are the default settings**:
+
+ 
+
+ * Click on the **Accept** checkbox for Promiscuous mode.
+ * Save these settings by using the Save button at the bottom of the window.
+
+ A little explanation from the VMware documentation website: [Promiscuous mode under VMware ESXi](https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-92F3AB1F-B4C5-4F25-A010-8820D7250350.html)
+
+ **These settings can also be applied to the vSwitch itself (same configuration menu). By default, the port group inherits the settings of the vSwitch on which it is configured. It all depends on the network configuration you want to accomplish on your host.**
+
+
+#### XCP-ng virtual machine settings
+
+Once your host's network is set up, we'll look at configuring the XCP-ng virtual machine.
+
+ * Create a virtual machine and move to the "Customize settings" section. Here is a possible virtual machine configuration:
+
+ 
+
+ * Then edit the CPU settings and check the "**Expose hardware assisted virtualization to the guest OS**" box in the
+ "**Hardware Virtualization**" line.
+
+ 
+
+ _"Enable virtualized CPU performance counters" can be checked if necessary_: [VMware CPU Performance Counters](https://kb.vmware.com/s/article/2030221)
+
+ * For the other virtual machine settings, some explanations:
+ * Dual CPU sockets for improving vCPU performance.
+ * **The virtual disk must be at least 60 GB in size to install XCP-ng!**
+ * **LSI Logic SAS** controller is chosen to maximize compatibility and performance as far as possible. The vNVMe
+ controller works too; it can reduce CPU overhead and latency. **PVSCSI controller won't work**.
+ * **Unlike the PVSCSI controller, the VMXNET3 controller works with XCP-ng**. It will be useful if heavy network
+ loads are planned between different XCP-ng virtual machines (XOSAN).
+
+ * Finally, install XCP-ng as usual; everything should work as expected. After installation, your XCP-ng virtual machine
+ is manageable from XCP-ng Center or Xen Orchestra.
+
+ 
+
+ * You can then create a virtual machine and test how it works (especially the network).
+
+#### Configuration under VMware Workstation Pro 14/15
+
+ * Create an XCP-ng virtual machine like in ESXi.
+ * Check the following CPU setting: **Virtualize Intel VT-x/EPT or AMD-V/RVI**
+
+ 
+
+ * An additional option must be added to the virtual machine's `.vmx` file. You will also add the option to enable
+ promiscuous mode for the virtual machine.
+
+ `hypervisor.cpuid.v0 = "FALSE"` : Addition to the checked CPU option on Workstation
+ `ethernet0.noPromisc = "FALSE"` : Enable Promiscuous Mode
+
+ _**Be careful: `ethernet0` is the name of the bridged network interface of my virtual machine; remember to check that it's the same name in your `.vmx` file (search for the `ethernet` string using your favorite text editor).**_
+
+
+ * If you want to use the VMXNET3 card, this is possible. For this you must also modify the `.vmx` file of your XCP-ng virtual machine.
+
+ Replace `ethernet0.virtualDev = "e1000"` with `ethernet0.virtualDev = "vmxnet3"`
+
+ * Check that the virtual machine works correctly by trying to connect using XCP-ng Center and by creating a virtual machine on your nested XCP-ng.
+
+
+### Nested XCP-ng using Microsoft Hyper-V (Windows 10 - Windows Server 2016)
+
+
+_The following steps can be performed with Hyper-V on Windows 10 (version 1607 minimum) and Windows Server 2016 (Hyper-V Server also). The settings will remain the same for both OS._
+
+**This feature is not available with Windows 8 and Windows Server 2012/2012 R2, and an Intel CPU is required (AMD is not supported yet).**
+
+Unlike VMware, you must first create the virtual machine to configure nested virtualization. Indeed, under Hyper-V, the configuration of nested virtualization is a parameter to be applied to the virtual machine; it is not a global configuration of the hypervisor.
+
+#### XCP-ng virtual machine settings
+
+The configuration of the virtual machine uses legacy components. Indeed, XenServer / XCP-ng does not have the necessary drivers to work on "modern" Hyper-V virtual hardware. **The consequence is that the performance of this XCP-ng virtual machine will be poor.**
+
+The VM settings:
+* **VM Generation**: 1 (even if the latest versions of CentOS work in Gen 2)
+* **Memory**: 4GB minimum
+* **Disk Controller**: IDE
+* **Dynamic Memory**: Disabled (even if activated, the hypervisor will disable it in case of nested virtualization)
+* **Network Controller**: Legacy Network Card
+
+#### CPU and Network settings
+
+* Once the virtual machine is created, it is possible to enable nested virtualization for this virtual machine. Open a PowerShell Administrator prompt:
+
+ `Set-VMProcessor -VMName <vm-name> -ExposeVirtualizationExtensions $true`
+
+* Then, configure the network to allow guest virtual machines to access the outside network.
+
+ `Get-VMNetworkAdapter -VMName <vm-name> | Set-VMNetworkAdapter -MacAddressSpoofing On`
+
+ **Important: This setting has to be applied even if you use the default NAT switch (since Windows 10 1709).**
+
+* After these configurations, you should be able to manage this XCP-ng host from XCP-ng Center or from a Xen Orchestra instance.
+
+
+
+### Nested XCP-ng using QEMU/KVM
+
+_The following steps can be performed using QEMU/KVM on a Linux host, Proxmox or oVirt._
+
+Like VMware, you must first enable the nested virtualization feature on your host before creating your XCP-ng virtual machine.
+
+#### Configure KVM nested virtualization (Intel)
+
+* Check if your CPU supports virtualization and EPT (Intel)
+
+ On most Linux distributions:
+
+ `egrep -wo 'vmx|ept' /proc/cpuinfo`
+
+ EPT is required to run nested XS/XCP-ng.
+
+* If everything is OK, you can check whether nested virtualization is already activated.
+
+ `$ cat /sys/module/kvm_intel/parameters/nested`
+
+ If the command returns "Y", nested virtualization is activated; if not, you should activate it (next steps).
+
+* First, check that you don't have any virtual machines running. Then, unload the KVM module as root or using sudo:
+
+ `# modprobe -r kvm_intel`
+
+* Activate the nested virtualization feature:
+
+ `# modprobe kvm_intel nested=1`
+
+* Nested virtualization is now enabled, but only until the host is rebooted. To enable it permanently, add the following line to the `/etc/modprobe.d/kvm.conf` file:
+
+ `options kvm_intel nested=1`
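+
+Putting the Intel steps together, a minimal sketch (run as root, with no VMs running):
+
+```
+modprobe -r kvm_intel
+modprobe kvm_intel nested=1
+echo 'options kvm_intel nested=1' >> /etc/modprobe.d/kvm.conf
+cat /sys/module/kvm_intel/parameters/nested   # should now print Y (or 1 on newer kernels)
+```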
+
+#### Configure KVM nested virtualization (AMD)
+
+On recent kernels, when enabling AMD virtualization in the BIOS, it should enable nested virtualization without any further configuration. Verify that `cat /sys/module/kvm_amd/parameters/nested` returns `1`.
+
+#### XCP-ng virtual machine settings
+
+The configuration of the virtual machine will use mostly emulated components for disks and network. VirtIO drivers are not included in the XS/XCP-ng kernel.
+
+The VM settings:
+
+* **CPU configuration**: host-model or host-passthrough (mandatory; prefer host-passthrough)
+* **Boot loader**: use BIOS rather than UEFI. With UEFI, the installation may complete successfully, but the system may not boot up afterwards.
+* **Memory**: 4GB minimum / 8GB recommended
+* **Disk Controller**: LSI Logic SCSI
+* **Disk**: at least 50GiB
+* **Network**: E1000
+
+Finally, install/use XCP-ng!
+
+
+
+## VLAN Trunking in a VM
+
+This document will describe how to configure a VLAN trunk port for use in a VM running on XCP-ng. The typical use case for this is running your network's router as a VM when your network has multiple VLANs.
+
+With some help from others in the [forums](https://xcp-ng.org/forum/topic/729/how-to-connect-vlan-trunk-to-vm/11), I was able to get a satisfactory solution implemented using [pfSense](https://pfsense.org) and so this document will discuss how to implement this solution using pfSense as your router. In theory, the same solution should apply to other router solutions, but it is untested. Feel free to update this document with your results.
+
+### Two Approaches
+
+There are two approaches to VLANs in XCP-ng. The first is to create a virtual interface for each VLAN you want your router to route traffic for, then attach the virtual interface to your VM. The second is to pass through a trunk port from dom0 onto your router VM.
+
+#### Multiple VIFs
+
+By far, this is the easiest solution and perhaps the "officially supported" approach for XCP-ng. When you do this, dom0 handles all the VLAN tagging for you and each virtual interface is just presented to your router VM as a separate virtual network interface. It's like you have a bunch of separate network cards installed on your router where each represents a different VLAN and is essentially attached to a VLAN access (untagged) port on your switch. There is nothing special for you to do, this _just works_. If you require seven virtual interfaces or fewer for your router then this might be the easiest approach.
+
+The problem with this approach is when you have many VLANs you want to configure on your router. If you read through the thread I linked to at the top of this page you'll notice the discussion about how I was unable to attach more than 7 virtual interfaces to my pfSense VM. Neither XO nor XCP-ng Center allows you to attach more than seven. This appears to be some kind of limit somewhere in Xen. Other users have been able to attach more than 7 virtual interfaces via CLI, however when I tried to do this myself my pfSense VM became unresponsive once I added the 8th virtual interface. More details on that problem are discussed in the thread.
+
+Another problem with this approach, perhaps only specific to pfSense, is that when you attach many virtual interfaces, you must disable TX offloading on each and every virtual interface, otherwise you'll have all kinds of problems. This was definitely a red flag for me. Initially I'm starting with 7 VLANs and 9 networks total, with short-term requirements for at least another 3 VLANs for sure and then who knows how many down the road. In this approach, every time you have to create a new VLAN by adding a virtual interface to the VM, you will have to reboot the VM.
+
+Having to reboot my network's router every time I need to create a new VLAN is not ideal for the environment I'm working in; especially because in the current production environment running VMware, we do not need to reboot the router VM to create new VLANs. (FWIW, I've come to XCP-ng as the IT department has asked me to investigate possibly replacing our VMware env with XCP-ng. I started my adventures with XCP-ng by diving in head first at home and replacing my home environment, previously ESXi, with XCP-ng. Now I'm in the early phases of working with XCP-ng in the test lab at work.)
+
+In conclusion, if you have seven or fewer virtual interfaces and you're fairly confident that you'll never need to exceed seven virtual interfaces, then this approach is probably the path of least resistance. On the other hand, if you know you'll need more than seven (or are fairly certain you will some day), or you're in an environment where you need to be able to create VLANs on the fly, then you'll probably want to proceed with the alternative below.
+
+This document is about the alternative approach, but a quick summary of how this solution works in xcp-ng:
+* Make sure the physical interface connected to your XCP-ng server is carrying all the required tagged VLANs
+* Within XO or XCP-ng Center, create multiple networks off of the physical interface, adding the VLAN tag as needed for each VLAN
+* For each VLAN you want your router to route for, add a virtual interface for that specific VLAN to the VM
+* For pfSense, disable TX offloading for each virtual interface added and reboot the VM. This [page](https://github.com/xcp-ng/xcp/wiki/pfSense-in-a-VM) will fully explain all of the config changes required when running pfSense in XCP-ng.
+
+### Adding VLAN Trunk to VM
+
+The alternative approach involves attaching the VLAN trunk port directly to your router VM, and handling the VLANs in pfSense directly. This has the biggest advantage of not requiring a VM reboot each time you need to set up a new VLAN. Note, however, that you will need to manually edit a configuration file in pfSense every time it is upgraded. The physical interface you are using to trunk VLANs into the pfSense VM should also not be the same physical interface that your XCP-ng management interface is on. This is because one of the steps required is setting the physical interface MTU to 1504, and this will potentially cause MTU mismatches if Xen is using this same physical interface for management traffic (1504-byte sized packets being sent from the Xen management interface to your MTU 1500 network).
+
+The problem we face with this solution is that, at least in pfSense, the `xn` driver used for paravirtualization in FreeBSD does not support 802.1q tagging. So we have to account for this ourselves both in dom0 and in the pfSense VM. Once you're aware of this limitation, it actually isn't a big deal to get it all working, but it just never occurred to me that a presumably relatively modern network driver would not support 802.1q.
+
+Anyway, the first step is to modify the MTU setting of the physical interface that is carrying your tagged VLANs into the XCP-ng server from 1500 to 1504. The extra 4 bytes is, of course, the size of the VLAN tagging within each frame. **Warning:** You're going to have to detach or shut down any VMs that are currently using this interface. For this example, let's say it's `eth1` that is the physical interface carrying all our tagged traffic.
+
+
+1. List all your networks
+```
+xe network-list
+```
+2. Set MTU on the relevant network(s)
+```
+xe network-param-set uuid=xxx MTU=1504
+```
+3. Reboot your XCP-ng host to apply the MTU change on the physical network cards
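+
+For example, if `eth1` corresponds to bridge `xenbr1` (an assumption; verify with `xe pif-list`), steps 1 and 2 might look like this minimal sketch:
+
+```
+UUID=$(xe network-list bridge=xenbr1 --minimal)
+xe network-param-set uuid="$UUID" MTU=1504
+```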
+
+
+Once this is done, attach a new virtual interface to your pfSense VM and select `eth1` as the network. This will attach the VLAN trunk to pfSense. Boot up pfSense and disable TX offloading, etc. on the virtual interface, reboot as necessary, then log in to pfSense.
+
+Configure the interface within pfSense by also increasing the MTU value to 1504. Again, the `xn` driver does not support VLAN tagging, so we have to deal with it ourselves. **NOTE:** You increase the MTU on the **parent interface** only, in both XCP-ng **and** pfSense. The MTU for VLANs will always be 1500.
+
+Finally, along the same lines, since the `xn` driver does not support 802.1q, pfSense will not allow you to create VLANs on any interface using the `xn` driver. We have to modify pfSense to allow us to do this.
+
+From a shell in pfSense, edit `/etc/inc/interfaces.inc` and modify the `is_jumbo_capable` function at around line 6761. Edit it so it reads like so:
+
+```
+function is_jumbo_capable($iface) {
+ $iface = trim($iface);
+ $capable = pfSense_get_interface_addresses($iface);
+
+ if (isset($capable['caps']['vlanmtu'])) {
+ return true;
+ }
+
+ // hack for some lagg modes missing vlanmtu, but work fine w/VLANs
+ if (substr($iface, 0, 4) == "lagg") {
+ return true;
+ }
+
+ // hack for Xen xn interfaces
+ if (substr($iface, 0, 2) == "xn")
+ return true;
+
+ return false;
+}
+```
+:::tip
+This modification is based on pfSense 2.4.4p1, your mileage may vary. However, I copied this mod from [here](https://eliasmoraispereira.wordpress.com/2016/10/05/pfsense-virtualizacao-com-xenserver-criando-vlans/), which was based on pfSense 2.3.x, so this code doesn't change often.
+:::
+
+Keep in mind that you will need to reapply this mod anytime you upgrade pfSense.
+
+That's it, you're good to go! Go to *Interfaces > Assignments* in pfSense, select the VLANs tab and create your VLANs. Everything should work as expected.
+
+### Links/References
+
+* [Forums: My initial question and discussion about VLAN trunk support](https://xcp-ng.org/forum/topic/729/how-to-connect-vlan-trunk-to-vm)
+* [pfSense interface does not support VLANs](https://forum.netgate.com/topic/112359/xenserver-vlan-doesn-t-supporting-eth-device-for-vlan)
+* [pfSense: Adding VLAN support for Xen `xn` interfaces](https://eliasmoraispereira.wordpress.com/2016/10/05/pfsense-virtualizacao-com-xenserver-criando-vlans/)
+
+## TLS certificate for XCP-ng
+
+After installing XCP-ng, access to XAPI via XCP-ng Center or XenOrchestra is protected by TLS with a [self-signed certificate](https://en.wikipedia.org/wiki/Self-signed_certificate): this means that you have to either verify the certificate signature before allowing the connection (comparing it against the signature shown on the console of the server), or work on a trust-on-first-use basis (i.e. assume that the first time you connect to the server, nobody is tampering with the connection).
+
+If you would like to replace this certificate by a valid one, either from an internal Certificate Authority or from a public one, you'll find here some indications on how to do that.
+
+Note that if you use a non-public certificate authority and XenOrchestra, you have [additional configuration to specify on the XenOrchestra side](https://xen-orchestra.com/docs/configuration.html#custom-certificate-authority).
+
+:::warning
+This indication is valid for XCP-ng up to v8.1. Version 8.2 is expected to improve deployment of new certificates, like [Citrix did for XenServer 8.2](https://docs.citrix.com/en-us/citrix-hypervisor/hosts-pools.html#install-a-tls-certificate-on-your-server).
+:::
+
+### Generate certificate signing request
+
+You can use the auto-generated key to create a certificate signing request:
+
+```
+openssl req -new -key /etc/xensource/xapi-ssl.pem -subj '/CN=XCP-ng hypervisor/' -out xcp-ng.csr
+```
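+
+You can dump the request before sending it to your certificate authority, as a quick sanity check:
+
+```
+openssl req -in xcp-ng.csr -noout -text
+```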
+
+### Install the certificate chain
+
+The certificate, intermediate certificates (if needed), certificate authority and private key are stored in `/etc/xensource/xapi-ssl.pem`, in that order. You have to replace all lines before `-----BEGIN RSA PRIVATE KEY-----` with the certificate and the chain you got from your provider, using your favorite editor (`nano` is present on XCP-ng by default).
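+
+Before restarting anything, it can be worth checking that the first certificate in the file is the one you expect; a minimal sketch:
+
+```
+openssl x509 -in /etc/xensource/xapi-ssl.pem -noout -subject -dates
+```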
+
+Then, you have to restart XAPI:
+```
+systemctl restart xapi
+```
+
+## Dom0 memory
+
+:::tip
+Dom0 is another name for the *privileged domain*, also known as the *Control Domain*.
+:::
+
+Issues can arise when the control domain is lacking memory; that's why we advise being generous with it whenever possible. Default values from the installer may be too low for your setup. In general it depends on the number of VMs and their workload. If constraints do not allow you to follow the advice below, you can try to set lower values.
+
+In any case:
+* monitor RAM usage in the control domain
+* if issues arise (failed live migration for example), [look at the logs](troubleshooting.md#log-files) for messages related to lack of memory
+
+### Recommended values
+
+* we advise giving at least 2GiB of RAM to Dom0. Below that, your XCP-ng may experience performance issues or other weird errors.
+* for a machine with up to 64GiB of RAM, give at least 4GiB of RAM to Dom0
+* a host with 128GiB or more should use 8GiB of RAM for Dom0
+
+:::warning
+Note: If you use ZFS, assign at least 16GB RAM to avoid swapping. ZFS (in standard configuration) uses half the Dom0 RAM as cache!
+:::
+
+### Current RAM usage
+
+You can use `htop` to see how much RAM is currently used in the dom0. Alternatively, you can use Netdata to see past values.
+
+### Change dom0 memory
+
+Example with 4 GiB:
+
+`/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=4096M,max:4096M`
+
+Do not mix up the units, and make sure to set the same value for both the base value and the max value.
+
+Reboot to apply.
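+
+After the reboot, you can check that the new value took effect; a minimal sketch (assuming `--get-xen` is available in your version of the `xen-cmdline` helper):
+
+```
+/opt/xensource/libexec/xen-cmdline --get-xen dom0_mem   # should print dom0_mem=4096M,max:4096M
+free -m                                                 # total memory seen in dom0 should be close to 4GiB
+```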
+
+## Autostart VM on boot
+
+A VM can be started at XCP-ng boot itself; this is called **Auto power on**. There are two ways to configure it: using Xen Orchestra or via the CLI.
+
+### With Xen Orchestra
+
+In Xen Orchestra we can just enable a toggle in VM "Advanced" view, called **Auto power on**. Everything will be set accordingly.
+
+
+
+
+### With the CLI
+
+1. Determine the UUID of the pool for which we want to enable Auto Start. To do this, run this command on the server's console:
+
+```
+# xe pool-list
+uuid ( RO) : <pool-uuid>
+```
+
+2. Allow autostart of virtual machines at the pool level, using the UUID you found:
+`# xe pool-param-set uuid=<pool-uuid> other-config:auto_poweron=true`
+
+Now we enable autostart at the virtual machine level.
+3. Execute the command to get the UUID of the virtual machine:
+
+```
+# xe vm-list
+ uuid ( RO) : <vm-uuid>
+ name-label ( RW) : VM
+ power-state ( RO) : running
+```
+
+4. Enable autostart for each virtual machine with the UUID found:
+`# xe vm-param-set uuid=<vm-uuid> other-config:auto_poweron=true`
+
+5. Check the output:
+`# xe vm-param-list uuid=<vm-uuid> | grep other-config`
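+
+If you have many VMs, the same commands can be scripted; a minimal sketch (it skips the control domain):
+
+```
+xe pool-param-set uuid=$(xe pool-list --minimal) other-config:auto_poweron=true
+for VM in $(xe vm-list is-control-domain=false --minimal | tr ',' ' '); do
+    xe vm-param-set uuid="$VM" other-config:auto_poweron=true
+done
+```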
+
+
+## Software RAID Storage Repository
+
+XCP-ng has support for creating a software RAID for the operating system but it is limited to RAID level 1 (mirrored drives) and by the size of the drives used. It is strictly intended for hardware redundancy and doesn't provide any additional storage beyond what a single drive provides.
+
+These instructions describe how to add more storage to XCP-ng using software RAID and show measures that need to be taken to avoid problems that may happen when booting. You should read through these instructions at least once to become familiar with them before proceeding and to evaluate whether the process fits your needs. Look at the "Troubleshooting" section of these instructions to get some idea of the kinds of problems that can happen.
+
+An example installation is described below using a newly installed XCP-ng software RAID system. This covers only one specific possibility for software RAID. See the "More and Different" section of these instructions to see other possibilities.
+
+In addition, the example presented below is a fresh installation and not being installed onto a production system. The changes described in the instructions can be applied to a production system but, as with any system changes, there is always a risk of something going badly and having some data loss. If performing this on a production system, make sure that there are good backups of all VMs and other data on the system that can be restored to this system or even a different one in case of problems.
+
+These instructions assume you are starting with a server already installed with software RAID and have no other storage repositories defined except what may be on the existing RAID.
+
+### Example System
+
+The example system we're demonstrating here is a small server using 5 identical 1TB hard drives. XCP-ng has already been installed in a software RAID configuration using 2 of the 5 drives. There is already a default "Local storage" repository configured as part of the XCP-ng setup on the existing RAID 1 drive pair.
+
+Before starting the installation, all partitions were removed from the drives and the drives were overwritten with zeroes.
+
+So before starting out, here is an overview of the already configured system:
+
+```
+[09:51 XCP-ng ~]# cat /proc/partitions
+major minor #blocks name
+
+ 8 0 976762584 sda
+ 8 16 976762584 sdb
+ 8 32 976762584 sdc
+ 8 64 976762584 sde
+ 8 48 976762584 sdd
+ 9 127 976762432 md127
+ 259 0 18874368 md127p1
+ 259 1 18874368 md127p2
+ 259 2 933245487 md127p3
+ 259 3 524288 md127p4
+ 259 4 4194304 md127p5
+ 259 5 1048576 md127p6
+ 252 0 933232640 dm-0
+
+[09:51 XCP-ng ~]# cat /proc/mdstat
+Personalities : [raid1]
+md127 : active raid1 sdb[1] sda[0]
+ 976762432 blocks super 1.0 [2/2] [UU]
+ bitmap: 1/8 pages [4KB], 65536KB chunk
+
+unused devices: <none>
+```
+
+The 5 drives are in place as `sda` through `sde` and as can be seen from the list are exactly the same size. The RAID 1 drive pair is set up as the XCP-ng default of a partitioned RAID 1 array `md127` using drives `sda` and `sdb` and is in a healthy state.
+
+### Building the Second RAID
+
+We have 3 remaining identical drives, `sdc`, `sdd`, and `sde`, and we're going to create a RAID 5 array using them in order to maximize the amount of space. We'll create this using the `mdadm` command like this:
+
+```
+[10:02 XCP-ng ~]# mdadm --create /dev/md0 --run --level=5 --bitmap=internal --assume-clean --raid-devices=3 --metadata=1.2 /dev/sdc /dev/sdd /dev/sde
+mdadm: array /dev/md0 started.
+```
+
+Here, we've made sure to use the `assume-clean` and `metadata=1.2` options. The `assume-clean` option prevents the RAID assembly process from initializing the content of the parity blocks on the drives which saves a lot of time when assembling the RAID for the first time.
+
+The `metadata=1.2` option forces the RAID array metadata to a position close to the beginning of each drive in the array. This is most important for RAID 1 arrays but useful for others and prevents the component drives of the RAID array from being confused for separate individual drives by any process that tries to examine the drives for automatic mounting or other use.
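+
+You can also ask `mdadm` directly for the new array's geometry and health; a quick check:
+
+```
+mdadm --detail /dev/md0
+```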
+
+Checking the status of the drives in the system should show the newly added RAID array.
+
+```
+[10:20 XCP-ng ~]# cat /proc/partitions
+major minor #blocks name
+
+ 8 0 976762584 sda
+ 8 16 976762584 sdb
+ 8 32 976762584 sdc
+ 8 64 976762584 sde
+ 8 48 976762584 sdd
+ 9 127 976762432 md127
+ 259 0 18874368 md127p1
+ 259 1 18874368 md127p2
+ 259 2 933245487 md127p3
+ 259 3 524288 md127p4
+ 259 4 4194304 md127p5
+ 259 5 1048576 md127p6
+ 252 0 933232640 dm-0
+ 9 0 1953260544 md0
+[10:39 XCP-ng ~]# cat /proc/mdstat
+Personalities : [raid1] [raid6] [raid5] [raid4]
+md0 : active raid5 sde[2] sdd[1] sdc[0]
+ 1953260544 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
+ bitmap: 0/8 pages [0KB], 65536KB chunk
+
+md127 : active raid1 sdb[1] sda[0]
+ 976762432 blocks super 1.0 [2/2] [UU]
+ bitmap: 1/8 pages [4KB], 65536KB chunk
+
+unused devices: <none>
+```
+
+Here we can see that the new RAID 5 array is in place as array `md0`, is using drives `sdc`, `sdd`, and `sde` and is healthy. As expected for a 3 drive RAID 5 array, it is providing about twice as much available space as a single drive.
+
+### Building the Storage Repository
+
+Now we create a new storage repository on the new RAID array like this:
+
+```
+[11:21 XCP-ng ~]# xe sr-create name-label="RAID storage" type=ext device-config:device=/dev/md0 shared=false content-type=user
+2acc2807-1c44-a757-0b79-3834dbcf1a79
+```
+
+What we have now is a second storage repository named "RAID storage" using thin-provisioned EXT filesystem storage. It will show up and can be used within Xen Orchestra or XCP-ng Center and should behave like any other storage repository.
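+
+To confirm the repository was registered, you can look it up again; a quick check (the name-label matches the one used above):
+
+```
+xe sr-list name-label="RAID storage"
+```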
+
+At this point, we'd expect that the system could just be used as is, virtual machines stored in the new RAID storage repository and that we can normally shut down and restart the system and expect things to work smoothly.
+
+Unfortunately, we'd be wrong.
+
+### Unstable RAID Arrays When Booting
+
+What really happens when XCP-ng boots with a software RAID is that code in the Linux kernel and in the initrd file will attempt to find and automatically assemble any RAID arrays in the system. When there is just the single `md127` RAID 1 array, the process works pretty well. Unfortunately, the system seems to occasionally break down when there are more drives, more arrays, and more complex arrays.
+
+This causes several problems, mainly the system not correctly finding and adding all component drives to each array, or not starting arrays which do not have all components added but could otherwise start successfully.
+
+A good example here would be the `md0` RAID 5 array we just created. Rebooting the system in the state it is in now will often or even usually work without problems. The system will find both drives of the `md127` RAID 1 boot array and all three drives of the `md0` RAID 5 storage array, assemble the arrays and start them running.
+
+Sometimes, though, the system either does not find all of the parts of the RAID, does not assemble them correctly, or does not start the array. When that happens, the `md0` storage array will not start, and the `/proc/mdstat` array status will show the array as missing one or two of the three drives, or will show all three drives but not show them as running. Another common problem is that the array is assembled with enough drives to run, two out of three drives in our case, but does not start. This can also happen if the array has a failed drive at boot, even if there are enough remaining drives to start and run the array.
+
+This can also happen to the `md127` boot array, which may come up with only one of its two drives in place and running. If it does not start and run at all, we will fail to get a normal boot and will likely be dropped into an emergency shell instead of the normal boot process. This is usually not consistent, and another reboot will start the system. It can even happen when the boot RAID is the only RAID array in the system, but fortunately that is rare.
+
+So what can we do about this? Fortunately, we can give the system more information about what RAID arrays are in the system and specify that they should be started up at boot.
+
+### Stabilizing the RAID Boot Configuration: The mdadm.conf File
+
+The first thing we need to do is give the system more information on what RAID arrays exist and how they're put together. The way to do this is by creating a RAID configuration file, `/etc/mdadm.conf`.
+
+The `mdadm.conf` file created for this system is here:
+
+```
+[13:02 XCP-ng ~]# cat /etc/mdadm.conf
+AUTO +all
+MAILADDR root
+DEVICE /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
+ARRAY /dev/md0 metadata=1.2 UUID=53461f34:2414371e:820f9514:008b6458
+ARRAY /dev/md127 metadata=1.0 UUID=09871a29:26fa7ce1:0c9b040a:60f5cabf
+```
+
+Each system and array will have different UUID identifiers, so the ones here are specific to this example and will not work on another system. For each system, we'll need a way to get the UUIDs to include in the `mdadm.conf` file. The best way is to use the `mdadm` command itself while the arrays are running, like this:
+
+```
+[13:06 XCP-ng ~]# mdadm --examine --scan
+ARRAY /dev/md/0 metadata=1.2 UUID=53461f34:2414371e:820f9514:008b6458 name=XCP-ng:0
+ARRAY /dev/md/127 metadata=1.0 UUID=09871a29:26fa7ce1:0c9b040a:60f5cabf name=localhost:127
+```
+
+Notice that this is output in almost exactly the same format as shown in the `mdadm.conf` file above. The UUID numbers are important and we'll need them again later.
+
+If we don't want to type in the entire file, we can create it like this:
+
+```
+echo 'AUTO +all' > /etc/mdadm.conf
+echo 'MAILADDR root' >> /etc/mdadm.conf
+echo 'DEVICE /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde' >> /etc/mdadm.conf
+mdadm --examine --scan >> /etc/mdadm.conf
+```
+
+And then edit the file to change the format of the array names from `/dev/md/0` to `/dev/md0` and remove the `name=` parameters from each line. This isn't strictly necessary, but it keeps the array names in the file consistent with what is reported in `/proc/mdstat` and `/proc/partitions` and avoids giving each array another name (in our case those names would be `localhost:127` and `XCP-ng:0`).
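+
+If we'd rather not edit by hand, a small `sed` sketch can make both changes in place (assuming GNU sed, as shipped in dom0; double-check the result afterwards):
+
+```
+# Rename /dev/md/N to /dev/mdN and strip the trailing name= parameter
+sed -i -e 's|/dev/md/|/dev/md|' -e 's/ name=.*$//' /etc/mdadm.conf
+```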
+
+So what do these lines do? The first line instructs the system to allow or attempt automatic assembly of all arrays. The second tells `mdadm` to report errors by email to the root user. The third lists all drives in the system participating in RAID arrays; the drives don't all need to be on a single `DEVICE` line, they can be split among multiple lines, and we could even have one `DEVICE` line per drive. The last two lines describe each array in the system.
+
+This file gives the system a description of which arrays are configured and which drives make them up, but it doesn't specify what to do with them. The system should be able to use this information at boot for automatic assembly of the arrays. Booting with the `mdadm.conf` file in place is more reliable, but still runs into the same problems as before.
+
+### Stabilizing the RAID Boot Configuration: The initrd Configuration
+
+The other thing we need to do is give the system some idea of what to do with the RAID arrays at boot time. The way to do this is by adding instructions for `dracut`, the program that creates the initrd file, telling it to enable all RAID support, use the `mdadm.conf` file we created, and start the arrays at boot time.
+
+We could pass additional command line parameters to the `dracut` command to ensure that the kernel RAID modules are loaded, the `mdadm.conf` file is used, and so on, but there are a lot of them, and we would have to specify them manually every time an initrd file is built or rebuilt. Any time `dracut` is run another way, such as automatically as part of a kernel update, the manually specified changes would be lost. A better approach is to create a list of parameters that `dracut` uses automatically every time it creates a new initrd file.
+
+The `dracut` command keeps its configuration in the file `/etc/dracut.conf`, which is read every time the command runs. We could make changes in that file, but that comes with its own problems: there is no good way to prevent other changes to the file, such as an update affecting `dracut`, from replacing our added commands.
+
+Instead of changing the main configuration file, we can put our added commands in their own file in the folder `/etc/dracut.conf.d/`. Any file with commands in that folder is read and used by `dracut` when creating a new initrd file. XCP-ng already ships several files in that folder that affect how the initrd file is created, but we should avoid changing those for the same reasons we avoid changing the main configuration file. Keeping our configuration changes in their own file ensures they won't be lost or overwritten, and the file will be used every time `dracut` creates a new initrd file, whether run manually at the command line or automatically by an update.
+
+We create a new file `dracut_mdraid.conf` in that folder that looks like this:
+
+```
+[14:11 XCP-ng ~]# cat /etc/dracut.conf.d/dracut_mdraid.conf
+mdadmconf="yes"
+use_fstab="yes"
+add_dracutmodules+=" mdraid "
+add_drivers+=" md_mod raid0 raid1 raid456 raid10 "
+add_device+=" /dev/md0 "
+add_device+=" /dev/md127 "
+kernel_cmdline+=" rd.auto=1 "
+kernel_cmdline+=" rd.md=1 "
+kernel_cmdline+=" rd.md.conf=1 "
+kernel_cmdline+=" rd.md.uuid=53461f34:2414371e:820f9514:008b6458 "
+kernel_cmdline+=" rd.md.uuid=09871a29:26fa7ce1:0c9b040a:60f5cabf "
+```
+
+This file contains two sets of instructions for `dracut`: one set that affects how the initrd file is built and what is done at boot, and another that is passed to the Linux kernel at boot.
+
+The first set instructs `dracut` to consider the `mdadm.conf` file we created earlier and also to include a copy of it in the initrd file, add `dracut` support for mdraid, include the kernel modules for mdraid support, and specifically support the two RAID devices by name.
+
+The second set instructs the booting Linux kernel to support automatic RAID assembly, support mdraid and the mdraid configuration and also to search for and start the two RAID arrays via their UUID identifiers. These are the same UUID identifiers that we included in the `mdadm.conf` file and, like the UUID identifiers there, are specific to each array and system.
+
+Something to note when creating the file is the extra space between command line parameters: most of the lines have extra spaces before and after the parameters within the quotes. Since `dracut` concatenates the values of repeated `+=` lines, the spaces keep parameters from running together.
+
+### Building and Testing the New initrd File
+
+Now that we have all of this extra configuration, we need to get the system to include it for use at boot. To do that we use the `dracut` command to create a new initrd file like this:
+
+```
+dracut --force -M /boot/initrd-$(uname -r).img $(uname -r)
+```
+
+This creates a new initrd file whose name matches the running Linux kernel, and prints a list of the modules included in it. Printing the list isn't necessary but is handy to see that `dracut` is making progress as it runs.
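+
+Before rebooting, we can double-check that the new initrd actually picked up our configuration. The `lsinitrd` tool that ships with `dracut` lists the initrd contents; a quick sketch:
+
+```
+lsinitrd /boot/initrd-$(uname -r).img | grep -E 'mdadm.conf|mdraid'
+```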
+
+When the system returns to the command line, it's time to test. We'll reboot the system from the console or from within Xen Orchestra or XCP-ng Center. If all goes well, the system should boot normally and correctly find and assemble all 5 drives into the two RAID arrays. The easiest way to tell is to look at the `/proc/mdstat` file.
+
+```
+[14:36 XCP-ng ~]# cat /proc/mdstat
+Personalities : [raid1] [raid6] [raid5] [raid4]
+md127 : active raid1 sda[0] sdb[1]
+ 976762432 blocks super 1.0 [2/2] [UU]
+ bitmap: 1/8 pages [4KB], 65536KB chunk
+
+md0 : active raid5 sdc[0] sde[2] sdd[1]
+ 1953260544 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
+ bitmap: 3/8 pages [12KB], 65536KB chunk
+
+unused devices:
+```
+
+We can see that both arrays are active and healthy with all drives accounted for. Examining the storage repositories using Xen Orchestra, XCP-ng Center, or `xe` commands shows that both the Local storage and RAID storage repositories are available.
+
+### Troubleshooting
+
+The most common problems in this process stem from one of a few things.
+
+One common cause of problems is using the wrong type of drives. Just like when installing XCP-ng, it is important to use drives that either have or emulate 512-byte disk sectors. Drives that use 4K-byte disk blocks will not work unless they are 512e drives, which emulate 512-byte sectors. It is generally not a good idea to mix drive types, such as one 512n (native 512-byte sectors) drive and two 512e drives, but it should be possible in an emergency.
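+
+To check what sector sizes a drive reports, we can query sysfs; a quick sketch (a 512e drive reports a logical size of 512 and a physical size of 4096):
+
+```
+cat /sys/block/sdc/queue/logical_block_size
+cat /sys/block/sdc/queue/physical_block_size
+```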
+
+The second is that the drives were not empty before being added to the system. Any traces of old RAID configuration or filesystems on the drives can interfere with the new configuration when creating the RAID array or the EXT filesystem (or LVM, if you use that for the storage array).
+
+The way to avoid this problem is to make sure the drives are thoroughly wiped before starting the process. This can be done from the command line with the `dd` command like this:
+
+```
+dd if=/dev/zero of=/dev/sde bs=1M
+```
+
+This writes zeroes to every block on the drive and will wipe any traces of previous filesystems or RAID configurations.
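+
+Zeroing a whole drive can take hours. If that is too slow, clearing just the metadata areas is often enough; a sketch using tools present on a standard XCP-ng host:
+
+```
+# Remove any old RAID superblock, then clear remaining filesystem/partition signatures
+mdadm --zero-superblock /dev/sde
+wipefs -a /dev/sde
+```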
+
+Sometimes only one drive has a problem when assembling the RAID and we'll see a working RAID with one drive missing. We'll assume that our md0 RAID was assembled correctly except that it is missing drive `/dev/sde`. In that case, it should be possible to add the missing drive into the array like this:
+
+```
+mdadm --add /dev/md0 /dev/sde
+```
+
+If the drive is added to the RAID array correctly, we should start to see a lot of disk activity and we should be able to monitor the progress of it by looking at the `/proc/mdstat` file.
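+
+A convenient way to follow the rebuild is to refresh that status automatically, for example:
+
+```
+watch -n 5 cat /proc/mdstat
+```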
+
+If the drive cannot be added to the array because of something left over on it, we should get an error from `mdadm` indicating the problem. In that case we should be able to use the `dd` command to wipe the one drive as above and then attempt to add it to the array again.
+
+The other possibility is that the RAID array is created correctly but XCP-ng will not create a storage repository on it because some previous content of the drives is causing a problem. It should be possible to recover from this by writing zeroes to the entire array without needing to rebuild it like this:
+
+```
+dd if=/dev/zero of=/dev/md0 bs=1M
+```
+
+After the (probably very lengthy) process of zeroing out the array, it should be possible to try creating a storage repository on the RAID array again.
+
+Another common cause of trouble is a problem with either the `mdadm.conf` or `dracut_mdraid.conf` file. Often when one of those files has a problem, the system will boot but fail to assemble or start the RAID arrays. The boot RAID array will usually be found and assembled automatically, but other RAID arrays may not.
+
+The best thing to do in this case is to check over the contents of the `mdadm.conf` and `dracut_mdraid.conf` files. Look for mistyped or missing quotes in `dracut_mdraid.conf`, or missing spaces inside the quotes for those lines that have them. Look for incorrect or mistyped UUID identifiers for the RAID arrays in both files. The UUID identifiers should match those reported by `mdadm --examine --scan` and also match between the `mdadm.conf` file and the `dracut_mdraid.conf` file. If any errors are found and corrected, rebuild the initrd file using the `dracut` command.
+
+In an extreme case, it should even be possible to delete and re-create those files following the normal instructions and rebuild the initrd file again. It should also be possible, though slightly riskier, to remove the files, re-create the initrd file, reboot, and then re-create the files and the initrd again after rebooting.
+
+Another possible but rare problem is caused by drives that shift their identification from one boot to the next: a drive named `sdf` on one boot might appear as `sdc` on another. This is usually due to problems with the system BIOS or drivers, and can also be caused by hardware problems such as a drive taking wildly different amounts of time to start up from one boot to the next. It is also more common with some types of storage, such as NVMe storage.
+
+This type of problem is very difficult to diagnose and correct. It may be possible to resolve it using different BIOS or setup configurations in the host system or by updating BIOS or controller firmware.
+
+### More and Different
+
+So what if we don't have or don't want a system that's identical to the example we just built in these instructions? Here are some of the possible and normal variations of software RAID under XCP-ng.
+
+#### No preexisting XCP-ng RAID 1
+
+We might want to create a RAID storage repository even though XCP-ng was installed without software RAID where the operating system was installed to a single hard drive or some other device. This needs only minimal changes to the example configuration. Without a software RAID installed by XCP-ng, there will be no RAID 1 device `md127` holding the operating system. In this case, we build the storage RAID array normally, still calling it `md0` but omit any lines in the `mdadm.conf` and `dracut_mdraid.conf` which list `md127`. We would only include the lines in those files mentioning `md0` and its UUID and the devices used to create it.
+
+#### Different Sized and Shaped RAID Arrays
+
+We might want to create a RAID array with more or fewer drives or a different RAID level. Common types would be a 2-drive RAID 1 array or a 4-drive RAID 5, 6 or 10 array. Those cases are very easy to accommodate by changing the parameters when building the RAID array: alter the `--level=` or `--raid-devices=` parameters and the list of drives on the `mdadm --create` command line. The only other consideration is to make sure the drives used are accounted for in a `DEVICE` line in the `mdadm.conf` file.
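+
+As a sketch, a 4-drive RAID 10 array built from hypothetical drives `sdc` through `sdf` would only change the level, the device count and the device list:
+
+```
+mdadm --create /dev/md0 --run --level=10 --bitmap=internal --assume-clean --raid-devices=4 --metadata=1.2 /dev/sdc /dev/sdd /dev/sde /dev/sdf
+```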
+
+The number of drives in a specific level of RAID array can also affect the performance of the array. A good rule of thumb for RAID 5 or 6 arrays is to have a number of drives that is a power of two (2, 4, 8, etc.) plus the number of drives whose space is used for parity information: one for RAID 5 and two for RAID 6. The RAID 5 array we created in the example system meets that recommendation by having 3 drives; a 5-drive RAID 5 array, or 4- or 6-drive RAID 6 arrays, would as well. A good rule of thumb for RAID 10 arrays is to have an even number of drives, although for RAID 10 on Linux an even number of drives is not a requirement as it is on other types of systems. In addition, it may be possible to get better performance by creating a 2-drive RAID 10 array instead of a 2-drive RAID 1 array.
+
+#### Avoiding the RAID 5 and 6 "Write Hole"
+
+RAID 5 and 6 arrays have a problem known as the "write hole" affecting their consistency after a failure during a disk write, such as a crash or power failure. The problem happens when a chunk of RAID-protected data known as a stripe is changed on the array. To make the change, the operating system reads the stripe of data, changes the requested portion, recomputes the disk parity for RAID 5 or RAID 6, then rewrites the data to the disks. If a crash or power outage interrupts that process, some of the data written to disk will reflect the new content of the stripe while data on other disks will reflect the old content. The system may be able to detect that there is a problem by rereading the entire stripe and finding that the parity does not match, but it has no way to tell which portions of the stripe hold new data and which hold old data, so it cannot properly reconstruct the stripe after a crash.
+
+This problem can only happen if the system is interrupted during a write to a RAID and tends to be rare.
+
+Generally, the best way to mitigate the problem is by avoiding it. Use good quality server hardware with known stable hardware drivers to avoid possible sources of crashes. Having good power protection such as redundant power supplies and battery backup units and using software to automatically shut down in case of a power outage will limit possible power-related problems.
+
+If that is not enough, there are other methods to avoid the write hole by making it possible for the RAID system to recover after a crash while working around the write hole problem.
+
+For RAID 5 systems, one way to do this is a feature known as PPL (partial parity log), which changes the way data is written and recovery is performed on a RAID 5 array. Using this method comes at a cost of as much as 30 or 40 percent lower RAID write performance. To enable it while building the RAID, substitute `--consistency-policy=ppl` for `--bitmap=internal` when creating the array. It is also possible to change an existing RAID 5 array to use it with the command `mdadm --grow --bitmap=none --consistency-policy=ppl /dev/md0` (assuming the `md0` array created in our example system), and to change the array back with `mdadm --grow --bitmap=internal /dev/md0`.
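+
+For our example array, creating it with PPL from the start would look something like this:
+
+```
+mdadm --create /dev/md0 --run --level=5 --consistency-policy=ppl --assume-clean --raid-devices=3 --metadata=1.2 /dev/sdc /dev/sdd /dev/sde
+```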
+
+For RAID 6 systems, something different is needed. The way to close the write hole for RAID 6 is to use a separate device which acts as a combined disk write log (or journal) and write cache. For best performance the device should have better write performance than the drives in the array, preferably a fast SSD with good longevity. Going back to our example system and assuming an additional device `sdf`, we would substitute `--write-journal=/dev/sdf` for `--bitmap=internal` when creating the array. To avoid the journal drive becoming a single point of failure, a good practice might be to create a RAID 1 device from fast drives or SSDs and use that RAID device as the journal device. A write journal device may also be used for RAID 5 arrays.
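+
+As a sketch, creating our example array with a write journal on the hypothetical device `sdf` would look something like this:
+
+```
+mdadm --create /dev/md0 --run --level=5 --write-journal=/dev/sdf --assume-clean --raid-devices=3 --metadata=1.2 /dev/sdc /dev/sdd /dev/sde
+```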
+
+#### Different Sized Drives
+
+We might need to create a RAID array where our drives are not identical and each drive has a different number of available blocks. This might come up if we need to create a RAID array but have two of one type of drive and one of another such as two WD drives and one Seagate or two 1TB drives and one that is 1.5TB or 2TB.
+
+The easiest solution to creating a working RAID array in this situation is to partition the drives and create a RAID array using the partitions instead of using the entire drive.
+
+To do this, we get the sizes of the disks in the system by examining the `/proc/partitions` file. Starting with the smallest of the disks to be used in the array, use `gdisk` or `sgdisk` to create a single partition of type `fd00` (Linux RAID) using the maximum space available. Examine and record the size of the partition created and save the changes. Repeat the process with the remaining drives, except use the size of the partition created on the first drive instead of the maximum space available.
+
+This should leave you with drives that each have a single partition and all of the partitions are the same size even though the drives are not.
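+
+A sketch of that procedure with `sgdisk`, assuming `sdc` is the smallest drive (`END_SECTOR` is a placeholder for the end sector reported for the first partition):
+
+```
+# Smallest drive: one Linux RAID partition using all available space
+sgdisk -n 1:0:0 -t 1:fd00 /dev/sdc
+sgdisk -p /dev/sdc        # note the first partition's end sector
+
+# Remaining drives: same partition with an explicit end so all sizes match
+sgdisk -n 1:0:END_SECTOR -t 1:fd00 /dev/sdd
+sgdisk -n 1:0:END_SECTOR -t 1:fd00 /dev/sde
+```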
+
+When creating the RAID array and the `mdadm.conf` file, use the name of the disk partition instead of the name of the disk. In our example system, we would create the array using `/dev/sdc1`, `/dev/sdd1`, and `/dev/sde1` instead of `/dev/sdc`, `/dev/sdd`, and `/dev/sde` and also make the same substitutions on `DEVICE` lines in the `mdadm.conf` file.
+
+It should also be possible to create the partitions on the drives outside of the XCP-ng system using a bootable utility disk that contains partitioning utilities such as GParted.
+
+#### More Than One Additional Array
+
+We might want to create more than one extra RAID array and storage repository. This is also easy to accommodate, much like using a different number of drives in an array: we create another RAID array and another storage repository on a different set of drives by changing the parameters of the `mdadm --create` and `xe sr-create` command lines.
+
+As an example assume that we have 3 more drives `/dev/sdf`, `/dev/sdg`, and `/dev/sdh` and want to create a second RAID 5 array and another storage repository. We create another RAID 5 array, this time `md1` like this:
+
+```
+[16:45 XCP-ng ~]# mdadm --create /dev/md1 --run --level=5 --bitmap=internal --assume-clean --raid-devices=3 --metadata=1.2 /dev/sdf /dev/sdg /dev/sdh
+mdadm: array /dev/md1 started.
+```
+
+We then create another storage repository as we did previously making sure to give it a different name and use `/dev/md1` instead of `/dev/md0` in the command line.
+
+We also need to make sure that the `mdadm.conf` file has `DEVICE` lines containing the three drives `/dev/sdf`, `/dev/sdg`, and `/dev/sdh` and an `ARRAY` line containing `/dev/md1` and its UUID, in addition to the other drives and the arrays `md127` and `md0`. We also need to make sure that the `dracut_mdraid.conf` file contains a `kernel_cmdline+=` line specifying `rd.md.uuid=` with the UUID of the `md1` array, matching what is in the `mdadm.conf` file, in addition to the two similar lines already in that file.
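+
+As a sketch, the added lines would look like this, with `UUID-OF-MD1` standing in for the UUID that `mdadm --examine --scan` reports for the new array:
+
+```
+# Added to /etc/mdadm.conf
+DEVICE /dev/sdf /dev/sdg /dev/sdh
+ARRAY /dev/md1 metadata=1.2 UUID=UUID-OF-MD1
+
+# Added to /etc/dracut.conf.d/dracut_mdraid.conf
+add_device+=" /dev/md1 "
+kernel_cmdline+=" rd.md.uuid=UUID-OF-MD1 "
+```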
+
+It is important that each RAID array has a different name, as the system will not allow you to create a RAID array with the name of one that already exists. Normally, you would just continue with further RAID device names such as `md1`, `md2`, `md3`, etc. It is also important to use a different name for each storage repository, such as "RAID storage", "RAID storage 2" and so on.
diff --git a/docs/ha.md b/docs/ha.md
index ed517ca4..0843069b 100644
--- a/docs/ha.md
+++ b/docs/ha.md
@@ -108,7 +108,7 @@ After each test, **Minion 1** go back to **lab1** to start in the exact same con
#### Pull the power plug
-Now, we will decide to pull the plug for my host **lab1**: this is exactly where my VM currently runs. After some time (when XAPI detect and report the lost of the host, in general 2 minutes), we can see that **lab1** is reported as Halted. In the same time, the VM **Minion 1** is booted on the other host running, **lab 2**:
+Now, we will pull the plug on host **lab1**: this is exactly where the VM currently runs. After some time (when XAPI detects and reports the loss of the host, generally 2 minutes), we can see that **lab1** is reported as Halted. At the same time, the VM **Minion 1** is booted on the other running host, **lab2**:
If you decide to re-plug the host **lab1**, the host will be back online, without any VM on it, which is normal.
@@ -122,12 +122,12 @@ So? **Minion 1** lost access to its disks ad after some time, **lab1** saw it ca
The host could not join the liveset because the HA daemon could not access the heartbeat disk.
```
-Immediatly after fencing, **Minion 1** will be booted on the other host.
+Immediately after fencing, **Minion 1** will be booted on the other host.
:::tip
-**lab1** is not physically halted, you can access it through SSH. But from the XAPI point of view, it's dead. Now, let's try to re-plug the ethernet cable... and just wait! Everything will be back to normal!
+**lab1** is not physically halted; you can access it through SSH. But from the XAPI point of view, it's dead. Now, let's try to re-plug the Ethernet cable... and just wait! Everything will be back to normal!
:::
#### Pull the network cable
-Finally, the worst case: leaving the storage operational but "cut" the (management) network interface. Same procedure: unplug physically the cable, and wait... Because **lab1** can't contact any other host of the pool (in this case, **lab2**), it decides to start the fencing procedure. The result is exaclty the same as the previous test. It's gone for the pool master, displayed as "Halted" until we re-plug the cable.
\ No newline at end of file
+Finally, the worst case: leaving the storage operational but "cutting" the (management) network interface. Same procedure: physically unplug the cable, and wait... Because **lab1** can't contact any other host of the pool (in this case, **lab2**), it decides to start the fencing procedure. The result is exactly the same as in the previous test. It's gone for the pool master, displayed as "Halted" until we re-plug the cable.
diff --git a/docs/hardware.md b/docs/hardware.md
index b795ab8c..edf2751c 100644
--- a/docs/hardware.md
+++ b/docs/hardware.md
@@ -6,7 +6,7 @@ For other hardware, see [Unlisted Hardware](#unlisted-hardware).
## Unlisted Hardware
-Many devices outside the HCL in fact work very well with XCP-ng. Being outside the HCL means that there have been not tests to ensure that they work. Most of the hardware support depends on the Linux kernel and thus support for hardware outside the HCL depends on on how well the drivers are supported by the Linux kernel included in XCP-ng.
+Many devices outside the HCL in fact work very well with XCP-ng. Being outside the HCL means that there have been no tests to ensure that they work. Most hardware support depends on the Linux kernel, and thus support for hardware outside the HCL depends on how well the drivers are supported by the Linux kernel included in XCP-ng.
This section is a community-enriched list of pieces of hardware that do not belong to the HCL, along with information about how well they work (or not), workarounds, etc.
@@ -49,8 +49,7 @@ Known Issues (with old firmware; also on XenServer 7.2 with current firmware)
* Mid Term: Upgrade Firmware to match XCP-ng Driver version (for XCP-ng 7.5 -> 11.2.XXXXX)
* Long Term: Avoid Emulex cards!
-
-#### Broadcom Netxtreme II BCM57711E
+#### Broadcom NetXtreme II BCM57711E
(or BCM5709 or ...)
diff --git a/docs/install.md b/docs/install.md
index 21b4c47b..b65e23f2 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -190,7 +190,7 @@ PXE boot doesn't support tagged VLAN networks! Be sure to boot on a untagged net
### TFTP server configuration
-1. In your TFTP root directory (eg `/tftp`), create a folder named `xcp-ng`.
+1. In your TFTP root directory (e.g., `/tftp`), create a folder named `xcp-ng`.
2. Copy the `mboot.c32` and `pxelinux.0` files from the installation media to the TFTP root directory.
3. From the XCP-ng installation media, copy the files `install.img` (from the root directory), `vmlinuz`, and `xen.gz` (from the /boot directory) to the new `xcp-ng` directory on the TFTP server.
4. In the TFTP root directory, create a folder called `pxelinux.cfg`
@@ -212,7 +212,7 @@ label xcp-ng
If you want to make an installation in UEFI mode, you need to have a slightly different TFTP server configuration:
1. In your TFTP root folder, create a directory called `EFI/xcp-ng`
-2. Configure your DHCP serveur to provide `/EFI/xcp-ng/grubx64.efi` as the boot file
+2. Configure your DHCP server to provide `/EFI/xcp-ng/grubx64.efi` as the boot file
3. Create a `grub.cfg` as follow:
```
menuentry "XCP-ng Install (serial)" {
@@ -225,10 +225,10 @@ If you want to make an installation in UEFI mode, you need to have a slightly di
4. Copy this `grub.cfg` file to `EFI/xcp-ng` folder on the TFTP server
5. Get the following files from XCP-ng ISO: `grubx64.efi`, `install.img` (from the root directory), `vmlinuz`, and `xen.gz` (from the /boot directory) to the new EFI/xcp-ng directory on the TFTP server.
-On the FTP, NFS or HTTP serveur, get all the installation media content in there.
+On the FTP, NFS or HTTP server, put all the installation media content there.
:::tip
-When you do copy the installation files, **DO NOT FORGET** the `.treeinfo` file. Double check your webserver isn't blocking it (like Microsoft IIS does).
+When you do copy the installation files, **DO NOT FORGET** the `.treeinfo` file. Double check your web server isn't blocking it (like Microsoft IIS does).
:::
#### On the host
@@ -267,14 +267,14 @@ tree -L 1 /path/to/http-directory/
```
2. Boot the target machine.
-3. Press Ctrl-B to catch the iPXE menu. Use the chainload command to load grub.
+3. Press Ctrl-B to catch the iPXE menu. Use the `chain` command to load grub.
```
chain http://SERVER_IP/EFI/xenserver/grubx64.efi
```
:::tip
-Sometimes grub takes a very long time to load after displaying "Welcome to Grub". This can be fixed by compiling a new version of Grub with `grub-mkstandalone`.
+Sometimes grub takes a very long time to load after displaying "Welcome to Grub". This can be fixed by compiling a new version of Grub with `grub-mkstandalone`.
:::
4. Once the grub prompt loads, set the root to http and load the config file.
@@ -286,7 +286,7 @@ configfile /EFI/xenserver/grub.cfg
```
5. Select the "install" menu entry.
-6. Wait for grub to load the necessary binaries. This may take a minute. If you look at your http server log you should see something like:
+6. Wait for grub to load the necessary binaries. This may take a minute. If you look at your http server log you should see something like:
```
# (from python3 -m http.server path-to-directory 80)
@@ -312,7 +312,7 @@ label xcp-ng-auto
```
:::tip
-Any SYSLINUX configuration style file will be valid. [Find more on the syslinux website](https://wiki.syslinux.org/wiki/index.php?title=PXELINUX).
+Any SYSLINUX configuration style file will be valid. [Find more on the Syslinux website](https://wiki.syslinux.org/wiki/index.php?title=PXELINUX).
:::
### With UEFI
@@ -442,4 +442,3 @@ We **strongly** advise against installing on USB stick. XCP-ng writes a lot into
* XAPI: the XenServer API database is changing a lot. Hence writing a lot, and believe me, USB sticks aren't really happy with that on the long run. Note: XAPI DB is what keep tracks on all XCP-ng's "state", and it's replicated on each host (from the slave).
* Logs: XCP-ng keeps a LOT of debug logs. However, there is a workaround: use a remote syslog.
:::
-
diff --git a/docs/migratetoxcpng.md b/docs/migratetoxcpng.md
index e633212b..f87bc72e 100644
--- a/docs/migratetoxcpng.md
+++ b/docs/migratetoxcpng.md
@@ -31,7 +31,7 @@ If you have an error telling you that you don't have an default SR, please choos
This script is a bit old and not tested since while. If you have issues, feel free to report that!
:::
-## From Virtualbox
+## From VirtualBox
Export your VM in OVA format, and use Xen Orchestra to import it. If you have an issue on VM boot, check the [VMware](migratetoxcpng.md#fromvmware) section.
@@ -40,14 +40,14 @@ Export your VM in OVA format, and use Xen Orchestra to import it. If you have an
Using OVA export from VMware and then OVA import into Xen Orchestra is the preferred way.
:::tip
-Collect info about network cards used in windows VM (ipconfig /all) use same mac address(es) when creating interfaces in xcp-ng this step will help You skip windows activation if system was activated already.
+Collect info about the network cards used in the Windows VM (`ipconfig /all`) and use the same MAC address(es) when creating interfaces in XCP-ng. This will help you skip Windows activation if the system was already activated.
:::
Importing a VMware Linux VM, you may encounter an error similar to this on boot:
`dracut-initqueue[227]: Warning: /dev/mapper/ol-root does not exist`
-The fix for this is installing some xen drivers *before* exporting the VM from VMware:
+The fix for this is installing some Xen drivers *before* exporting the VM from VMware:
`dracut --add-drivers "xen-blkfront xen-netfront" --force`
@@ -56,15 +56,15 @@ The fix for this is installing some xen drivers *before* exporting the VM from V
## From Hyper-V
* Remove Hyper-V tools from every VM if installed.
-* Install an NFS Server somewhere. (You can also use Win-scp directly from Hyper-V and copy "$uuidger -r".vhd directly to storage and rescan after that)
+* Install an NFS server somewhere. (You can also use WinSCP directly from Hyper-V, copy `"$(uuidgen -r)".vhd` directly to storage and rescan after that.)
* Create an NFS share on that server.
* Mount the NFS share as a Storage Repository in XenCenter or XOA.
-* Make sure the hyper-v virtual disk is not fixed type, use hyper-v mgmt to convert to dynamic vhd if needed.
+* Make sure the Hyper-V virtual disk is not a fixed type; use Hyper-V management to convert it to a dynamic VHD file if needed.
* Copy the VHD file you want to import to the NFS share.
- -use **uuidgen -r** to generate uuid and use it to rename vhd file.
+  * Use `uuidgen -r` to generate a UUID and use it to rename the VHD file.
* Create a new VM in xcp-ng with no disks.
* Attach the VHD from the NFS share to your new VM.
-* Install Xenserver Tools.
+* Install XenServer Tools.
* If everything work well move virtual disk using XCP-ng center from temporary storage to dedicated storage on the fly, VM can be turned on and disk can be online.
:::tip
@@ -105,7 +105,7 @@ _Due the fact I have only server here, I have setup a "buffer" machine on my des
`vhd-util check -n myvm.vhd` should return `myvm.vhd is valid`
-* For each VM, create a VDI on Xen Orchestra with the virtual size of your VHD + 1GB (i.e the virtual size of myvm is 21GB, so I create a VDI with a size of 22GB).
+* For each VM, create a VDI on Xen Orchestra with the virtual size of your VHD file + 1GB (e.g., if the virtual size of `myvm` is 21GB, create a VDI with a size of 22GB).
* Get the UUID of the VDI (on Xen Orchestra or CLI) and use the CLI on the XCP-ng host to import the VHD content into the VDI :
diff --git a/docs/mirrors.md b/docs/mirrors.md
index 84fa06e8..35c3e599 100644
--- a/docs/mirrors.md
+++ b/docs/mirrors.md
@@ -2,7 +2,7 @@
Like a Linux distribution, XCP-ng's installation images and RPM repositories can be replicated by benevolent mirror providers.
-Starting with its 8.0 release, XCP-ng uses [mirrorbits](https://github.com/etix/mirrorbits) to redirect download requests to an appropriate mirror based on their update status and geographical position.
+Starting with its 8.0 release, XCP-ng uses [Mirrorbits](https://github.com/etix/mirrorbits) to redirect download requests to an appropriate mirror based on their update status and geographical position.
## Original mirror
@@ -114,7 +114,7 @@ pub 2048R/3FD3AC9E 2018-10-03 XCP-ng Key (XCP-ng Official Signing Key) but you can transpose those steps to a newer ISO.
diff --git a/docs/networking.md b/docs/networking.md
index 613deebd..187ef2c8 100644
--- a/docs/networking.md
+++ b/docs/networking.md
@@ -168,7 +168,7 @@ The issue should be fixed.
Sometimes you need to add extra routes to an XCP-ng host. It can be done manually with an `ip route add 10.88.0.0/14 via 10.88.113.193` (for example). But it won't persist after a reboot.
-To properly create persistent static routes, first create your xen network interface as usual. If you already have this network created previously, just get its UUID with an `xe network-list`. You're looking for the interface you have a management IP on typically, something like `xapi0` or `xapi1` for example. If you're not sure which one it is, you can run `ifconfig` and find the interface name that has the IP address this static route traffic will be exiting. Then get that interfaces UUID using the previous `xe network-list` command.
+To properly create persistent static routes, first create your Xen network interface as usual. If you already created this network previously, just get its UUID with `xe network-list`. You're typically looking for the interface you have a management IP on, something like `xapi0` or `xapi1` for example. If you're not sure which one it is, you can run `ifconfig` and find the interface name that has the IP address this static route traffic will be exiting. Then get that interface's UUID using the previous `xe network-list` command.
Now insert the UUID in the below example command. Also change the IPs to what you need, using the following format: `//gateway IP>`. For example, our previous `ip route add 10.88.0.0/14 via 10.88.113.193` will be translated into:
@@ -191,7 +191,7 @@ xe network-param-remove uuid= param-key=static-routes param-name=o
A toolstack restart is needed as before.
:::tip
-XAPI might not remove the already-installed route until the host is rebooted. If you need to remove it ASAP, you can use `ip route del 10.88.0.0/14 via 10.88.113.193`. Check that it's gone with `route -n`.
+XAPI might not remove the already-installed route until the host is rebooted. If you need to remove it ASAP, you can use `ip route del 10.88.0.0/14 via 10.88.113.193`. Check that it's gone with `route -n`.
:::
## Full mesh network
@@ -314,7 +314,7 @@ See .
Incorrect networking settings can cause loss of network connectivity. When there is no network connectivity, XCP-ng host can become inaccessible through Xen Orchestra or remote SSH. Emergency Network Reset provides a simple mechanism to recover and reset a host’s networking.
-The Emergency network reset feature is available from the CLI using the `xe-reset-networking` command, and within the Network and Management Interface section of xsconsole.
+The Emergency network reset feature is available from the CLI using the `xe-reset-networking` command, and within the Network and Management Interface section of `xsconsole`.
Incorrect settings that cause a loss of network connectivity include renaming network interfaces, creating bonds or VLANs, or mistakes when changing the management interface. For example, typing the wrong IP address. You may also want to run this utility in the following scenarios:
@@ -333,13 +333,13 @@ If the pool master requires a network reset, reset the network on the pool maste
#### Verifying the network reset
-After you specify the configuration mode to be used after the network reset, xsconsole and the CLI display settings that will be applied after host reboot. It is a final chance to modify before applying the emergency network reset command. After restart, the new network configuration can be verified in Xen Orchestra and xsconsole. In Xen Orchestra, with the host selected, select the Networking tab to see the new network configuration. The Network and Management Interface section in xsconsole display this information.
+After you specify the configuration mode to be used after the network reset, `xsconsole` and the CLI display the settings that will be applied after host reboot. It is a final chance to make modifications before applying the emergency network reset command. After restart, the new network configuration can be verified in Xen Orchestra and `xsconsole`. In Xen Orchestra, with the host selected, select the Networking tab to see the new network configuration. The Network and Management Interface section in `xsconsole` displays this information.
### SR-IOV
TO have SR-IOV enabled, you need:
-* SR-IOV / ASPM compatible mainboard
+* SR-IOV / ASPM compatible motherboard
* SR-IOV compatible CPU
* SR-IOV compatible network card
* SR-IOV compatible drivers for XCP-ng
@@ -351,7 +351,7 @@ You can't live migrate a VM with SR-IOV enabled. Use it only if you really need
#### Setup
* enable SR-IOV in your BIOS
-* enable ASPM (seem to be needed acording to https://www.juniper.net/documentation/en_US/contrail3.1/topics/concept/sriov-with-vrouter-vnc.html and https://www.supermicro.com/support/faqs/faq.cfm?faq=26448)
+* enable ASPM (seems to be needed according to https://www.juniper.net/documentation/en_US/contrail3.1/topics/concept/sriov-with-vrouter-vnc.html and https://www.supermicro.com/support/faqs/faq.cfm?faq=26448)
* enable SR-IOV in your network card firmware
Then, you can enable and configure it with `xe` CLI:
diff --git a/docs/project.md b/docs/project.md
index 301d6e8e..605e7088 100644
--- a/docs/project.md
+++ b/docs/project.md
@@ -18,4 +18,4 @@ There's also a [Twitter account](https://twitter.com/xcpng) and an IRC channel a
Here is a video recorded at FOSDEM19:
-
\ No newline at end of file
+
diff --git a/docs/release-8-1.md b/docs/release-8-1.md
index aa2a358b..17df2f43 100644
--- a/docs/release-8-1.md
+++ b/docs/release-8-1.md
@@ -67,13 +67,13 @@ In short, you are now able to backup and restore a VM, with its context, the who
You can restore it anytime later on another host, and resume it as if nothing happened. From the VM perspective, its uptime will be kept. Combined with Xen Orchestra Continuous Replication, you can also send your VM data and memory every XX hours to another XCP-ng host or pool, and resume it as soon you need it.
-For more information and use cases, you can check [this Devblog](https://xen-orchestra.com/blog/devblog-6-backup-ram/) written by our developer Benjamin.
+For more information and use cases, you can check [this devblog](https://xen-orchestra.com/blog/devblog-6-backup-ram/) written by our developer Benjamin.
### Installer improvements in 8.1
Our installer now offers two new installation options. In legacy boot mode, access them with F2 when offered the choice. In UEFI mode, see the added boot menu entries.
* First new option: boot the installer with a 2G RAM limit instead of the 8G default. This is a workaround for installation issues on hardware with Ryzen CPUs. Though those are Desktop-class CPUs and not supported officially in the HCL, we tried to make it easier to workaround the infamous "installer crashes on Ryzen" issue.
-* Second new option: boot the installer with our [alternate kernel](hardware.md#alternate-kernel) (kernel-alt). That kernel, built and maintained by @r1 for the team, is based on the main kernel, with all upstream kernel.org patches from the LTS 4.19 branch applied.It should be very stable by construction **but it receives less testing**. That option is there for cases when the main kernel and drivers have issues, so that you can quickly test if kernel.org patches have fixed it already. It will also install the alternate kernel in addition to the main kernel as a convenience. **If kernel-alt fixes issues for you, the most important thing to do is to tell us so that we may fix the main kernel!**
+* Second new option: boot the installer with our [alternate kernel](hardware.md#alternate-kernel) (kernel-alt). That kernel, built and maintained by @r1 for the team, is based on the main kernel, with all upstream kernel.org patches from the LTS 4.19 branch applied. It should be very stable by construction **but it receives less testing**. That option is there for cases when the main kernel and drivers have issues, so that you can quickly test if kernel.org patches have fixed it already. It will also install the alternate kernel in addition to the main kernel as a convenience. **If kernel-alt fixes issues for you, the most important thing to do is to tell us so that we may fix the main kernel!**
### New leaf coalesce logic with dynamic limits
@@ -86,8 +86,8 @@ Those interested in the patches, see [this commit](https://github.com/xcp-ng-rpm
* ZFS updated to 0.8.3.
* [Alternate kernel](hardware.md#alternate-kernel) updated to version 4.19.108. Installing it now automatically adds a new boot entry in grub's configuration, to make testing easier. Default entry remains that of the main kernel.
* `netdata-ui` still available from our repositories and also as a feature in Xen Orchestra.
- * r1 contributed a fix to netdata project to bring support for Xen 4.13
- * stormi made netdata cache be RAM-only to workaround an upstream bug that could make the disk cache grow forever
+ * r1 contributed a fix to the Netdata project to bring support for Xen 4.13
+ * stormi made the Netdata cache RAM-only to work around an upstream bug that could make the disk cache grow forever
* `zstd` updated to 1.4.4.
* Experimental support for XFS in local storage repository still available through the `sm-additional-drivers` package.
@@ -99,7 +99,7 @@ However we have updated the [documentation about the guest tools](guests.md), wh
### Other changes
-* Fixed netxtreme drivers (`bnx2x` module) that crashed with some models.
+* Fixed NetXtreme drivers (`bnx2x` module) that crashed with some models.
## Misc
@@ -137,9 +137,9 @@ See "Destroy and re-create a local SR" below.
* Back-up your VMs.
* Move the VMs from that local SR towards another SR, or export them then delete them (Note: an export will not retain the snapshots).
* Check that the SR is now empty.
- * Note the *SR uuid* (visible in Xen Orchestra, or in the output of `xe sr-list`).
+ * Note the *SR UUID* (visible in Xen Orchestra, or in the output of `xe sr-list`).
* Find the associated PBD: `xe pbd-list sr-uuid={SR-UUID}`
- * Note the *PBD uuid*.
+ * Note the *PBD UUID*.
* Note the associated device (e.g. `/dev/sdb`).
* Unplug the PBD: `xe pbd-unplug uuid={PBD-UUID}`
* Destroy the SR: `xe sr-destroy uuid={SR-UUID}`
@@ -176,14 +176,14 @@ We want to thank our community of users who was very helpful in helping us ident
* Missing removable media from VMs
* Possibly others that haven't been reported yet
-### Longer boot times when the ntp server cannot be reached
-If the ntp server can't be reached, the `chrony-wait` service may stall the boot process for several minutes before it gives up:
+### Longer boot times when the NTP server cannot be reached
+If the NTP server can't be reached, the `chrony-wait` service may stall the boot process for several minutes before it gives up:
* up to 10 minutes if you installed with `xcp-ng-8.1.0.iso`, or with yum update before 2020-04-03;
* up to 2 minutes only if you installed with `xcp-ng-8.1.0-2.iso`, with yum update after 2020-04-03, or have updated your host after 2020-04-03.
Reported to Citrix: [XSO-981](https://bugs.xenserver.org/browse/XSO-981)
-We must stress that it is important that all your hosts have accurate date and time and so be able to connect an ntp server.
+We must stress that it is important that all your hosts have an accurate date and time, and so are able to connect to an NTP server.
### `yum update` from 8.0 to 8.1 from within a VNC console
@@ -249,7 +249,7 @@ In general, issues inherited from Citrix Hypervisor and already described in the
See [Citrix Hypervisor's known issues](https://docs.citrix.com/en-us/citrix-hypervisor/whats-new/known-issues.html) (link only valid for the latest release of Citrix Hypervisor). Most apply to XCP-ng.
Some exceptions to those CH 8.1 known issues:
-* The errors due to to `xapi-wait-init-complete.service` not being enabled were already fixed during XCP-ng 8.1's beta phase.
+* The errors due to `xapi-wait-init-complete.service` not being enabled were already fixed during XCP-ng 8.1's beta phase.
* Issues related to Citrix-specific things like licenses or GFS2 do not apply to XCP-ng.
* Though not mentioned yet in their known issues (as of 2020-03-30), an update of CH 8.0 to CH 8.1 using the update ISO fails at enabling the `chronyd` service. In XCP-ng 8.1, updated from 8.0 using `yum`, we fixed that issue before the release.
@@ -263,7 +263,7 @@ Some hardware-related issues are also described in [this page](hardware.md).
Live migrating a VM from an old XenServer can sometimes end with an error, with the following consequences:
* The VM reboots
-* It gets duplicated: the same VM uuid (and usually its VDIs too) is present both on the sender and the receiver host. Remove it from the receiver host.
+* It gets duplicated: the same VM UUID (and usually its VDIs too) is present both on the sender and the receiver host. Remove it from the receiver host.
Would require a hotfix to the old XenServer, but since those versions are not supported anymore, Citrix won't develop one.
diff --git a/docs/release-8-2.md b/docs/release-8-2.md
index 52c6be6f..80876cf0 100644
--- a/docs/release-8-2.md
+++ b/docs/release-8-2.md
@@ -56,9 +56,9 @@ A complete [reimplementation of the UEFI support in XCP-ng](https://github.com/x
This will also allow us to offer Secure Boot support for VMs in the near future.
-### Openflow controller access
+### OpenFlow controller access
-We automated the configuration needed by the user to allow communication with the Openflow controller in Xen Orchestra.
+We automated the configuration needed by the user to allow communication with the OpenFlow controller in Xen Orchestra.
Learn more about the VIFs network traffic control in Xen Orchestra in [this dedicated devblog](https://xen-orchestra.com/blog/vms-vif-network-traffic-control/).
@@ -66,9 +66,9 @@ We also backported this feature to XCP-ng 8.1 as this improvements was already s
### Core scheduling (experimental)
-As you probably know, Hyper Threading defeats all mitigations of CPU vulnerabilities related to side-channel attacks (as Spectre, Meltdown, Fallout...). That's why it was required to disable it as part of the mitigations. The reason is that with Hyper Threading enabled you can't protect a VM's vCPUs from attacks originating from other VMs that have workloads scheduled on the same physical core.
+As you probably know, Hyper Threading defeats all mitigations of CPU vulnerabilities related to side-channel attacks (such as Spectre, Meltdown, Fallout...). That's why it was required to disable it as part of threat mitigation. The reason is that with Hyper Threading enabled you can't protect a VM's vCPUs from attacks originating from other VMs that have workloads scheduled on the same physical core.
-With Core Scheduling, you now have another solution: you can choose to leave Hyper Threading enabled and ask the scheduler to always group vCPUs of a given VM together on the same physical core(s). This will remove the vulnerability to a class of attacks from other VMs, but will leave the VM processes vulnerables to attacks from malevolent processes from within itself. To be usedonly with entirely trusted workloads.
+With Core Scheduling, you now have another solution: you can choose to leave Hyper Threading enabled and ask the scheduler to always group vCPUs of a given VM together on the same physical core(s). This removes the vulnerability to a class of attacks from other VMs, but leaves the VM's processes vulnerable to attacks from malevolent processes within the VM itself. To be used only with entirely trusted workloads.
A new XAPI method was written to let you configure the core scheduler. You will have the option to select the granularity: CPU, core or socket, depending on the performance/security ratio you are looking for.
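+
+For reference, a sketch of the underlying Xen boot option this builds on (`sched-gran` is the Xen 4.13 command-line parameter; it is independent of the XAPI method mentioned above):
+
+```
+# on the Xen command line, pick one of: cpu | core | socket
+sched-gran=core
+```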
@@ -78,7 +78,7 @@ We added three new experimental storage drivers: `zfs`, `glusterfs` and `cephfs`
We also decided to include all SR drivers by default in XCP-ng now, including experimental ones. We do not, however, install all the dependencies on dom0 by default: `xfsprogs`, `gluster-server`, `ceph-common`, `zfs`... They need to be installed using `yum` for you to use the related SR drivers. Check the documentation for each storage driver.
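+
+For example (a sketch, using the package names listed above, before creating the corresponding SR):
+
+```
+yum install zfs
+```
+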
#### `zfs`
-We already provided `zfs` packages in our repositories before, but there was no dedicated SR driver. Users would use the `file` driver, which has a major drawback: if the zpool is not active, that driver may believe that the SR suddenly became empty, and drop all VDI metadata.
+We already provided `zfs` packages in our repositories before, but there was no dedicated SR driver. Users would use the `file` driver, which has a major drawback: if the `zpool` is not active, that driver may believe that the SR suddenly became empty, and drop all VDI metadata.
So we developed a dedicated `zfs` SR driver that checks whether `zfs` is present before drawing such conclusions.
@@ -97,11 +97,11 @@ Use this driver to connect to an existing Ceph storage through the CephFS storag
=> [CephFS SR Documentation](storage.md#cephfs)
### Guest tools ISO
-Not really a change from XCP-ng 8.1, but rather a change from Citrix Hypervisor 8.2: they dropped the guest tools ISO, replaced by downloads from their website. We chose to retain the feature and still provide a guest tools ISO that you can mount to your VMs. Many thanks go to the [XAPI](https://github.com/xapi-project/xen-api/) developers who have accepted to keep the related source code in the XAPI project for us to keep using, rather than deleteing it.
+Not really a change from XCP-ng 8.1, but rather a change from Citrix Hypervisor 8.2: they dropped the guest tools ISO, replaced by downloads from their website. We chose to retain the feature and still provide a guest tools ISO that you can mount to your VMs. Many thanks go to the [XAPI](https://github.com/xapi-project/xen-api/) developers who agreed to keep the related source code in the XAPI project for us to keep using, rather than deleting it.
### Other changes
-* We replaced Citrix's `gpumon` package, not built by us, by a mock build of `gpumon` sources, without the proprietary nvidia developer kit. For you as users, this changes nothing. For us, it means getting rid of a package that was not built by the XCP-ng build system.
+* We replaced Citrix's `gpumon` package, which was not built by us, with a mock build of the `gpumon` sources, without the proprietary NVIDIA developer kit. For you as users, this changes nothing. For us, it means getting rid of a package that was not built by the XCP-ng build system.
* [Alternate kernel](hardware.md#alternate-kernel) updated to version 4.19.142.
* Intel's `e1000e` driver updated to version 3.8.4 in order to support more devices.
* Cisco's `enic` and `fnic` drivers updated to offer better device support and compatibility.
@@ -138,9 +138,9 @@ There exists no easy way to convert an existing storage repository from a given
* Back-up your VMs from the existing ZFS SR.
* Move the VMs from that local SR to another SR, or export them and then delete them.
* Check that the SR is now empty.
- * Note the *SR uuid* (visible in Xen Orchestra, or in the output of `xe sr-list`).
+ * Note the *SR UUID* (visible in Xen Orchestra, or in the output of `xe sr-list`).
* Find the associated PBD: `xe pbd-list sr-uuid={SR-UUID}`
- * Note the *PBD uuid*.
+ * Note the *PBD UUID*.
* Note the associated location (e.g. `/zfs/vol0`).
* Unplug the PBD: `xe pbd-unplug uuid={PBD-UUID}`
* Destroy the SR: `xe sr-destroy uuid={SR-UUID}`
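+
+Put together, the detach-and-destroy sequence could look like this (a sketch with hypothetical UUIDs; substitute the values you noted above):
+
+```
+xe pbd-list sr-uuid=14b1856c-...     # find the PBD attached to the SR
+xe pbd-unplug uuid=7bc8b849-...      # unplug it
+xe sr-destroy uuid=14b1856c-...      # destroy the (now empty) SR
+```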
@@ -214,7 +214,7 @@ Some hardware-related issues are also described in [this page](hardware.md).
Live migrating a VM from an old XenServer can sometimes end with an error, with the following consequences:
* The VM reboots
-* It gets duplicated: the same VM uuid (and usually its VDIs too) is present both on the sender and the receiver host. Remove it from the receiver host.
+* It gets duplicated: the same VM UUID (and usually its VDIs too) is present both on the sender and the receiver host. Remove it from the receiver host.
This would require a hotfix to the old XenServer, but since those versions are not supported anymore, Citrix won't develop one.
diff --git a/docs/roadmap.md b/docs/roadmap.md
index b4a7ddb1..a455ecb2 100644
--- a/docs/roadmap.md
+++ b/docs/roadmap.md
@@ -24,7 +24,7 @@ This is a draft roadmap, things aren't sorted in any way, and there is no ETA fo
* Netdata dedicated RPM (2019)
* Citrix DVSC replacement by XO plugin (2019)
* Full SMAPIv1 SR stack ZFS support, done with ZoL 0.8.1 (2019)
-* Netinstaller checking GPG (2019)
+* Net installer checking GPG (2019)
* Netdata in XCP-ng with [Xen metrics](https://github.com/netdata/netdata/pull/5660) (2019)
* `zstd` support for VM export/import (2019)
* `xfs` local SR support SMAPIv1 (2019)
@@ -32,7 +32,7 @@ This is a draft roadmap, things aren't sorted in any way, and there is no ETA fo
* Terraform support (2019)
* More recent (4.9) kernel usage in dom0 (2018)
* Signed Windows PV tools (2018)
-* Cloudstack compatibility (2018)
+* CloudStack compatibility (2018)
* Upgrade detection and upgrade with updater plugin (2018)
* Extra package repo (2018)
@@ -54,11 +54,11 @@ This is a draft roadmap, things aren't sorted in any way, and there is no ETA fo
* VDI export with compression (including `zstd`)
* SMAPIv3 Ceph support
* Coalesce process improvement (raw speed, rewrite, multicore?) [#127](https://github.com/xcp-ng/xcp/issues/127)
-* Faster Xen Storage Motion (using on the fly compression for disk content? remove stunnel?)
-* SMAPIv3 full ZFS driver (using pyzfs with it)
-* NVMe driver for near bare metal perfs (specification in progress)
-* smarctl alerts (specification in progress)
-* General storage perf improvement
+* Faster Xen Storage Motion (using on the fly compression for disk content? remove `stunnel`?)
+* SMAPIv3 full ZFS driver (using `pyzfs` with it)
+* NVMe driver for near bare metal performance (specification in progress)
+* `smartctl` alerts (specification in progress)
+* General storage performance improvement
* Thin provisioning on block-based SR (architectural review needed)
### Network
@@ -67,7 +67,7 @@ This is a draft roadmap, things aren't sorted in any way, and there is no ETA fo
### API
-* XAPI HTTP lib 1.1 replacement (removing stunnel)
+* XAPI HTTP lib 1.1 replacement (removing `stunnel`)
* ISO upload in SR ISO
* JSON-RPC compression support
@@ -79,4 +79,4 @@ This is a draft roadmap, things aren't sorted in any way, and there is no ETA fo
* Expose repo URL/modification from XAPI (possibility to use XOA with `apt-cacher-ng`)
* Improved provisioning support (Ansible…)
* Automated tests
-* new RPM tracking in CentOS (Anitya)
\ No newline at end of file
+* New RPM tracking in CentOS (Anitya)
diff --git a/docs/storage.md b/docs/storage.md
index 008ce42e..4e8ad41f 100644
--- a/docs/storage.md
+++ b/docs/storage.md
@@ -142,30 +142,30 @@ Via `xe` CLI for a local EXT SR (where `sdaX` is a partition, but it can be the
xe sr-create host-uuid=<host UUID> type=ext content-type=user name-label="Local Ext" device-config:device=/dev/sdaX
```
-In addition to the two main, rock-solid, local storages (EXT and LVM), XCP-ng offers storage drivers for other types of local storage (ZFS, XFS, etc.).
+In addition to the two main, rock-solid, local storage types (EXT and LVM), XCP-ng offers storage drivers for other types of local storage (ZFS, XFS, etc.).
### NFS
Shared, thin-provisioned storage. Efficient, recommended for ease of maintenance and space savings.
-In Xen Orchestra, go in the "New" menu entry, then Storage, and select NFS. Follow instructions from there.
+In Xen Orchestra, go to the "New" menu entry, then Storage, and select NFS. Follow the instructions from there.
:::tip
-Your host will mount the top-level NFS share you provide initially (example: /share/xen), then create folder(s) inside of that, then mount those directly instead (example: /share/xen/515982ab-476e-17b7-0e61-e68fef8d7d31). This means your NFS server or appliance must be set to allow sub-directory mounts, or adding the SR will fail. In FreeNAS, this checkbox is called "All dirs" in the NFS share properties.
+Your host will mount the top-level NFS share you provide initially (example: `/share/xen`), then create folder(s) inside of that, then mount those directly instead (example: `/share/xen/515982ab-476e-17b7-0e61-e68fef8d7d31`). This means your NFS server or appliance must be set to allow sub-directory mounts, or adding the SR will fail. In FreeNAS, this checkbox is called `All dirs` in the NFS share properties.
:::
### File
Local, thin-provisioned. Not recommended.
-The `file` storage driver allows you to use any local directory as storage.
+The `file` storage driver allows you to use any local directory as storage.
Example:
```
xe sr-create host-uuid=<host UUID> type=file content-type=user name-label="Local File SR" device-config:location=/path/to/storage
```
-Avoid using it with mountpoints for remote storage: if for some reason the filesystem is not mounted when the SR is scanned for virtual disks, the `file` driver will believe that the SR is empty and drop all VDI metadata for that storage.
+Avoid using it with mount points for remote storage: if for some reason the filesystem is not mounted when the SR is scanned for virtual disks, the `file` driver will believe that the SR is empty and drop all VDI metadata for that storage.
### XOSANv2
@@ -248,10 +248,10 @@ echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout
There are many options to increase the performance of ZFS SRs:
-* Modify the module parameter `zfs_txg_timeout`: Flush dirty data to disk at least every N seconds (maximum txg duration). By default 5.
+* Modify the module parameter `zfs_txg_timeout`: flush dirty data to disk at least every N seconds (maximum `txg` duration). The default is 5.
* Disable sync to disk: `zfs set sync=disabled tank/zfssr`
* Turn on compression (it's cheap but effective): `zfs set compress=lz4 tank/zfssr`
-* Disable accesstime log: `zfs set atime=off tank/zfssr`
+* Disable access time log: `zfs set atime=off tank/zfssr`
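+
+Applied together, a tuning session could look like this (a sketch; `tank/zfssr` is the example dataset used above, and `compression` is the full name of the property):
+
+```
+# flush dirty data at least every 10 seconds instead of the default 5
+echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout
+# trade the crash-safety of synchronous writes for speed
+zfs set sync=disabled tank/zfssr
+# cheap, usually effective compression
+zfs set compression=lz4 tank/zfssr
+# stop recording access times on every read
+zfs set atime=off tank/zfssr
+```
+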
### XFS
@@ -271,7 +271,7 @@ Via `xe` CLI for a local XFS SR (where `sdaX` is a partition, but it can be the
xe sr-create host-uuid=<host UUID> type=xfs content-type=user name-label="Local XFS" device-config:device=/dev/sdaX
```
-### Glusterfs
+### GlusterFS
Shared, thin-provisioned storage. Available since XCP-ng 8.2.
@@ -311,7 +311,7 @@ Create `/etc/ceph/admin.secret` with your access secret for CephFS.
AQBX21dfVMJtBhAA2qthmLyp7Wxz+T5YgoxzeQ==
```
-Now you can create the SR where `server` is your mon ip.
+Now you can create the SR, where `server` is the IP address of your Ceph monitor (mon).
```
# xe sr-create type=cephfs name-label=ceph device-config:server=172.16.10.10 device-config:serverpath=/xcpsr device-config:options=name=admin,secretfile=/etc/ceph/admin.secret
```
@@ -326,7 +326,7 @@ Now you can create the SR where `server` is your mon ip.
Shared, thin-provisioned storage. Available since XCP-ng 8.2.
-MooseFS is a fault-tolerant, highly available, highly performing, scaling-out, network distributed file system. It is POSIX compliant and acts like any other Unix-like file system.
+MooseFS is a fault-tolerant, highly available, high-performance, scale-out, network distributed file system. It is POSIX compliant and acts like any other Unix-like file system.
The SR driver was contributed directly by the MooseFS Development Team.
:::warning
@@ -388,11 +388,11 @@ Experimental, this needs reliable testing to ensure no block corruption happens
At this moment, this is the only way to connect to Ceph without modifying dom0. It's possible to create multiple Ceph iSCSI gateways as follows:
-Ceph iSCSI gateway node(s) sits outside dom0, probably another Virtual or Physical machine. The packages referred in the URL are to be installed on iSCSI gateway node(s). For XCP-ng dom0, no modifications are needed as it would use LVMoISCSISR (lvmoiscsi) driver to access the iSCSI LUN presented by these gateways.
+Ceph iSCSI gateway node(s) sit outside dom0, probably on another virtual or physical machine. The packages referred to in the URL are to be installed on the iSCSI gateway node(s). For the XCP-ng dom0, no modifications are needed, as it uses the LVMoISCSISR (`lvmoiscsi`) driver to access the iSCSI LUN presented by these gateways.
-For some reason the chap authentication between gwcli and XCP-ng doesn't seem to be working, so it's recommended to disable it (in case you use no authentication a dedicated network for storage should be used to ensure some security).
+For some reason the CHAP authentication between `gwcli` and XCP-ng doesn't seem to be working, so it's recommended to disable it (if you use no authentication, a dedicated storage network should be used to ensure some security).
-IMPORTANT: User had many weird glitches with iSCSI connection via ceph gateway in lab setup (3 gateways and 3 paths on each host) after several days of using it. So please keep in mind that this setup is experimental and unstable. This would have to be retested on recent XCP-ng.
+IMPORTANT: a user reported many weird glitches with the iSCSI connection via the Ceph gateway in a lab setup (3 gateways and 3 paths on each host) after several days of use. So please keep in mind that this setup is experimental and unstable. It would have to be retested on a recent XCP-ng.
### Ceph RBD
@@ -400,7 +400,7 @@ IMPORTANT: User had many weird glitches with iSCSI connection via ceph gateway i
This way of using Ceph requires installing `ceph-common` inside dom0 from outside the official XCP-ng repositories. It is reported to be working by some users, but isn't recommended officially (see [Additional packages](additionalpackages.md)). You will also need to be careful about system updates and upgrades.
:::
-You can use this to connect to an existing Ceph storage over RBD, and configure it as a shared SR for all your hosts in the pool. This driver uses LVM (lvm) as generic driver and expects that the Ceph RBD volume is already connected to one or more hosts.
+You can use this to connect to an existing Ceph storage over RBD, and configure it as a shared SR for all your hosts in the pool. This driver uses LVM (`lvm`) as a generic driver and expects that the Ceph RBD volume is already connected to one or more hosts.
Known issue: this SR is not allowed to be used for HA state metadata due to LVM backend restrictions within XAPI drivers, so if you want to use HA, you will need to create another type of storage for the HA metadata.
@@ -414,7 +414,7 @@ Installation steps
Create `/etc/ceph/keyring` with your access secret for Ceph.
```
-# cat /etc/ceph/keyring
+# cat /etc/ceph/keyring
[client.admin]
key = AQBX21dfVMJtJhAA2qthmLyp7Wxz+T5YgoxzeQ==
```
@@ -422,7 +422,7 @@ key = AQBX21dfVMJtJhAA2qthmLyp7Wxz+T5YgoxzeQ==
Create `/etc/ceph/ceph.conf` matching your setup.
```
-# cat /etc/ceph/ceph.conf
+# cat /etc/ceph/ceph.conf
[global]
mon_host = 10.10.10.10:6789
@@ -444,7 +444,7 @@ types = [ "rbd", 1024 ]
xe sr-create name-label='CEPH' shared=true device-config:device=/dev/rbd/rbd/xen1 type=lvm content-type=user
```
-You will probably want to configure ceph further so that the block device is mapped on reboot.
+You will probably want to configure Ceph further so that the block device is mapped on reboot.
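+
+One possible way is the `rbdmap` service shipped with `ceph-common` (a sketch; `rbd/xen1` is the example volume from above, adjust pool, image and keyring path to your setup):
+
+```
+# cat /etc/ceph/rbdmap
+# {pool}/{image}   id={client},keyring={path}
+rbd/xen1   id=admin,keyring=/etc/ceph/keyring
+```
+
+Then enable it with `systemctl enable rbdmap` so that `/dev/rbd/rbd/xen1` reappears after a reboot.
+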
For the full discussion about Ceph in XCP-ng, see this forum thread:
@@ -551,7 +551,7 @@ When you make XO backup on regular basis, old/unused snapshots will be removed a
This process will take some time to finish (especially if your VM stays up, and worse if you have a lot of writes on its disks).
-**What about creating snapshot (ie call backup jobs) faster than XCP-ng can coalesce?** Well, the chain will continue to grow. And more you have disks to merge, longer it will take.
+**What about creating snapshots (i.e., running backup jobs) faster than XCP-ng can coalesce?** Well, the chain will continue to grow. And the more disks you have to merge, the longer it will take.
You will hit a wall. Two options here:
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index c1988a97..45a310ce 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -34,7 +34,7 @@ If you have subscribed to [Pro support](https://xcp-ng.com/), well, don't hesita
* Try the other boot options
* alternate kernel
* safe mode
-* Try to boot with the `iommu=0` xen parameter.
+* Try to boot with the `iommu=0` Xen parameter.
:::tip
**How to add or remove boot parameters from the command line.**
@@ -43,7 +43,7 @@ If you have subscribed to [Pro support](https://xcp-ng.com/), well, don't hesita
* On BIOS mode, you can enter a menu by typing `menu` and then modify the boot entries with the TAB key. Xen parameters are between `/boot/xen.gz` and the next `---`. Kernel parameters are between `/boot/vmlinuz` and the next `---`.
:::
-If any of the above allows to work around your issue, please let us know ([github issues](https://github.com/xcp-ng/xcp/issues)). We can't fix issues we aren't aware of.
+If any of the above allows you to work around your issue, please let us know ([GitHub issues](https://github.com/xcp-ng/xcp/issues)). We can't fix issues we aren't aware of.
### During installation or upgrade
@@ -81,7 +81,7 @@ Output of various running daemons involved in XCP-ng's tasks. Examples: output o
Contains the output of the XAPI toolstack.
-### Storage related (eg. coalescing snapshots)
+### Storage related (e.g., coalescing snapshots)
`/var/log/SMlog`
@@ -142,7 +142,7 @@ please try to:
* Blacklisting (Source: )
> Usually, when you install a recent distro in PVHVM (using other media) and you get a blank screen, try blacklisting the module by adding the following at the end of your grub command:
>
-> modprobe.blacklist=bochs_drm
+> `modprobe.blacklist=bochs_drm`
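+>
+> A possible way to make the blacklisting permanent (a sketch, assuming the guest uses GRUB):
+>
+> ```
+> # in the guest's /etc/default/grub, append the option to the existing line:
+> GRUB_CMDLINE_LINUX_DEFAULT="quiet splash modprobe.blacklist=bochs_drm"
+> # then regenerate the configuration:
+> sudo update-grub
+> ```
+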
### Initrd is missing after an update
@@ -152,7 +152,7 @@ After an update, XCP-ng won't boot and file `/boot/initrd-4.19.0+1.img` is missi
#### Cause
-Can be a `yum` update process interrupted while rebuilding the `initrd`, such as a manual reboot of the host before the post-install scriplets have finished executing.
+Can be a `yum` update process interrupted while rebuilding the `initrd`, such as a manual reboot of the host before the post-install scriptlets have finished executing.
#### Solution
@@ -206,13 +206,13 @@ echo "xen" > /sys/devices/system/clocksource/clocksource0/current_clocksource
### Async Tasks/Commands Hang or Execute Extremely Slowly
#### Cause
-This symptom can be caused by a variety of issues including RAID degradation, ageing HDDs, slow network storage, and external hard drives/usbs. While extremely unintuitive, even a single slow storage device physically connected (attached or unattached to a VM) can cause your entire host to hang during operation.
+This symptom can be caused by a variety of issues including RAID degradation, ageing HDDs, slow network storage, and external hard drives/USBs. While extremely unintuitive, even a single slow storage device physically connected (attached or unattached to a VM) can cause your entire host to hang during operation.
#### Solution
1. Begin by unplugging any external USB hubs, hard drives, and USBs.
2. Run a command such as starting a VM to see if the issue remains.
3. If the command still hangs, physically check to see if your HDDs/SSDs are all functioning normally and any RAID arrays you are using are in a clean non-degraded state.
-4. If these measures fail, login to your host and run `cat /var/log/kern.log | grep hung`. If this returns `"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.` your lvm layer may be hanging during storage scans. This could be caused by a drive that is starting to fail but has not hard failed yet.
+4. If these measures fail, log in to your host and run `cat /var/log/kern.log | grep hung`. If this returns `"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.`, your LVM layer may be hanging during storage scans. This could be caused by a drive that is starting to fail but has not hard failed yet (see the sketch after this list).
5. If all these measures fail, collect the logs and make your way to the forum for help.
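+
+For step 4, a quick way to inspect a suspect drive (a sketch; `/dev/sdb` is a hypothetical device, `smartctl` comes from the Smartmontools package):
+
+```
+# overall health verdict plus the raw SMART attributes
+smartctl -H -A /dev/sdb
+```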
----
@@ -267,11 +267,11 @@ Resolved with version 8.2.2.200-RC1 and newer.
#### Causes and Solutions
##### Cause a) There can be leftovers from old Citrix XenServer Client Tools.
-1. remove any xen*.* files from `C:\Windows\system32` like
- * xenbus_coinst_7_2_0_51.dll
- * xenvbd_coinst_7_2_0_40.dll
- * xenbus_monitor_8_2_1_5.exe
- * and similiar `xen*_coinst` and `xen*_monitor` files
+1. remove any `xen*.*` files from `C:\Windows\system32` like
+ * `xenbus_coinst_7_2_0_51.dll`
+ * `xenvbd_coinst_7_2_0_40.dll`
+ * `xenbus_monitor_8_2_1_5.exe`
+ * and similar `xen*_coinst` and `xen*_monitor` files
2. remove any leftover `XenServer` devices from device manager, also display hidden `XenServer` devices and remove them!
* To show hidden devices in Device Manager: `View -> Show Hidden Devices`
@@ -337,10 +337,10 @@ Eject the ISO on those VMs.
#### Cause
XCP-ng ISO upgrade is a reinstall that saves only your XAPI database (Settings/VM Metadata).
-But it also creates a full backup of your previous XCP-ng/XenServer installation on a second partition, in most cases it's /dev/sda2.
+But it also creates a full backup of your previous XCP-ng/XenServer installation on a second partition, in most cases it's `/dev/sda2`.
#### Solution
-To access the backup (with all your tools and modifications) just mount the backup partition (mostly /dev/sda2) and copy your data back.
+To access the backup (with all your tools and modifications) just mount the backup partition (mostly `/dev/sda2`) and copy your data back.
***
@@ -349,7 +349,7 @@ To access the backup (with all your tools and modifications) just mount the back
#### Causes and Solutions
* Maybe your hardware got an issue
- * Check caps on your mainboard
+ * Check caps on your motherboard
* Check power supply
* Check cables
* Check drives SMART values with something like `smartctl -A /dev/sda` ([Smartmontools](https://www.smartmontools.org))
@@ -369,11 +369,11 @@ To access the backup (with all your tools and modifications) just mount the back
##### iSCSI reconnect after reboot fails permanently (Unsupported SCSI Opcode)
-The problem is that in a storage-cluster environment every time the node changes or pacemaker start /stop /restart iSCSI resources the "iSCSI SN" for a lun are new generated and differs from that before.
+The problem is that in a storage-cluster environment, every time the node changes or pacemaker starts/stops/restarts iSCSI resources, the "iSCSI SN" for a LUN is newly generated and differs from the previous one.
Xen uses the "iSCSI SN" as an identifier, so you have to ensure that "iSCSI SN" is the same on all cluster nodes.
You can read more about it [here](https://smcleod.net/tech/2015/12/14/iscsi-scsiid-persistence.html).
-* error message xen orchestra
+* error message in Xen Orchestra
```
SR_BACKEND_FAILURE_47(, The SR is not available [opterr=Error reporting error, unknown key Device not appeared yet], )
@@ -391,7 +391,7 @@ kernel: [11219.642772] iSCSI/iqn.2018-12.com.example.server:33init: Unsupported
#### Solution
-The trick is to extend the Lio iSCSI lun configuration in pacemaker with a hard coded iscsi_sn (scsi_sn=d27dab3f-c8bf-4385-8f7e-a4772673939d) and `lio_iblock`, so that every node uses the same.
+The trick is to extend the Lio iSCSI LUN configuration in pacemaker with a hard-coded serial number (`scsi_sn=d27dab3f-c8bf-4385-8f7e-a4772673939d`) and `lio_iblock`, so that every node uses the same values.
* While the pacemaker iSCSI resource is running, you can get the actual iSCSI SN:
`cat /sys/kernel/config/target/core/iblock_0/lun_name/wwn/vpd_unit_serial`
@@ -404,7 +404,7 @@ primitive p_iscsi_lun_1 iSCSILogicalUnit \
scsi_sn=d27dab3f-c8bf-4385-8f7e-a4772673939d lio_iblock=0 \
op start timeout=20 interval=0 \
op stop timeout=20 interval=0 \
- op monitor interval=20 timout=40
+ op monitor interval=20 timeout=40
```
@@ -416,11 +416,11 @@ Hi, this is a small trick I had to use once [(original article)](https://linuxco
* Reboot your XenServer into Grub boot menu.
* Use the arrow keys to locate an appropriate XenServer boot menu entry and press the **e** key to edit boot options.
-* Locate read-only parameter **ro** and replace it with **rw**. Furthermore, locate keyword **splash** and replace it with **init=/bin/bash**.
+* Locate read-only parameter `ro` and replace it with `rw`. Furthermore, locate keyword `splash` and replace it with `init=/bin/bash`.
* **Hit F10** to boot into single-mode
-* Once in single-mode use **passwd** command to reset your XenServer's root password
-* Reboot xenserver by entering the command **exec /usr/sbin/init**
-* If everything went well you should now be able to login with your new XenServer password.
+* Once in single-mode use `passwd` command to reset your XenServer's root password
+* Reboot XenServer by entering the command `exec /usr/sbin/init`
+* If everything went well you should now be able to log in with your new XenServer password.
## XenStore related issues
Enabling `XENSTORED_TRACE` might give useful information.
## Ubuntu 18.04 boot issue
-Some versions of Ubuntu 18.04 might fail to boot, due to a Xorg bug affecting GDM and causing a crash of it (if you use Ubuntu HWE stack).
+Some versions of Ubuntu 18.04 might fail to boot, due to a `Xorg` bug affecting GDM and causing it to crash (if you use the Ubuntu HWE stack).
The solution is to use `vga=normal fb=false` on the Grub kernel boot line to overcome this. You can add those into `/etc/default/grub`, in the `GRUB_CMDLINE_LINUX_DEFAULT` variable. Then, a simple `sudo update-grub` will provide the fix forever.
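+
+For example, the relevant line of the guest's `/etc/default/grub` could end up as (a sketch):
+
+```
+GRUB_CMDLINE_LINUX_DEFAULT="vga=normal fb=false"
+```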
@@ -442,7 +442,7 @@ Alternatively, in a fresh Ubuntu 18.04 install, you can switch to UEFI and you w
## Disappearing NVMe drives
-Some NVMe drives do not handle Automatic Power State Transition (APST) well on certain motherboards or adapters and will disappear from the system when attempting to lower their power state. You may see logs in dmesg that indicate this is happening.
+Some NVMe drives do not handle Automatic Power State Transition (APST) well on certain motherboards or adapters and will disappear from the system when attempting to lower their power state. You may see logs in `dmesg` that indicate this is happening.
```
[65056.815294] nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xffff
@@ -463,7 +463,7 @@ Some NVMe drives do not handle Automatic Power State Transition (APST) well on c
[65061.030575] nvme nvme0: failed to set APST feature (-19)
```
-APST can be disabled by adding `nvme_core.default_ps_max_latency_us=0` to your kernel boot parameters. For example, in xcp-ng 8.1, edit `/boot/grub/grub.cfg` to include a new parameter on the first `module2` line.
+APST can be disabled by adding `nvme_core.default_ps_max_latency_us=0` to your kernel boot parameters. For example, in XCP-ng 8.1, edit `/boot/grub/grub.cfg` to include a new parameter on the first `module2` line.
```
menuentry 'XCP-ng' {
@@ -475,7 +475,7 @@ menuentry 'XCP-ng' {
```
## Missing templates when creating a new VM
-If you attempt to create a new VM, and you notice that you only have a handful of templates available, you can try fixing this from the console. Simply go to the console of your XCP-NG host and enter the following command:
+If you attempt to create a new VM, and you notice that you only have a handful of templates available, you can try fixing this from the console. Simply go to the console of your XCP-ng host and enter the following command:
```
/usr/bin/create-guest-templates
```
diff --git a/docs/troubleshooting.md.orig b/docs/troubleshooting.md.orig
new file mode 100644
index 00000000..95c0be77
--- /dev/null
+++ b/docs/troubleshooting.md.orig
@@ -0,0 +1,583 @@
+# Troubleshooting
+
+If you have a problem on XCP-ng, there are 2 options:
+
+* Community support (mostly on [XCP-ng Forum](https://xcp-ng.org/forum))
+* [Pro support](https://xcp-ng.com)
+
+## The 3-Step-Guide
+Here is our handy **3-Step-Guide**:
+
+1. Check the [logs](troubleshooting.md#log-files). Check your settings. [Read below](troubleshooting.md#common-problems)... if you already did, proceed to Step 2.
+2. Get help at our [Forum](https://xcp-ng.org/forum) or on IRC _#xcp-ng_ on [Freenode](https://webchat.freenode.net) and provide as much information as you can:
+ * ☑️ What did you **exactly** do to expose the bug?
+ * :rocket: XCP-ng Version
+ * :desktop_computer: Hardware
+ * :factory: Infrastructure
+ * :newspaper_roll: Logs
+ * :tv: Screenshots
+ * :stop_sign: Error messages
+3. Share your solution ([forum](https://xcp-ng.org/forum), [wiki](https://github.com/xcp-ng/xcp/wiki)) - others can benefit from your experience.
+ * And we are therefore officially proud of you! :heart:
+
+## Pro Support
+
+If you have subscribed to [Pro support](https://xcp-ng.com/), well, don't hesitate to use it!
+
+## Installation and upgrade
+
+(Where "upgrade" here designates an upgrade using the installation ISO)
+
+### If the installer starts booting up then crashes or hangs
+
+* First of all check the integrity of the ISO image you downloaded, using the provided checksum
+* Try the other boot options
+ * alternate kernel
+ * safe mode
+* Try to boot with the `iommu=0` Xen parameter.
+
+:::tip
+**How to add or remove boot parameters from the command line.**
+
+* On UEFI mode, you can edit the grub entries with `e`. Xen parameters are on lines starting with `multiboot2 /boot/xen.gz` and kernel parameters on lines starting with `module2 /boot/vmlinuz`.
+* On BIOS mode, you can enter a menu by typing `menu` and then modify the boot entries with the TAB key. Xen parameters are between `/boot/xen.gz` and the next `---`. Kernel parameters are between `/boot/vmlinuz` and the next `---`.
+:::
+
+If any of the above allows you to work around your issue, please let us know ([GitHub issues](https://github.com/xcp-ng/xcp/issues)). We can't fix issues we aren't aware of.
+
+### During installation or upgrade
+
+You can reach a shell with ALT+F2 (or ALT+RIGHT) and a logs console with ALT+F3 (or ALT+RIGHT twice).
+
+Full installation logs are written in real time to `/tmp/install-log`. They can be read with `view /tmp/install-log`.
+
+When asking for help about installation errors, providing this file increases your chances of getting precise answers.
+
+The target installation partition is mounted in `/tmp/root`.
+
+### Installation logs
+
+The installer writes in `/var/log/installer/`.
+
+The main log file is `/var/log/installer/install-log`.
+
+### Debugging the installer
+
+You can [build your own installer](develprocess.md#iso-modification).
+
+## Log files
+
+On an XCP-ng host, like in most Linux/UNIX systems, the logs are located in `/var/log`. XCP-ng does not use `journald` for logs, so everything is in `/var/log` directly.
+
+### General log
+
+`/var/log/daemon.log`
+
+Output of various running daemons involved in XCP-ng's tasks. Examples: output of `xenopsd` which handles the communication with the VMs, of executables involved in live migration and storage motion, and more...
+
+### XAPI's log
+
+`/var/log/xensource.log`
+
+Contains the output of the XAPI toolstack.
+
+### Storage related (e.g., coalescing snapshots)
+
+`/var/log/SMlog`
+
+Contains the output of the storage manager.
+
+### Kernel messages
+
+For hardware related issues or system crashes.
+
+`/var/log/kern.log`
+
+All kernel logs since last boot: type `dmesg`.
+
+### Kernel crash logs
+
+In case of a host crash, if it is kernel-related, you should find logs in `/var/crash`
+
+### Produce a status report
+
+To help someone else identify an issue or reproduce a bug, you can generate a full status report containing all log files, details about your configuration and more.
+
+```
+xen-bugtool --yestoall
+```
+
+Then upload the resulting archive somewhere. It may contain sensitive information about your setup, so it may be better to upload it to a private area and give the link only to those you trust to analyze it.
+
+
+### XCP-ng Center
+
+You can display the log files via menu `Help` -> `View XCP-ng Center Log Files`.
+
+The log files are located in `C:\Users\<username>\AppData\Roaming\XCP-ng\XCP-ng Center\logs`.
+
+### Windows VM
+
+#### (PV-)Driver install log
+`C:\Windows\INF\setupapi.dev.log`
+
+## Common Problems
+
+### Blank screen (on a Linux VM)
+
+#### Cause
+
+Your VM booted just fine. You see a blank console because of driver-related issues.
+
+#### Quick Solution
+
+please try to:
+
+* press `ALT` + `right Arrow` to switch to next console
+* press `TAB` to escape boot splash
+* press `ESC`
+
+#### Solution (draft! has to be tested/validated)
+
+* Blacklisting (Source: )
+> Usually, when you install a recent distro in PVHVM (using other media) and you get a blank screen, try blacklisting the module by adding the following at the end of your grub command:
+>
+> `modprobe.blacklist=bochs_drm`
+
+### Initrd is missing after an update
+
+#### Symptom
+
+After an update, XCP-ng won't boot and file `/boot/initrd-4.19.0+1.img` is missing.
+
+#### Cause
+
+Can be a `yum` update process interrupted while rebuilding the `initrd`, such as a manual reboot of the host before the post-install scriptlets have finished executing.
+
+#### Solution
+
+1. Boot on the fallback kernel (last entry in grub menu)
+2. Rebuild the initrd with `dracut -f /boot/initrd-<kernel version>.img <kernel version>`
+3. Reboot on the latest kernel, it works!
+
+:::tip
+Here is an example of `dracut` command on a 8.2 host: `dracut -f /boot/initrd-4.19.0+1.img 4.19.0+1`
+:::
+
+### VM not in expected power state
+
+#### Cause
+The XAPI database thinks that the VM is On / Off. But this is fake news ;-)
+
+#### Solution
+Restart the toolstack from the CLI with the command `xe-toolstack-restart`. This just restarts the management services; all running VMs are untouched.
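+
+If the toolstack restart is not enough and XAPI still reports a wrong state, a last-resort sketch (only when you are certain the VM is really halted, otherwise you risk corruption):
+
+```
+xe vm-reset-powerstate uuid={VM-UUID} force=true
+```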
+
+***
+
+### Rebooting hangs the server
+
+#### Cause
+Unknown, possibly related to the kernel, or BIOS.
+This has been known to occur on a Dell PowerEdge T20.
+
+#### Solution
+
+Try these steps:
+
+1. Turn off C-States and Intel SpeedStep in the BIOS.
+2. Flash any update(s) to the BIOS firmware.
+3. Append `reboot=pci` to kernel boot parameters. This can be done in `/etc/grub.cfg` or `/etc/grub-efi.cfg`.
+
+***
+
+### Server loses time on 14th gen Dell hardware
+
+#### Cause
+
+Unknown, the system keeps listening to the hardware clock instead of trusting NTP.
+
+#### Solution
+
+```
+echo "xen" > /sys/devices/system/clocksource/clocksource0/current_clocksource
+ printf '%s\n\t%s\n%s\n' 'if test -f /sys/devices/system/clocksource/clocksource0/current_clocksource; then' 'echo xen > /sys/devices/system/clocksource/clocksource0/current_clocksource' 'fi' >> /etc/rc.local
+```
+
+### Async Tasks/Commands Hang or Execute Extremely Slowly
+
+#### Cause
+This symptom can be caused by a variety of issues including RAID degradation, ageing HDDs, slow network storage, and external hard drives/USBs. While extremely unintuitive, even a single slow storage device physically connected (attached or unattached to a VM) can cause your entire host to hang during operation.
+
+#### Solution
+1. Begin by unplugging any external USB hubs, hard drives, and USBs.
+2. Run a command such as starting a VM to see if the issue remains.
+3. If the command still hangs, physically check to see if your HDDs/SSDs are all functioning normally and any RAID arrays you are using are in a clean non-degraded state.
+4. If these measures fail, log in to your host and run `cat /var/log/kern.log | grep hung`. If this returns `"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.`, your LVM layer may be hanging during storage scans. This could be caused by a drive that is starting to fail but has not hard failed yet.
+5. If all these measures fail, collect the logs and make your way to the forum for help.
+
+----
+
+## Network Performance
+
+### TCP Offload checksum errors
+
+#### Cause
+
+When running `# tcpdump -i <interface> -v -nn |grep incorrect`, you see "checksum incorrect" error messages.
+Example: `# tcpdump -i eth0 -v -nn |grep incorrect`
+
+#### Solution
+
+**NOTE**: These changes do not guarantee improved network performance, so please use iperf3 to check before and after the change.
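+
+A minimal before/after measurement (a sketch; assumes `iperf3` is installed on both machines):
+
+```
+iperf3 -s                # on the receiving host
+iperf3 -c 192.0.2.10     # on the sending host, pointing at the receiver's IP
+```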
+
+- If you see transmit TCP offload checksum errors like this:
+
+ `.443 > x.x.x.x.19723: Flags [.], cksum 0x848a (incorrect -> 0x1b17), ack 3537, win 1392, length 0`
+
+ then try running
+ `# xe pif-param-set uuid=$PIFUUID other-config:ethtool-tx="off"` where $PIFUUID is the UUID of the physical interface.
+
+- If you see receive TCP offload checksum errors like this:
+
+ `x.x.x.x.445 > .58710: Flags [.], cksum 0xa189 (incorrect -> 0xc352), seq 469937:477177, ack 53892, win 256, options [nop,nop,TS val 170183446 ecr 146516], length 7240WARNING: Packet is continued in later TCP segments`
+
+ `x.x.x.x.445 > .58710: Flags [P.], cksum 0x8e45 (incorrect -> 0xd531), seq 477177:479485, ack 53892, win 256, options [nop,nop,TS val 170183446 ecr 146516], length 2308SMB-over-TCP packet:(raw data or continuation?)`
+
+ then try running
+ `# xe pif-param-set uuid=$PIFUUID other-config:ethtool-gro="off"` where $PIFUUID is the UUID of the physical interface.
+
+The PIF UUID can be found by executing:
+
+`# xe pif-list`
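+
+To narrow it down to a single interface (a sketch):
+
+```
+xe pif-list device=eth0 params=uuid
+```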
+
+
+## Windows Agent / PV-Tools
+
+### I got the error message "Windows Management Agent failed to install" directly after installing it
+
+#### Cause
+There was an issue with the installation of the drivers' certificate, so the drivers did not load silently.
+
+#### Solution
+Resolved with version 8.2.2.200-RC1 and newer.
+
+***
+
+### The Management Agent Installer was executed, but the PV-Drivers are not installed in the Device Manager
+
+#### Causes and Solutions
+##### Cause a) There can be leftovers from old Citrix XenServer Client Tools.
+1. remove any `xen*.*` files from `C:\Windows\system32` like
+ * `xenbus_coinst_7_2_0_51.dll`
+ * `xenvbd_coinst_7_2_0_40.dll`
+ * `xenbus_monitor_8_2_1_5.exe`
+ * and similar `xen*_coinst` and `xen*_monitor` files
+2. remove any leftover `XenServer` devices from device manager, also display hidden `XenServer` devices and remove them!
+ * To show hidden devices in Device Manager: `View -> Show Hidden Devices`
+
+##### Cause b) There was an issue with the installation of the drivers' certificate, so the drivers did not load silently
+
+Resolved with version 8.2.2.200-RC1 and newer.
+
+***
+
+### Upgrading from XenTools 6.x to XCP-ng-Client-Tools-for-Windows-8.2.1-beta1 and getting the error message "Windows Management Agent failed to install" directly after installing it
+
+#### Cause and Solution
+
+There was an issue with the installation of the drivers' certificate, so the drivers did not load silently.
+
+Resolved with version 8.2.2.200-RC1 and newer.
+
+***
+
+### I installed the Client Tools. XCP-ng Center says that I/O is optimized but my network card is not (correctly) installed and the Management Agent is (also) not working.
+
+#### Cause
+
+There was an issue with the installation of the drivers' certificate, so the drivers did not load silently.
+
+#### Possible Solutions
+
+* Resolved with version 8.2.2.200-RC1 and newer.
+
+* Clean your system from `Citrix Client Tools` _AND_ `XCP-ng Client Tools` to create a clean state.
+* Then install the Client Tools from scratch.
+
+[This Guide](guests.md#upgrade-from-citrix-xenserver-client-tools) may help you through the process.
+
+
+## After Upgrade
+
+### The Server stays in Maintenance Mode
+
+#### Causes and Solutions
+* You enabled the maintenance mode and forgot about it.
+ * No big deal, just exit maintenance mode :-)
+* The server is still booting.
+ * Take your time and let it boot up :-) this sometimes takes a while, but typically not longer than a few minutes.
+* A Storage Repository (SR) could not be attached.
+ * Check the corresponding disk(s), network(s) and setting(s). Follow the [3-Step-Guide](#general).
+* There is a serious problem.
+ * Follow the 3-Step-Guide.
+
+***
+
+### Some of my VMs do not start. Error: "This operation cannot be performed because the specified virtual disk could not be found."
+
+#### Cause
+It's mostly related to an inserted ISO that is no longer accessible.
+
+#### Solution
+Eject the ISO on those VMs.
+
+***
+
+### I had some scripts/tools installed and after the upgrade all is gone! Help!
+
+#### Cause
+XCP-ng ISO upgrade is a reinstall that saves only your XAPI database (Settings/VM Metadata).
+But it also creates a full backup of your previous XCP-ng/XenServer installation on a second partition, in most cases it's `/dev/sda2`.
+
+#### Solution
+To access the backup (with all your tools and modifications) just mount the backup partition (mostly `/dev/sda2`) and copy your data back.
+
+***
+
+### After upgrading my XCP-ng host is unstable, network card freezes, kernel errors, etc.
+
+#### Causes and Solutions
+
+* Maybe your hardware got an issue
+ * Check caps on your motherboard
+ * Check power supply
+ * Check cables
+ * Check drives SMART values with something like `smartctl -A /dev/sda` ([Smartmontools](https://www.smartmontools.org))
+ * Check memory with something like [Memtest86+](https://www.memtest.org)
+* Maybe your firmware got an issue
+ * update BIOS
+ * update network card firmware
+ * update RAID controller / HBA firmware
+ * update system firmware
+* Maybe we (or upstream Citrix XenServer) removed/updated something.
+ * Please check our [Hardware Compatibility List (HCL)](hardware.md).
+ * Follow the [3-Step-Guide](#general).
+
+## iSCSI Troubleshooting
+
+### iSCSI in storage-cluster environment (DRBD / Corosync / Pacemaker)
+
+##### iSCSI reconnect after reboot fails permanently (Unsupported SCSI Opcode)
+
+The problem is that in a storage-cluster environment, every time the node changes or pacemaker starts/stops/restarts iSCSI resources, the "iSCSI SN" for a LUN is newly generated and differs from the previous one.
+Xen uses the "iSCSI SN" as an identifier, so you have to ensure that "iSCSI SN" is the same on all cluster nodes.
+You can read more about it [here](https://smcleod.net/tech/2015/12/14/iscsi-scsiid-persistence.html).
+
+* error message in Xen Orchestra
+
+```
+SR_BACKEND_FAILURE_47(, The SR is not available [opterr=Error reporting error, unknown key Device not appeared yet], )
+
+```
+
+* possible and misleading error message on storage servers
+
+```
+kernel: [11219.445255] rx_data returned 0, expecting 48.
+kernel: [11219.446656] iSCSI Login negotiation failed.
+kernel: [11219.642772] iSCSI/iqn.2018-12.com.example.server:33init: Unsupported SCSI Opcode 0xa3, sending CHECK_CONDITION.
+
+```
+
+#### Solution
+
+The trick is to extend the Lio iSCSI LUN configuration in pacemaker with a hard-coded serial number (`scsi_sn=d27dab3f-c8bf-4385-8f7e-a4772673939d`) and `lio_iblock`, so that every node uses the same values.
+
+* While the pacemaker iSCSI resource is running, you can get the actual iSCSI SN:
+`cat /sys/kernel/config/target/core/iblock_0/lun_name/wwn/vpd_unit_serial`
+
+* extend your pacemaker iSCSI configuration with a `scsi_sn` and the matching `lio_iblock`
+
+```
+primitive p_iscsi_lun_1 iSCSILogicalUnit \
+ params target_iqn="iqn.2019-01.com.example.server:example" implementation=lio-t lun=0 path="/dev/drbd0" \
+ scsi_sn=d27dab3f-c8bf-4385-8f7e-a4772673939d lio_iblock=0 \
+ op start timeout=20 interval=0 \
+ op stop timeout=20 interval=0 \
+ op monitor interval=20 timeout=40
+
+```
+
+***
+
+## Reset root password
+
+Hi, this is a small trick I had to use once [(original article)](https://linuxconfig.org/how-to-reset-an-administrative-root-password-on-xenserver-7-linux)
+
+* Reboot your XenServer into Grub boot menu.
+* Use the arrow keys to locate an appropriate XenServer boot menu entry and press the **e** key to edit boot options.
+* Locate read-only parameter `ro` and replace it with `rw`. Furthermore, locate keyword `splash` and replace it with `init=/bin/bash`.
+* **Hit F10** to boot into single-mode
+* Once in single-mode use `passwd` command to reset your XenServer's root password
+* Reboot XenServer by entering the command `exec /usr/sbin/init`
+* If everything went well you should now be able to log in with your new XenServer password.
+
+## XenStore related issues
+
+See the [Xen doc](https://wiki.xenproject.org/wiki/Debugging_Xen#Debugging_Xenstore_Problems).
+
+Enabling `XENSTORED_TRACE` might give useful information.
+
+## Ubuntu 18.04 boot issue
+
+Some versions of Ubuntu 18.04 might fail to boot, due to a `Xorg` bug affecting GDM and causing it to crash (if you use the Ubuntu HWE stack).
+
+The solution is to use `vga=normal fb=false` on the Grub kernel boot line to overcome this. You can add those into `/etc/default/grub`, in the `GRUB_CMDLINE_LINUX_DEFAULT` variable. Then, a simple `sudo update-grub` will provide the fix forever.
+
+You can also remove the `hwe` kernel and use the `generic` one: this way, the problem won't occur at all.
+
+:::tip
+Alternatively, in a fresh Ubuntu 18.04 install, you can switch to UEFI and you won't have this issue.
+:::
+
+## Disappearing NVMe drives
+
+Some NVMe drives do not handle Automatic Power State Transition (APST) well on certain motherboards or adapters and will disappear from the system when attempting to lower their power state. You may see logs in `dmesg` that indicate this is happening.
+
+```
+[65056.815294] nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xffff
+[65060.797874] nvme 0000:04:00.0: Refused to change power state, currently in D3
+[65060.815452] xen: registering gsi 32 triggering 0 polarity 1
+[65060.815473] Already setup the GSI :32
+[65060.937775] nvme nvme0: Removing after probe failure status: -19
+[65060.950019] print_req_error: I/O error, dev nvme1n1, sector 895222784
+[65060.950022] print_req_error: I/O error, dev nvme1n1, sector 438385288
+[65060.950040] print_req_error: I/O error, dev nvme1n1, sector 223301496
+[65060.950072] print_req_error: I/O error, dev nvme1n1, sector 256912800
+[65060.950077] print_req_error: I/O error, dev nvme1n1, sector 189604552
+[65060.950085] print_req_error: I/O error, dev nvme1n1, sector 390062504
+[65060.950087] print_req_error: I/O error, dev nvme1n1, sector 453909496
+[65060.950099] print_req_error: I/O error, dev nvme1n1, sector 453915072
+[65060.950102] print_req_error: I/O error, dev nvme1n1, sector 246194176
+[65060.950107] print_req_error: I/O error, dev nvme1n1, sector 246194288
+[65061.030575] nvme nvme0: failed to set APST feature (-19)
+```
+
+APST can be disabled by adding `nvme_core.default_ps_max_latency_us=0` to your kernel boot parameters. For example, in XCP-ng 8.1, edit `/boot/grub/grub.cfg` to include a new parameter on the first `module2` line.
+
+```
+menuentry 'XCP-ng' {
+ search --label --set root root-jnugiq
+ multiboot2 /boot/xen.gz dom0_mem=7584M,max:7584M watchdog ucode=scan dom0_max_vcpus=1-16 crashkernel=256M,below=4G console=vga vga=mode-0x0311
+ module2 /boot/vmlinuz-4.19-xen root=LABEL=root-jnugiq ro nolvm hpet=disable console=hvc0 console=tty0 quiet vga=785 splash plymouth.ignore-serial-consoles nvme_core.default_ps_max_latency_us=0
+ module2 /boot/initrd-4.19-xen.img
+}
+```
+## Missing templates when creating a new VM
+
+If you attempt to create a new VM, and you notice that you only have a handful of templates available, you can try fixing this from the console. Simply go to the console of your XCP-ng host and enter the following command:
+```
+/usr/bin/create-guest-templates
+```
+
+This should recreate all the templates.
+
+
+## The updater plugin is busy
+
+The message `The updater plugin is busy (current operation: check_update)` means that the plugin crashed while doing an update. The lock was then left active. You can probably see that by doing:
+
+```
+cat /var/lib/xcp-ng-xapi-plugins/updater.py.lock
+```
+
+It should be empty, but if you hit the bug, it contains `check_update`.
+
+Remove `/var/lib/xcp-ng-xapi-plugins/updater.py.lock` and that should fix it.
+
+## Disk failure/replacement with software RAID
+
+If XCP-ng has been installed with a *software RAID 1 full disk mirror* method, a disk failure can be fixed with a disk replacement. Here's how:
+
+#### If the host can't boot anymore
+
+Boot to the XCP-ng installer ISO in shell mode.
+
+#### Once booted into your XCP-ng install or the ISO
+
+Enter the following commands:
+```
+cat /proc/mdstat
+```
+This will return output similar to:
+```
+Personalities : [raid1]
+md127 : active raid1 nvme0n2[3] nvme0n1[2]
+ 62914432 blocks super 1.0 [2/1] [U_]
+
+unused devices: <none>
+```
+`[U_]` indicates that the RAID is damaged. Now we will repair it.
+
+#### Remove damaged disk
+
+Let's assume we want to remove `nvme0n1`:
+```
+mdadm --manage /dev/md127 --fail /dev/nvme0n1
+```
+Now `mdstat` shows `nvme0n1` as *failed*:
+```
+Personalities : [raid1]
+md127 : active raid1 nvme0n2[3] nvme0n1[2](F)
+ 62914432 blocks super 1.0 [2/1] [U_]
+
+unused devices: <none>
+```
+Now we can remove the disk from the RAID:
+```
+mdadm --manage /dev/md127 --remove /dev/nvme0n1
+```
+The disk is removed from `mdstat`:
+```
+Personalities : [raid1]
+md127 : active raid1 nvme0n2[3]
+ 62914432 blocks super 1.0 [2/1] [U_]
+
+unused devices: <none>
+```
+The disk is successfully removed.
+
+#### Add a new/replacement disk to the RAID
+
+Now we can add a replacement disk. Shutdown your host, install the disk on your system, then boot it to your XCP-ng install or the installer ISO once more. Now add the disk to the RAID:
+```
+mdadm --manage /dev/md127 --add /dev/nvme0n1
+```
+`mdstat` shows that disk `nvme0n1` is in the RAID and is synchronizing with `nvme0n2`:
+```
+Personalities : [raid1]
+md127 : active raid1 nvme0n2[3] nvme0n1[4]
+ 62914432 blocks super 1.0 [2/1] [U_]
+ [=>...................] recovery = 9.9% (2423168/24418688) finish=2.8min speed=127535K/sec
+
+unused devices: <none>
+```
+Wait for completion; the rebuild is complete once `mdstat` looks like:
+```
+md127 : active raid1 nvme0n2[3] nvme0n1[4]
+ 62914432 blocks super 1.0 [2/2] [UU]
+
+unused devices: <none>
+```
+`[UU]` is back, the RAID is repaired and you should now reboot the host.
+
+#### If the system is still unbootable
+
+This might happen for various reasons. If you haven't backed up the contents of the disks yet, you really should now, in case data was corrupted on more than one disk. Clonezilla is a good open source live ISO to do this with if you don't already have a favorite tool. It can back up to another disk, or to a network share.
+
+It has been reported to us that some non-enterprise motherboards may have limited UEFI firmware that does not cope well with disk changes.
+
+In most cases, you should be able to restore the bootloader by upgrading your host to the same version it is already running (e.g. upgrade 8.2 to 8.2 using the 8.2 install ISO). Check [the upgrade docs](upgrade.md) for the usual instructions and warnings. Another, custom solution is to run the appropriate `efibootmgr` commands from the installer's shell. Refer to [its documentation](https://linux.die.net/man/8/efibootmgr).
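+
+A hypothetical `efibootmgr` invocation (a sketch only; the disk, partition number and loader path must be verified on your own system, e.g. with `efibootmgr -v`):
+
+```
+# recreate a boot entry pointing at the GRUB loader on the EFI system partition
+efibootmgr -c -d /dev/sda -p 1 -L "XCP-ng" -l '\EFI\xenserver\grubx64.efi'
+```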
diff --git a/docs/upgrade.md b/docs/upgrade.md
index 6e96ecb1..e9ad0a88 100644
--- a/docs/upgrade.md
+++ b/docs/upgrade.md
@@ -59,7 +59,7 @@ This is an alternate method if you can't boot from the installation ISO.
If you do not have access to your server or remote KVM in order to upgrade using the interactive ISO installer, you can initiate an automatic reboot and upgrade process using the following procedure:
-* Unpack/extract the XCP-ng ISO to a folder on an HTTP server. Make sure not to miss the hidden .treeinfo file (common mistake if you `cp` the files with `*`).
+* Unpack/extract the XCP-ng ISO to a folder on an HTTP server. Make sure not to miss the hidden `.treeinfo` file (common mistake if you `cp` the files with `*`).
* Get the UUID of your host by running the below command:
```
xe host-list
@@ -100,7 +100,7 @@ Once upgraded, **keep your system regularly updated** (see [Updates Howto](updat
#### Access to the repository (obviously)
-Your dom0 system must either have access to updates.xcp-ng.org, or to a local mirror. In the second case, make sure to update the `baseurl` values in `/etc/yum.repos.d/xcp-ng.repo` to make them point at the local mirror, and keep the mirror up to date, of course.
+Your dom0 system must either have access to `updates.xcp-ng.org`, or to a local mirror. In the second case, make sure to update the `baseurl` values in `/etc/yum.repos.d/xcp-ng.repo` to make them point at the local mirror, and keep the mirror up to date, of course.
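+
+A hypothetical one-liner for switching the repo file to a local mirror (a sketch; `mirror.example.com` and the path are assumptions, check the actual `baseurl` lines first):
+
+```
+sed -i 's#https://updates.xcp-ng.org#http://mirror.example.com/xcp-ng#g' /etc/yum.repos.d/xcp-ng.repo
+```
+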
#### Be cautious with third party repositories and packages
@@ -123,7 +123,7 @@ Check them carefully.
#### Upgrade instructions
-If for some reason you want to upgrade to the unsupported XCP-ng 7.6 from an earlier release, see [Yum Upgrade towards XCP ng 7.6](https://github.com/xcp-ng/xcp/wiki/Yum-Upgrade-towards-XCP-ng-7.6).
+If for some reason you want to upgrade to the unsupported XCP-ng 7.6 from an earlier release, see [Yum Upgrade towards XCP-ng 7.6](https://github.com/xcp-ng/xcp/wiki/Yum-Upgrade-towards-XCP-ng-7.6).
:warning: **Proceed one host at a time. Do not `yum update` all hosts at once to "save time".** :warning:
@@ -180,7 +180,7 @@ For a given configuration file, only one of those can be created, depending on w
So, after an upgrade using `yum`, you need to look for `.rpmnew` and `.rpmsave` files, update the related configuration files accordingly if needed, and delete those `.rpmnew` and `.rpmsave` files in order to keep things clean for when you will need to do this again after the next upgrade.
If you haven't modified configuration files that `rpm` wants to update, there will be nothing to do.
-/!\ There is an exception: always ignore the `/etc/cron.d/logrotate.cron.rpmsave` file. Citrix team named that file this way so that it is ignored by cron. It is used only with legacy partitioning, where no `/var/log` partition exists, and triggers a very aggressive log rotation. Leave it alone.
+/!\ There is an exception: always ignore the `/etc/cron.d/logrotate.cron.rpmsave` file. The Citrix team named that file this way so that it is ignored by `cron`. It is used only with legacy partitioning, where no `/var/log` partition exists, and triggers a very aggressive log rotation. Leave it alone.
```
# Find conflicting configuration files, excluding logrotate.cron.rpmsave
@@ -214,7 +214,7 @@ This article describes how to proceed in order to convert your Citrix XenServer
### Migration process
-XCP-NG installation follows roughly the same workflow as a XenServer installation. Therefore, the migration procedure will be very similar to an upgrade procedure in XenServer.
+XCP-ng installation follows roughly the same workflow as a XenServer installation. Therefore, the migration procedure will be very similar to an upgrade procedure in XenServer.
* Download the XCP-ng ISO [from this XCP-ng website](https://xcp-ng.org/#easy-to-install)
* Follow the [website instructions](https://xcp-ng.org/#easy-to-install) to put the ISO onto a USB key or a CD
@@ -278,11 +278,11 @@ See [the Troubleshooting page](troubleshooting.md#installation-and-upgrade).
If you do not have access to your server or remote KVM in order to upgrade using the interactive ISO installer, you can initiate an automatic reboot and upgrade process using the following procedure:
-Unpack/extract the XCP-NG ISO to a folder on a webserver. Then get the UUID of your host by running the below command:
+Unpack/extract the XCP-ng ISO to a folder on a web server. Then get the UUID of your host by running the below command:
`xe host-list`
-Using that host UUID, as well as the URL to the folder hosting the unpacked XCP-NG ISO, run the following command to test access:
+Using that host UUID, as well as the URL to the folder hosting the unpacked XCP-ng ISO, run the following command to test access:
`xe host-call-plugin plugin=prepare_host_upgrade.py host-uuid=750d9176-6468-4a08-8647-77a64c09093e fn=testUrl args:url=http://<server>/xcp-ng/unpackedexample/`
@@ -292,9 +292,9 @@ Now tell the host to automatically boot to the ISO and upgrade itself on next re
`xe host-call-plugin plugin=prepare_host_upgrade.py host-uuid=750d9176-6468-4a08-8647-77a64c09093e fn=main args:url=http://<server>/xcp-ng/unpackedexample/`
-The output should also be true. It has created a temporary entry in the grub bootloader which will automatically load the upgrade ISO on the next boot. It then automatically runs the XCP-NG upgrade with no user intervention required. It will also backup your existing XenServer dom0 install to the secondary backup partition, just like the normal upgrade.
+The output should also be `true`. It has created a temporary entry in the grub bootloader which will automatically load the upgrade ISO on the next boot. It then automatically runs the XCP-ng upgrade with no user intervention required. It will also back up your existing XenServer dom0 install to the secondary backup partition, just like the normal upgrade.
-To start the process, just tell the host to reboot. It is best to watch the progress by using KVM if it's available, but if not, it should proceed fine and boot into upgraded XCP-NG in 10 to 20 minutes.
+To start the process, just tell the host to reboot. It is best to watch the progress by using KVM if it's available, but if not, it should proceed fine and boot into upgraded XCP-ng in 10 to 20 minutes.
## Migrate VMs from older XenServer/XCP-ng