OPCT-270: Added arm64 instructions and multi-arch builds (#93)
To add ARM64 support to the OPCT validation environment, the following components must be
built/mirrored as multi-arch (amd64 and arm64):
- sonobuoy aggregator server/worker image
- plugin openshift-tests image (and dependencies)
  - tools (base image) and dependencies (oc, jq, etc)
  - must-gather-monitoring image

The builds for those components are implemented in the PR
redhat-openshift-ecosystem/provider-certification-plugins#51

This PR bumps the Plugins image to use multi-arch payloads starting with the
next release (v0.5.0-alpha.3). Documentation is also provided with
steps to create new builds and to quickly test on an ARM cluster (OCP on
AWS with IPI).

Note: I am not updating `mkdocs.html` to prevent indexing new docs in
the current documentation release. cc
[OPCT-251](https://issues.redhat.com/browse/OPCT-251)

https://issues.redhat.com/browse/OPCT-270

---------

Co-authored-by: Richard Vanderpool <49568690+rvanderp3@users.noreply.github.com>
mtulio and rvanderp3 authored Feb 19, 2025
1 parent 946cb99 commit 6ab5700
Showing 3 changed files with 345 additions and 0 deletions.
139 changes: 139 additions & 0 deletions docs/dev/arch-support.md
@@ -0,0 +1,139 @@
# OPCT Devel Guide - Architecture support

OPCT projects are split into different components. Each
component has its own build process and dependencies.

OPCT is divided into two main components:
- OPCT CLI: a client-side application used to provision the test environment, collect results, and generate reports.
- Test environment: a group of tools running server-side on the target OpenShift cluster,
  limited to the architectures supported by OPCT.

The next sections describe how to build the components. We advise following this guide
only when adding support for a new architecture, or if you are curious to learn how
the components are packaged.

## Client

The client (OPCT CLI) is built for the following platforms:

- linux/amd64
- linux/arm64
- darwin/amd64
- darwin/arm64
- windows/amd64

### Adding support to a new platform

The OPCT command-line interface is written in Go. To add support for a new OS/architecture
you need to:

1) Check whether the Go toolchain used by [the project][go-mod] can build for the target architecture:
```bash
curl -s https://raw.githubusercontent.com/redhat-openshift-ecosystem/opct/main/go.mod | grep ^go
go version
go tool dist list
```
2) Modify the [Makefile][makefile] to build for the new architecture
3) Update the [CI release pipeline][ci-pipeline-release] to upload the CLI when a new release is created
4) Test it:

```sh
make build-darwin-arm64
```
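The Makefile targets above are thin wrappers around Go's cross-compilation support. A minimal sketch of what such a target expands to; the output naming `build/opct-<os>-<arch>` is hypothetical, so check the project's Makefile for the real target names and ldflags:

```bash
# Cross-compile the CLI for a list of OS/arch pairs (sketch only).
PLATFORMS="linux/amd64 linux/arm64 darwin/arm64"
for p in $PLATFORMS; do
  GOOS=${p%/*}    # part before the slash, e.g. "linux"
  GOARCH=${p#*/}  # part after the slash, e.g. "amd64"
  echo "building build/opct-${GOOS}-${GOARCH}"
  # Only attempt the build when a Go toolchain is available:
  if command -v go >/dev/null 2>&1; then
    GOOS=$GOOS GOARCH=$GOARCH go build -o "build/opct-${GOOS}-${GOARCH}" . \
      || echo "build failed (run this from the opct repository root)"
  fi
done
```

Because `go build` honors `GOOS`/`GOARCH` per invocation, no per-platform toolchain is needed beyond what `go tool dist list` reports.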

[go-mod]: https://github.com/redhat-openshift-ecosystem/opct/blob/main/go.mod
[makefile]: https://github.com/redhat-openshift-ecosystem/opct/blob/main/Makefile
[ci-pipeline-release]: https://github.com/redhat-openshift-ecosystem/opct/blob/main/.github/workflows/go.yaml

## Server-side components

The following components are used by OPCT on the server side:

- Sonobuoy Aggregator Server
- Sonobuoy Worker
- Plugin openshift-tests
- Must-Gather
- Must-Gather Monitoring
- etcdfio
- camgi
- openshift client
- sonobuoy client
- jq

### Server-side platforms

The components are built for the following platforms:

| Component | linux/amd64 | linux/arm64 | linux/s390x | linux/ppc64le |
| -- | -- | -- | -- | -- |
| Sonobuoy Aggregator/Worker | yes | yes | yes | yes |
| Plugin openshift-tests | yes | yes | no | no |

### Supported platforms

OPCT can provide full feature coverage on the following platforms:

- linux/amd64
- linux/arm64

The regular execution can be done by running:

```bash
opct run --wait
```

### Limited platforms

On the remaining platforms, you should be able to run the Kubernetes e2e tests
provided by Sonobuoy:

- linux/ppc64le
- linux/s390x

The following command allows you to run such tests:

```bash
opct sonobuoy run --sonobuoy-image quay.io/opct/sonobuoy:v0.5.0-alpha.3
```

### Adding support to a new platform

The first requirement to support a new server-side platform is to ensure that OpenShift/OKD
can provide payloads for that platform.

Once OpenShift is supported, each OPCT server-side component must be built to provide
full support.

The following components are required:

- Sonobuoy Aggregator Server/Worker
- Plugin openshift-tests
- Must-gather
- Must-gather monitoring
- openshift client
- sonobuoy client
- jq

The following components are optional:

- Plugin openshift-tests:
    - camgi
    - etcdfio

#### Building Sonobuoy Aggregator and Worker image

See the release steps for more details on how to mirror [Sonobuoy images](./release.md).

#### Building Plugin openshift-tests

See the [build script][build-sh], and the [Containerfile][containerfile-otests].

See [PR #51](https://github.com/redhat-openshift-ecosystem/provider-certification-plugins/pull/51), which introduced the multi-arch builds, for reference.

#### Building Must-Gather Monitoring

See the [build script][build-sh], and the [Containerfile][containerfile-mgm].

[build-sh]: https://github.com/redhat-openshift-ecosystem/provider-certification-plugins/blob/main/build.sh
[containerfile-otests]: https://github.com/redhat-openshift-ecosystem/provider-certification-plugins/blob/main/openshift-tests-provider-cert/Containerfile
[containerfile-mgm]: https://github.com/redhat-openshift-ecosystem/provider-certification-plugins/blob/main/must-gather-monitoring/Containerfile
76 changes: 76 additions & 0 deletions docs/dev/release.md
@@ -0,0 +1,76 @@
# Dev Guide - Release components

This guide describes how to release a new version of OPCT, considering all the project dependencies.

## Creating container images for components

### Sonobuoy

Steps to check whether Sonobuoy provides images for the target platform in the version used by OPCT:

1) Check the Sonobuoy version used by OPCT
```bash
$ go list -m github.com/vmware-tanzu/sonobuoy
github.com/vmware-tanzu/sonobuoy v0.57.1
```

2) Check the Sonobuoy images built for the version required by OPCT
```bash
$ skopeo list-tags docker://docker.io/sonobuoy/sonobuoy | jq .Tags | grep -i v0.57.1
"amd64-v0.57.1",
"arm64-v0.57.1",
"ppc64le-v0.57.1",
"s390x-v0.57.1",
"v0.57.1",
"win-amd64-1809-v0.57.1",
"win-amd64-1903-v0.57.1",
"win-amd64-1909-v0.57.1",
"win-amd64-2004-v0.57.1",
"win-amd64-20H2-v0.57.1",
```

3) [Bump the desired Sonobuoy version](https://github.com/redhat-openshift-ecosystem/opct/blob/main/hack/image-mirror-sonobuoy/mirror.sh#L9C27-L9C43)
in the script `mirror.sh` to mirror the Sonobuoy image to the OPCT image repository.

4) Run the mirror script to mirror and push images to the OPCT registry:
> (you must have push permissions to quay.io/opct; otherwise, adapt the script to push to your own repository)
```bash
make image-mirror-sonobuoy
```
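After mirroring, it is worth confirming that the pushed tag is really a multi-arch manifest list. A quick check, assuming `skopeo` and `jq` are available; adjust the repository/tag to whatever you mirrored:

```bash
# Print the platforms contained in the mirrored manifest list.
if command -v skopeo >/dev/null 2>&1 && command -v jq >/dev/null 2>&1; then
  skopeo inspect --raw docker://quay.io/opct/sonobuoy:v0.57.1 \
    | jq -r '.manifests[].platform | "\(.os)/\(.architecture)"' \
    || echo "inspect failed (network or permissions?)"
fi
```

A multi-arch tag should list at least `linux/amd64` and `linux/arm64`, matching the architectures shown in the tag listing above.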

### Plugins images

#### Development builds

Create images to test locally:

```bash
make images
```

To build images individually, for example for a single architecture:

```sh
PLATFORMS=linux/amd64 make build-plugin-tests
```

Take a look into individual targets for each program in the [Makefile](https://github.com/redhat-openshift-ecosystem/provider-certification-plugins/blob/main/Makefile).

The script responsible for building images locally is [`build.sh`](https://github.com/redhat-openshift-ecosystem/provider-certification-plugins/blob/main/build.sh).
Start there if you want to explore the build pipeline further.

#### Production builds

Images for production are automatically created by the build
pipeline [ci.yaml](https://github.com/redhat-openshift-ecosystem/provider-certification-plugins/blob/main/.github/workflows/ci.yaml)
when:

- there is a push to the `main` branch: the `latest` image is published to the image registry
- a tag matching `release-v*` is created: the same tag is published to the image registry

The production images are built by default for `linux/amd64` and `linux/arm64` in the same manifest
for the following components:

- Plugin [openshift-tests](https://quay.io/repository/opct/plugin-openshift-tests?tab=tags)
- Plugin [artifacts-collector](https://quay.io/repository/opct/plugin-artifacts-collector?tab=tags)
- Plugin [Must-gather-monitoring](https://quay.io/repository/opct/must-gather-monitoring?tab=tags)
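The tag-selection rules above, plus a multi-arch build into a single manifest, can be sketched locally with `podman` (the `dev-` fallback tag is hypothetical, and pushing requires access to the registry):

```bash
# Derive the tag the CI pipeline would publish (see the rules above):
REF="${GITHUB_REF_NAME:-main}"   # branch or tag name in GitHub Actions
case "$REF" in
  release-v*) TAG="$REF" ;;      # release tag -> same tag
  main)       TAG="latest" ;;    # push to main -> latest
  *)          TAG="dev-$REF" ;;  # hypothetical fallback for local builds
esac
echo "would publish tag: $TAG"

# Build both architectures into one manifest list, then push it:
if command -v podman >/dev/null 2>&1; then
  podman build --platform linux/amd64,linux/arm64 \
    --manifest "quay.io/opct/plugin-openshift-tests:${TAG}" . \
    && podman manifest push "quay.io/opct/plugin-openshift-tests:${TAG}" \
    || echo "podman build failed (run from the plugins repo root)"
fi
```

Building into `--manifest` is what makes one tag serve both architectures: the registry stores a manifest list and clients pull the image matching their platform.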
130 changes: 130 additions & 0 deletions docs/guides/features/validating-arm-installations.md
@@ -0,0 +1,130 @@
# Validating OpenShift installation with ARM

This guide describes how to install an OpenShift cluster on AWS using instances with ARM64 architecture and run the validation tests with OPCT.

This is a reference guide, not official documentation for installing OpenShift on AWS on
ARM64 architecture.

Please refer to [OpenShift documentation][openshift-docs] for more information.

## Install a cluster

- Download the installer binary:

```bash
wget -O openshift-install.tar.gz https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/4.14.0-rc.6/openshift-install-linux.tar.gz
tar xfz openshift-install.tar.gz
```

- Export the variables used to create a cluster:

```bash
export INSTALL_DIR=install-dir1
export BASE_DOMAIN=devcluster.openshift.com
export CLUSTER_NAME=arm-opct01
export CLUSTER_REGION=us-east-1
export SSH_PUB_KEY_FILE=$HOME/.ssh/id_rsa.pub
export PULL_SECRET_FILE=$HOME/.openshift/pull-secret-latest.json

mkdir -p $INSTALL_DIR
```
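Before rendering the install-config, it helps to fail fast when the referenced files are missing. A small pre-flight check (a sketch, not part of the official installer flow) using the variables exported above:

```bash
# Fail fast when a referenced file is missing or unreadable:
preflight() {
  for f in "$@"; do
    if [ ! -r "$f" ]; then
      echo "ERROR: required file not readable: $f" >&2
      return 1
    fi
  done
  echo "pre-flight check passed"
}

# Run the check only when the variables from the previous step are set:
if [ -n "${SSH_PUB_KEY_FILE:-}" ]; then
  preflight "$SSH_PUB_KEY_FILE" "$PULL_SECRET_FILE" || exit 1
fi
```

Catching a bad `PULL_SECRET_FILE` path here is cheaper than discovering an empty `pullSecret` after the installer has already started provisioning.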

- Pick the release image in the [release controller][release-controller] (valid only for experimental environments):

```bash
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE="quay.io/openshift-release-dev/ocp-release:4.14.0-rc.6-aarch64"
```
- Create installer configuration:
```bash
cat <<EOF > ${INSTALL_DIR}/install-config.yaml
apiVersion: v1
publish: External
baseDomain: ${BASE_DOMAIN}
metadata:
  name: "${CLUSTER_NAME}"
controlPlane:
  name: master
  architecture: arm64
  replicas: 3
compute:
- name: worker
  architecture: arm64
  replicas: 3
platform:
  aws:
    region: ${CLUSTER_REGION}
pullSecret: '$(cat ${PULL_SECRET_FILE} | awk -v ORS= -v OFS= '{$1=$1}1')'
sshKey: |
  $(cat ${SSH_PUB_KEY_FILE})
EOF
```
- Install the cluster:
```bash
./openshift-install create cluster --dir ${INSTALL_DIR} --log-level debug
```
- Export kubeconfig:
```bash
export KUBECONFIG=${INSTALL_DIR}/auth/kubeconfig
```
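Before running the conformance workflow, it is worth confirming that every node actually reports the arm64 architecture. A quick check, assuming `oc` is on the PATH and `KUBECONFIG` points at the new cluster:

```bash
# Print each node's reported CPU architecture:
if command -v oc >/dev/null 2>&1; then
  ARCHS=$(oc get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}')
  echo "node architectures: $ARCHS"
  case " $ARCHS " in
    *" amd64 "*) echo "WARNING: non-arm64 node detected" ;;
    *)           echo "no amd64 nodes found" ;;
  esac
fi
```

The same JSONPath query works with `kubectl`; it reads `status.nodeInfo.architecture` from every node in one call.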
## Run Conformance workflow and explore the results

This section describes how to run OPCT in an OpenShift cluster.

### Prerequisites

- OpenShift cluster installed on ARM
- OPCT command-line interface installed
- KUBECONFIG environment variable exported
- An OpenShift user with cluster-admin privileges

### Steps
- Set up the test node:
```bash
opct adm setup-node --yes
```
- Run OPCT and retrieve results when finished:
```bash
./opct run --wait
```
- Collect the results:
```bash
./opct retrieve
```
- Explore the results:
```bash
./opct report $(date +%Y%m)*.tar.gz --save-to ./report --loglevel debug
```
## Destroy

- Destroy the conformance test environment:
```bash
./opct destroy
```
- Destroy a cluster:
```bash
./openshift-install destroy cluster --dir ${INSTALL_DIR}
```
[openshift-docs]: https://docs.openshift.com/container-platform/latest
[release-controller]: https://arm64.ocp.releases.ci.openshift.org/
