diff --git a/OracleDatabase/RAC/OracleRACStorageServer/README.md b/OracleDatabase/RAC/OracleRACStorageServer/README.md
index 77c2cde84f..9b8ec8d68a 100644
--- a/OracleDatabase/RAC/OracleRACStorageServer/README.md
+++ b/OracleDatabase/RAC/OracleRACStorageServer/README.md
@@ -1,189 +1,214 @@
 # Oracle ASM on NFS Server for RAC testing
-Sample Docker and Podman build files to facilitate installation, configuration, and environment setup for DevOps users.
+Example Podman build files to facilitate installation, configuration, and environment setup of an NFS server for Oracle RAC testing, for DevOps users.
-**IMPORTANT:** This image can be used to setup ASM on NFS for RAC. You can skip if you have physical block devices or NAS server for Oracle RAC and Grid. You need to make sure that NFS server container must be up and running for RAC functioning. This image is for only testing purpose.
+**IMPORTANT:** This image can be used to set up ASM on NFS for Oracle RAC. You can skip this procedure if you have physical block devices or a NAS server for Oracle RAC and Oracle Grid Infrastructure. You must ensure that the NFS server container is up and running for Oracle RAC functioning.
-Refer below instructions for setup of NFS Container for RAC -
+Refer to the following instructions for setup of NFS Container for Oracle RAC:
-- [Oracle ASM on NFS Server for RAC testing](#oracle-asm-on-nfs-server-for-rac-testing)
-- [How to build NFS Storage Container Image](#how-to-build-nfs-storage-container-image)
-  - [How to build NFS Storage Container Image on Docker Host](#how-to-build-nfs-storage-container-image-on-docker-host)
-  - [How to build NFS Storage Container Image on Podman Host](#how-to-build-nfs-storage-container-image-on-podman-host)
+- [Oracle ASM on NFS Server for Oracle RAC testing](#oracle-asm-on-nfs-server-for-rac-testing)
+- [How to build NFS Storage Container Image on Container host](#how-to-build-nfs-storage-container-image-on-container-host)
 - [Create Bridge Network](#create-bridge-network)
-- [NFS Server installation on Host](#nfs-server-installation-on-host)
-- [Running RACStorageServer container](#running-racstorageserver-container)
-  - [RAC Storage container for Docker Host Machine](#rac-storage-container-for-docker-host-machine)
-  - [RAC Storage Container for Podman Host Machine](#rac-storage-container-for-podman-host-machine)
+- [NFS Server installation on Podman Host](#nfs-server-installation-on-podman-host)
+- [SELinux Configuration on Podman Host](#selinux-configuration-on-podman-host)
+- [Oracle RAC Storage Container for Podman Host](#oracle-rac-storage-container-for-podman-host)
+- [Oracle RAC Storage container for Docker Host](#oracle-rac-storage-container-for-docker-host)
 - [Create NFS Volume](#create-nfs-volume)
 - [Copyright](#copyright)
-## How to build NFS Storage Container Image
+## How to build NFS Storage Container Image on Container host
+To create the files for Oracle RAC storage, ensure that you have at least 60 GB space available for the container.
-### How to build NFS Storage Container Image on Docker Host
-You need to make sure that you have atleast 60GB space available for container to create the files for RAC storage.
+**IMPORTANT:** If you are behind a proxy, you must set the `http_proxy` and `https_proxy` environment variables to values based on your environment before building the image.
-**IMPORTANT:** If you are behind the proxy, you need to set http_proxy env variable based on your enviornment before building the image.
Please ensure that you have the `podman-docker` package installed on your OL8 Podman host to run the command using the docker utility. -```bash -dnf install podman-docker -y -``` +To assist in building the images, you can use the [buildContainerImage.sh](containerfiles/buildContainerImage.sh) script. See below for instructions and usage. -To assist in building the images, you can use the [buildDockerImage.sh](dockerfiles/buildDockerImage.sh) script. See below for instructions and usage. +In this guide, we are referring to Oracle Linux 8 onwards as the Podman Host, and Oracle Linux 7 as the Docker Host machines. -The `buildDockerImage.sh` script is just a utility shell script that performs MD5 checks and is an easy way for beginners to get started. Expert users are welcome to directly call `docker build` with their prefered set of parameters. Go into the **dockerfiles** folder and run the **buildDockerImage.sh** script: +The `buildContainerImage.sh` script is just a utility shell script that performs MD5 checks. It provides an easy way for beginners to get started. Expert users are welcome to directly call `podman build` with their preferred set of parameters. Go into the **containerfiles** folder and run the **buildContainerImage.sh** script on your Podman host: ```bash -cd /docker-images/OracleDatabase/RAC/OracleRACStorageServer/dockerfiles -./buildDockerImage.sh -v 19.3.0 +./buildContainerImage.sh -v (Software Version) +./buildContainerImage.sh -v latest ``` -For detailed usage of command, please execute folowing command: +In a successful build, you see build messages similar to the following: ```bash -cd /docker-images/OracleDatabase/RAC/OracleRACStorageServer/dockerfiles -./buildDockerImage.sh -h -``` -### How to build NFS Storage Container Image on Podman Host - -You need to make sure that you have atleast 60GB space available for container to create the files for RAC storage. - -**IMPORTANT:** If you are behind the proxy, you need to set `http_proxy` and `https_proxy` env variable based on your enviornment before building the image. - -To assist in building the images, you can use the [buildDockerImage.sh](dockerfiles/buildDockerImage.sh) script. See below for instructions and usage. - -The `buildDockerImage.sh` script is just a utility shell script that performs MD5 checks and is an easy way for beginners to get started. Expert users are welcome to directly call `docker build` with their prefered set of parameters. Go into the **dockerfiles** folder and run the **buildDockerImage.sh** script: - -```bash -cd /docker-images/OracleDatabase/RAC/OracleRACStorageServer/dockerfiles -./buildDockerImage.sh -v latest -``` -You would see successful build message similar like below- -```bash - Oracle RAC Storage Server Podman Image version latest is ready to be extended: + Oracle RAC Storage Server Container Image version latest is ready to be extended: --> oracle/rac-storage-server:latest ``` -## Create Bridge Network -Before creating container, create the bridge private network for NFS storage container. +**NOTE**: To build an Oracle RAC storage Image for the Docker host, pass the version `ol7` to buildContainerImage.sh -On the host- +For detailed usage notes for this script, run the following command: ```bash -docker network create --driver=bridge --subnet=192.168.17.0/24 rac_priv1_nw +./buildContainerImage.sh -h +Usage: buildContainerImage.sh -v [version] [-o] [Docker build option] +Builds a Docker Image for Oracle Database. 
+ +Parameters: + -v: version to build + Choose one of: latest ol7 + Choose "latest" version for podman host machines + Choose "ol7" for docker host machines + -o: passes on Docker build option ``` -**Note:** You can change subnet according to your environment. - +### Create Bridge Network +Before creating the container, create the bridge public network for the NFS storage container. -## NFS Server installation on Host -Ensure to install NFS server rpms on host to utilize NFS volumes in containers- +The following are examples of creating `bridge`, `macvlan` or `ipvlan` [networks](https://docs.podman.io/en/latest/markdown/podman-network-create.1.html). +Example of creating bridge networks: ```bash -yum -y install nfs-utils +podman network create --driver=bridge --subnet=10.0.20.0/24 rac_pub1_nw ``` -## Running RACStorageServer container - -### RAC Storage container for Docker Host Machine - -#### Prerequisites for RAC Storage Container for Docker Host - -Create placeholder for NFS storage and make sure it is empty - +Example of creating macvlan networks: ```bash -export ORACLE_DBNAME=ORCLCDB -mkdir -p /docker_volumes/asm_vol/$ORACLE_DBNAME -rm -rf /docker_volumes/asm_vol/$ORACLE_DBNAME/asm_disk0* +podman network create -d macvlan --subnet=10.0.20.0/24 -o parent=ens5 rac_pub1_nw ``` -Execute following command to create the container: - +Example of creating ipvlan networks: ```bash -export ORACLE_DBNAME=ORCLCDB -docker run -d -t --hostname racnode-storage \ ---dns-search=example.com --cap-add SYS_ADMIN --cap-add AUDIT_WRITE \ ---volume /docker_volumes/asm_vol/$ORACLE_DBNAME:/oradata --init \ ---network=rac_priv1_nw --ip=192.168.17.80 --tmpfs=/run \ ---volume /sys/fs/cgroup:/sys/fs/cgroup:ro \ ---name racnode-storage oracle/rac-storage-server:19.3.0 +podman network create -d ipvlan --subnet=10.0.20.0/24 -o parent=ens5 rac_pub1_nw ``` -**IMPORTANT:** During the container startup 5 files named as `asm_disk0[1-5].img` will be created under /oradata.If the files are already present, they will not be recreated.These files can be used for ASM storage in RAC containers. - -**NOTE**: Expose directory to container which has atleast 60GB. In the above example, we are using `/docker_volumes/asm_vol/$ORACLE_DBNAME` and you need to change values according to your env. Inside container, it will be /oradata and do not change this. - -In the above example, we used **192.168.17.0/24** subnet for NFS server. You can change the subnet values according to your environment. Also, SELINUX must be disabled or in permissive mode in Docker Host Machine. - -To check the racstorage container/services creation logs , please tail docker logs. It will take 10 minutes to create the racnode-storage container service. +**Note:** You can change the subnet and parent network interfaces according to your environment. +### NFS Server installation on Podman Host +To use NFS volumes in containers, you must install NFS server rpms on the Podman host. For example: ```bash -docker logs -f racnode-storage +dnf install -y nfs-utils ``` -you should see following in docker logs output: +### SELinux Configuration on Podman Host +If SELinux is enabled on the Podman host then you must install another SELINUX module, specifically allowing permissions to write to the Podman host. To check if your SELinux is enabled or not, run the `getenforce` command. 
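+For example, a quick check of the current mode might look like the following (a minimal sketch; the output depends on your host configuration):
+```bash
+# Output is Enforcing, Permissive, or Disabled; the module described below is
+# needed when the mode is Enforcing.
+getenforce
+# sestatus prints additional detail, including the loaded policy name.
+sestatus
+```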
+ +Copy [rac-storage.te](./rac-storage.te) to `/var/opt` folder in your host and then execute below- ```bash -################################################# -runOracle.sh: NFS Server is up and running -Create NFS volume for /oradata -################################################# +cd /var/opt +make -f /usr/share/selinux/devel/Makefile rac-storage.pp +semodule -i rac-storage.pp +semodule -l | grep rac-storage ``` - -### RAC Storage Container for Podman Host Machine +### Oracle RAC Storage Container for Podman Host +Run the following command to create the container: #### Prerequisites for RAC Storage Container for Podman Host -Create placeholder for NFS storage and make sure it is empty - +Create a placeholder for NFS storage and ensure that it is empty: ```bash export ORACLE_DBNAME=ORCLCDB mkdir -p /scratch/stage/rac-storage/$ORACLE_DBNAME rm -rf /scratch/stage/rac-storage/$ORACLE_DBNAME/asm_disk0* ``` - -If SELinux is enabled on Podman Host (you can check by running `sestatus` command), then execute below to make SELinux policy as `permissive` and reboot the host machine. This will allow permissions to write to `asm-disks*` in the `/oradata` folder inside the podman containers- +If SELinux host is enabled on the machine, then run the following command: ```bash -sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config -reboot +semanage fcontext -a -t container_file_t /scratch/stage/rac-storage/$ORACLE_DBNAME +restorecon -v /scratch/stage/rac-storage/$ORACLE_DBNAME ``` - -Execute following command to create the container: +#### Deploying Oracle RAC Storage Container for Podman Host +If you are building an Oracle RAC storage container for the Podman Host, then you can use the following commands: ```bash export ORACLE_DBNAME=ORCLCDB podman run -d -t \ --hostname racnode-storage \ - --dns-search=example.com \ + --dns-search=example.info \ + --dns 10.0.20.25 \ --cap-add SYS_ADMIN \ --cap-add AUDIT_WRITE \ --cap-add NET_ADMIN \ + -e DNS_SERVER=10.0.20.25 \ + -e DOMAIN=example.info \ --volume /scratch/stage/rac-storage/$ORACLE_DBNAME:/oradata \ - --network=rac_priv1_nw \ - --ip=192.168.17.80 \ + --network=rac_pub1_nw --ip=10.0.20.80 \ --systemd=always \ --restart=always \ --name racnode-storage \ localhost/oracle/rac-storage-server:latest ``` -To check the racstorage container/services creation logs , please tail docker logs. It will take 10 minutes to create the racnode-storage container service. +To check the Oracle RAC storage container and services creation logs, you can run a tail command on the Docker logs. It should take approximately 10 minutes to create the racnode-storage container service. ```bash podman exec racnode-storage tail -f /tmp/storage_setup.log ``` -You would see successful message like below - +In a successful deployment, you should see messages similar to the following: ```bash +Export list for racnode-storage: +/oradata * ################################################# Setup Completed ################################################# ``` -**NOTE**: Expose directory to container which has atleast 60GB. In the above example, we are using `/scratch/stage/rac-storage/$ORACLE_DBNAME` and you need to change values according to your env. Inside container, it will be /oradata and do not change this. -In the above example, we used **192.168.17.0/24** subnet for NFS server. You can change the subnet values according to your environment. 
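+Optionally, you can verify that the ASM disk files and the NFS export were created inside the container. This is a minimal check, assuming the container name `racnode-storage` used in the preceding example:
+```bash
+# List the ASM disk image files created under /oradata inside the container
+podman exec racnode-storage ls -lh /oradata
+# Confirm that /oradata is exported by the NFS server running in the container
+podman exec racnode-storage showmount -e
+```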
+### Oracle RAC Storage container for Docker Host + +To use NFS volumes in containers, you must install NFS server rpms on the Podman host: + +```bash +yum install -y nfs-utils +``` +#### Prerequisites for an Oracle RAC Storage Container for Docker Host + +Create a placeholder for NFS storage, and ensure that it is empty: +```bash +export ORACLE_DBNAME=ORCLCDB +mkdir -p /scratch/docker_volumes/asm_vol/$ORACLE_DBNAME +rm -rf /scratch/docker_volumes/asm_vol/$ORACLE_DBNAME/asm_disk0* +``` + +#### Deploying Oracle RAC Storage Container for Docker Host + +If you are building an Oracle RAC storage container on Docker host machines, then run the following commands: + +```bash +export ORACLE_DBNAME=ORCLCDB +docker run -d -t \ +--hostname racnode-storage \ +--dns-search=example.info \ +--cap-add SYS_ADMIN \ +--cap-add AUDIT_WRITE \ +--volume /scratch/docker_volumes/asm_vol/$ORACLE_DBNAME:/oradata --init \ +--network=rac_pub1_nw --ip=10.0.20.80 \ +--tmpfs=/run \ +--volume /sys/fs/cgroup:/sys/fs/cgroup:ro \ +--name racnode-storage \ +oracle/rac-storage-server:ol7 +``` + +To check the Oracle RAC storage container and services creation logs, you can run a tail command on the Docker logs. It should take 10 minutes to create the racnode-storage container service. + +```bash +docker logs -f racnode-storage +``` + +**IMPORTANT:** During the container startup, five files with the name `asm_disk0[1-5].img` will be created under `/oradata`. If the files are already present, then they will not be recreated. These files can be used for ASM storage in Oracle RAC containers. + +**NOTE**: Expose the directory to a container that has at least 60 GB. In the preceding example, we are using `/scratch/stage/rac-storage/$ORACLE_DBNAME`. Change these values according to your environment. Inside the container, the directory will be `/oradata`. Do not change this. -**Note** : If SELINUX is enabled on the Podman host, then you must create an SELinux policy for Oracle RAC on Podman. For details about this procedure, see "How to Configure Podman for SELinux Mode" in the publication [Oracle Real Application Clusters Installation Guide for Podman Oracle Linux x86-64](https://docs.oracle.com/en/database/oracle/oracle-database/21/racpd/target-configuration-oracle-rac-podman.html#GUID-59138DF8-3781-4033-A38F-E0466884D008). +In the preceding example, we use **192.168.17.0/24** as the subnet for the NFS server. You can change the subnet values according to your environment. +You should see following in the Docker logs output: -**IMPORTANT:** During the container startup 5 files named as `asm_disk0[1-5].img` will be created under /oradata.If the files are already present, they will not be recreated.These files can be used for ASM storage in RAC containers. + +**IMPORTANT:** The NFS volume must be `/oradata`, which you will export to Oracle RAC containers for ASM storage. It will take approximately 10 minutes to set up the NFS server. 
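+Before creating the NFS volume in the next step, you can optionally confirm that the export is reachable from the host. This sketch assumes the storage container IP `10.0.20.80` used in the preceding examples (substitute `192.168.17.80` or your own address if your subnet differs) and that `nfs-utils` is installed on the host:
+```bash
+# Query the exports published by the storage container's NFS server
+showmount -e 10.0.20.80
+# A successful setup lists an entry similar to: /oradata *
+```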
### Create NFS Volume -Create NFS volume using following command on Podman Host: +#### Create NFS volume using the following command on the Podman Host + +```bash +podman volume create --driver local \ +--opt type=nfs \ +--opt o=addr=10.0.20.80,rw,bg,hard,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 \ +--opt device=10.0.20.80:/oradata \ +racstorage +``` + +#### Create NFS volume using following command on Docker Host ```bash docker volume create --driver local \ @@ -192,11 +217,17 @@ docker volume create --driver local \ --opt device=192.168.17.80:/oradata \ racstorage ``` +**IMPORTANT:** If you are not using the 192.168.17.0/24 subnet then you must change **addr=192.168.17.80** based on your environment. -**IMPORTANT:** If you are not using 192.168.17.0/24 subnet then you need to change **addr=192.168.17.25** based on your environment. +## Environment variables explained -**IMPORTANT:** The NFS volume must be `/oradata` which you will export to RAC containers for ASM storage. It will take 10 minutes for setting up NFS server. +| Environment Variable | Description | +|----------------------|-----------------| +| DNS_SERVER | Default set to 10.0.20.25. Specify the comma-separated list of DNS server IP addresses where both Oracle RAC nodes are resolved. | +| DOMAIN | Default set to example.info. Specify the domain details for Oracle RAC Container Environment. | -## Copyright +## License +Unless otherwise noted, all scripts and files hosted in this repository that are required to build the container images are under UPL 1.0 license. -Copyright (c) 2014-2024 Oracle and/or its affiliates. All rights reserved. \ No newline at end of file +## Copyright +Copyright (c) 2014-2024 Oracle and/or its affiliates. diff --git a/OracleDatabase/RAC/OracleRACStorageServer/README1.md b/OracleDatabase/RAC/OracleRACStorageServer/README1.md new file mode 100644 index 0000000000..77c2cde84f --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/README1.md @@ -0,0 +1,202 @@ +# Oracle ASM on NFS Server for RAC testing +Sample Docker and Podman build files to facilitate installation, configuration, and environment setup for DevOps users. + +**IMPORTANT:** This image can be used to setup ASM on NFS for RAC. You can skip if you have physical block devices or NAS server for Oracle RAC and Grid. You need to make sure that NFS server container must be up and running for RAC functioning. This image is for only testing purpose. 
+ +Refer below instructions for setup of NFS Container for RAC - + +- [Oracle ASM on NFS Server for RAC testing](#oracle-asm-on-nfs-server-for-rac-testing) +- [How to build NFS Storage Container Image](#how-to-build-nfs-storage-container-image) + - [How to build NFS Storage Container Image on Docker Host](#how-to-build-nfs-storage-container-image-on-docker-host) + - [How to build NFS Storage Container Image on Podman Host](#how-to-build-nfs-storage-container-image-on-podman-host) +- [Create Bridge Network](#create-bridge-network) +- [NFS Server installation on Host](#nfs-server-installation-on-host) +- [Running RACStorageServer container](#running-racstorageserver-container) + - [RAC Storage container for Docker Host Machine](#rac-storage-container-for-docker-host-machine) + - [RAC Storage Container for Podman Host Machine](#rac-storage-container-for-podman-host-machine) +- [Create NFS Volume](#create-nfs-volume) +- [Copyright](#copyright) + +## How to build NFS Storage Container Image + +### How to build NFS Storage Container Image on Docker Host +You need to make sure that you have atleast 60GB space available for container to create the files for RAC storage. + +**IMPORTANT:** If you are behind the proxy, you need to set http_proxy env variable based on your enviornment before building the image. Please ensure that you have the `podman-docker` package installed on your OL8 Podman host to run the command using the docker utility. +```bash +dnf install podman-docker -y +``` + +To assist in building the images, you can use the [buildDockerImage.sh](dockerfiles/buildDockerImage.sh) script. See below for instructions and usage. + +The `buildDockerImage.sh` script is just a utility shell script that performs MD5 checks and is an easy way for beginners to get started. Expert users are welcome to directly call `docker build` with their prefered set of parameters. Go into the **dockerfiles** folder and run the **buildDockerImage.sh** script: + +```bash +cd /docker-images/OracleDatabase/RAC/OracleRACStorageServer/dockerfiles +./buildDockerImage.sh -v 19.3.0 +``` + +For detailed usage of command, please execute folowing command: +```bash +cd /docker-images/OracleDatabase/RAC/OracleRACStorageServer/dockerfiles +./buildDockerImage.sh -h +``` +### How to build NFS Storage Container Image on Podman Host + +You need to make sure that you have atleast 60GB space available for container to create the files for RAC storage. + +**IMPORTANT:** If you are behind the proxy, you need to set `http_proxy` and `https_proxy` env variable based on your enviornment before building the image. + +To assist in building the images, you can use the [buildDockerImage.sh](dockerfiles/buildDockerImage.sh) script. See below for instructions and usage. + +The `buildDockerImage.sh` script is just a utility shell script that performs MD5 checks and is an easy way for beginners to get started. Expert users are welcome to directly call `docker build` with their prefered set of parameters. Go into the **dockerfiles** folder and run the **buildDockerImage.sh** script: + +```bash +cd /docker-images/OracleDatabase/RAC/OracleRACStorageServer/dockerfiles +./buildDockerImage.sh -v latest +``` +You would see successful build message similar like below- +```bash + Oracle RAC Storage Server Podman Image version latest is ready to be extended: + + --> oracle/rac-storage-server:latest +``` + +## Create Bridge Network +Before creating container, create the bridge private network for NFS storage container. 
+ +On the host- +```bash +docker network create --driver=bridge --subnet=192.168.17.0/24 rac_priv1_nw +``` + +**Note:** You can change subnet according to your environment. + + +## NFS Server installation on Host +Ensure to install NFS server rpms on host to utilize NFS volumes in containers- + +```bash +yum -y install nfs-utils +``` +## Running RACStorageServer container + +### RAC Storage container for Docker Host Machine + +#### Prerequisites for RAC Storage Container for Docker Host + +Create placeholder for NFS storage and make sure it is empty - +```bash +export ORACLE_DBNAME=ORCLCDB +mkdir -p /docker_volumes/asm_vol/$ORACLE_DBNAME +rm -rf /docker_volumes/asm_vol/$ORACLE_DBNAME/asm_disk0* +``` + +Execute following command to create the container: + +```bash +export ORACLE_DBNAME=ORCLCDB +docker run -d -t --hostname racnode-storage \ +--dns-search=example.com --cap-add SYS_ADMIN --cap-add AUDIT_WRITE \ +--volume /docker_volumes/asm_vol/$ORACLE_DBNAME:/oradata --init \ +--network=rac_priv1_nw --ip=192.168.17.80 --tmpfs=/run \ +--volume /sys/fs/cgroup:/sys/fs/cgroup:ro \ +--name racnode-storage oracle/rac-storage-server:19.3.0 +``` + +**IMPORTANT:** During the container startup 5 files named as `asm_disk0[1-5].img` will be created under /oradata.If the files are already present, they will not be recreated.These files can be used for ASM storage in RAC containers. + +**NOTE**: Expose directory to container which has atleast 60GB. In the above example, we are using `/docker_volumes/asm_vol/$ORACLE_DBNAME` and you need to change values according to your env. Inside container, it will be /oradata and do not change this. + +In the above example, we used **192.168.17.0/24** subnet for NFS server. You can change the subnet values according to your environment. Also, SELINUX must be disabled or in permissive mode in Docker Host Machine. + +To check the racstorage container/services creation logs , please tail docker logs. It will take 10 minutes to create the racnode-storage container service. + +```bash +docker logs -f racnode-storage +``` + +you should see following in docker logs output: + +```bash +################################################# +runOracle.sh: NFS Server is up and running +Create NFS volume for /oradata +################################################# +``` + +### RAC Storage Container for Podman Host Machine + +#### Prerequisites for RAC Storage Container for Podman Host + +Create placeholder for NFS storage and make sure it is empty - +```bash +export ORACLE_DBNAME=ORCLCDB +mkdir -p /scratch/stage/rac-storage/$ORACLE_DBNAME +rm -rf /scratch/stage/rac-storage/$ORACLE_DBNAME/asm_disk0* +``` + +If SELinux is enabled on Podman Host (you can check by running `sestatus` command), then execute below to make SELinux policy as `permissive` and reboot the host machine. 
This will allow permissions to write to `asm-disks*` in the `/oradata` folder inside the podman containers- +```bash +sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config +reboot +``` + +Execute following command to create the container: + +```bash +export ORACLE_DBNAME=ORCLCDB +podman run -d -t \ + --hostname racnode-storage \ + --dns-search=example.com \ + --cap-add SYS_ADMIN \ + --cap-add AUDIT_WRITE \ + --cap-add NET_ADMIN \ + --volume /scratch/stage/rac-storage/$ORACLE_DBNAME:/oradata \ + --network=rac_priv1_nw \ + --ip=192.168.17.80 \ + --systemd=always \ + --restart=always \ + --name racnode-storage \ + localhost/oracle/rac-storage-server:latest +``` + +To check the racstorage container/services creation logs , please tail docker logs. It will take 10 minutes to create the racnode-storage container service. + +```bash +podman exec racnode-storage tail -f /tmp/storage_setup.log +``` +You would see successful message like below - +```bash +################################################# + Setup Completed +################################################# +``` + +**NOTE**: Expose directory to container which has atleast 60GB. In the above example, we are using `/scratch/stage/rac-storage/$ORACLE_DBNAME` and you need to change values according to your env. Inside container, it will be /oradata and do not change this. + +In the above example, we used **192.168.17.0/24** subnet for NFS server. You can change the subnet values according to your environment. + +**Note** : If SELINUX is enabled on the Podman host, then you must create an SELinux policy for Oracle RAC on Podman. For details about this procedure, see "How to Configure Podman for SELinux Mode" in the publication [Oracle Real Application Clusters Installation Guide for Podman Oracle Linux x86-64](https://docs.oracle.com/en/database/oracle/oracle-database/21/racpd/target-configuration-oracle-rac-podman.html#GUID-59138DF8-3781-4033-A38F-E0466884D008). + + +**IMPORTANT:** During the container startup 5 files named as `asm_disk0[1-5].img` will be created under /oradata.If the files are already present, they will not be recreated.These files can be used for ASM storage in RAC containers. + +### Create NFS Volume +Create NFS volume using following command on Podman Host: + +```bash +docker volume create --driver local \ +--opt type=nfs \ +--opt o=addr=192.168.17.80,rw,bg,hard,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 \ +--opt device=192.168.17.80:/oradata \ +racstorage +``` + +**IMPORTANT:** If you are not using 192.168.17.0/24 subnet then you need to change **addr=192.168.17.25** based on your environment. + +**IMPORTANT:** The NFS volume must be `/oradata` which you will export to RAC containers for ASM storage. It will take 10 minutes for setting up NFS server. + +## Copyright + +Copyright (c) 2014-2024 Oracle and/or its affiliates. All rights reserved. \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/buildContainerImage.sh b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/buildContainerImage.sh new file mode 100755 index 0000000000..77912c5491 --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/buildContainerImage.sh @@ -0,0 +1,132 @@ +#!/bin/bash +############################# +# Copyright (c) 2024, Oracle and/or its affiliates. 
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com, sanjay.singh@oracle.com +############################ + +usage() { + cat << EOF + +Usage: buildContainerImage.sh -v [version] [-o] [Docker build option] +Builds a Docker Image for Oracle Database. + +Parameters: + -v: version to build + Choose "latest" version for podman host machines + Choose "ol7" version for docker host machines + -o: passes on Docker build option + +############################# +# Copyright (c) 2024, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +############################ + +EOF + exit 0 +} + +############## +#### MAIN #### +############## + +# Parameters +VERSION="latest" +export SKIPMD5=0 +DOCKEROPS="" + +while getopts "hiv:o:" optname; do + case "$optname" in + "h") + usage + ;; + "v") + VERSION="$OPTARG" + ;; + "o") + DOCKEROPS="$OPTARG" + ;; + "?") + usage; + exit 1; + ;; + *) + # Should not occur + echo "Unknown error while processing options inside buildContainerImage.sh" + ;; + esac +done + +# Oracle Database Image Name +IMAGE_NAME="oracle/rac-storage-server:$VERSION" +if command -v docker &>/dev/null; then + CONTAINER_BUILD_TOOL="docker" +elif command -v podman &>/dev/null; then + CONTAINER_BUILD_TOOL="podman" +else + echo "Neither Docker nor Podman is installed. Please install either Docker or Podman to proceed." + exit 1 +fi +# Go into version folder +cd "$VERSION" || exit + +echo "==========================" +echo "DOCKER info:" +docker info +echo "==========================" + +# Proxy settings +PROXY_SETTINGS="" +# shellcheck disable=SC2154 +if [ "${http_proxy}" != "" ]; then + PROXY_SETTINGS="$PROXY_SETTINGS --build-arg http_proxy=${http_proxy}" +fi +# shellcheck disable=SC2154 +if [ "${https_proxy}" != "" ]; then + PROXY_SETTINGS="$PROXY_SETTINGS --build-arg https_proxy=${https_proxy}" +fi +# shellcheck disable=SC2154 +if [ "${ftp_proxy}" != "" ]; then + PROXY_SETTINGS="$PROXY_SETTINGS --build-arg ftp_proxy=${ftp_proxy}" +fi +# shellcheck disable=SC2154 +if [ "${no_proxy}" != "" ]; then + PROXY_SETTINGS="$PROXY_SETTINGS --build-arg no_proxy=${no_proxy}" +fi +# shellcheck disable=SC2154 +if [ "$PROXY_SETTINGS" != "" ]; then + echo "Proxy settings were found and will be used during the build." +fi + +# ################## # +# BUILDING THE IMAGE # +# ################## # +echo "Building image '$IMAGE_NAME' ..." + +# BUILD THE IMAGE (replace all environment variables) +BUILD_START=$(date '+%s') +# shellcheck disable=SC2086 +$CONTAINER_BUILD_TOOL build --force-rm=true --no-cache=true $DOCKEROPS $PROXY_SETTINGS -t $IMAGE_NAME -f Containerfile . || { + echo "There was an error building the image." + exit 1 +} +BUILD_END=$(date '+%s') +# shellcheck disable=SC2154,SC2003 +BUILD_ELAPSED=$((BUILD_END - BUILD_START)) + +echo "" +# shellcheck disable=SC2181,SC2320 +if [ $? -eq 0 ]; then +cat << EOF + Oracle RAC Storage Server Container Image version $VERSION is ready to be extended: + + --> $IMAGE_NAME + + Build completed in $BUILD_ELAPSED seconds. + +EOF + +else + echo "Oracle RAC Storage Server Docker Image was NOT successfully created. Check the output and correct any reported problems with the docker build operation." 
+fi \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/Containerfile b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/Containerfile new file mode 100644 index 0000000000..52f73f6486 --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/Containerfile @@ -0,0 +1,64 @@ +############################# +# Copyright (c) 2024, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +############################ +# +# ORACLE CONTAINERFILES PROJECT +# -------------------------- +# This is the Containerfile for Oracle Database RAC Storage Server. This file create NFS server for ASM storage. +# +# HOW TO BUILD THIS IMAGE +# ----------------------- +# Put all downloaded files in the same directory as this Containerfile +# Run: +# $ podman build -t oracle/rac-storage-server:latest. +# +# Pull base image +# --------------- +FROM oraclelinux:8 + +# Environment variables required for this build (do NOT change) +# ------------------------------------------------------------- +# Linux Env Variable +ENV SETUP_LINUX_FILE="setupLinuxEnv.sh" \ + INSTALL_DIR=/opt/scripts \ + EXPORTFILE=exportfile \ + RUN_FILE="runOracle.sh" \ + SUDO_SETUP_FILE="setupSudo.sh" \ + INITSH="initsh" \ + BIN="/usr/sbin" \ + ORADATA="/oradata" \ + container="true" +# Use second ENV so that variable get substituted +ENV INSTALL_SCRIPTS=$INSTALL_DIR/install \ + SCRIPT_DIR=$INSTALL_DIR/startup + +# Copy binaries +# ------------- +# Copy Linux setup Files +COPY $SETUP_LINUX_FILE $SUDO_SETUP_FILE $INSTALL_SCRIPTS/ + +# Setup Scripts +COPY $RUN_FILE $EXPORTFILE $INITSH $SCRIPT_DIR/ + +RUN chmod 755 $INSTALL_DIR/install/*.sh && \ + $INSTALL_DIR/install/$SETUP_LINUX_FILE && \ + $INSTALL_DIR/install/$SUDO_SETUP_FILE && \ + sync + +RUN rm -rf $INSTALL_DIR/install && \ + chmod 755 $SCRIPT_DIR/*.sh && \ + echo "nohup $SCRIPT_DIR/runOracle.sh &" >> /etc/rc.local && \ + chmod +x /etc/rc.d/rc.local && \ + cp $SCRIPT_DIR/$INITSH /usr/bin/$INITSH && \ + chmod 755 /usr/bin/$INITSH && \ + chmod 666 $SCRIPT_DIR/$EXPORTFILE + +USER root +VOLUME ["/oradata"] +WORKDIR /workdir + +# Define default command to start Oracle Database. +# hadolint ignore=DL3025 +ENTRYPOINT /usr/bin/$INITSH diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/checkSpace.sh b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/checkSpace.sh new file mode 100755 index 0000000000..5c0c3ddc13 --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/checkSpace.sh @@ -0,0 +1,23 @@ +#!/bin/bash +# +############################# +# Copyright (c) 2024, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +############################ +# +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +REQUIRED_SPACE_GB=5 +AVAILABLE_SPACE_GB=`df -PB 1G / | tail -n 1 | awk '{print $4}'` + +if [ $AVAILABLE_SPACE_GB -lt $REQUIRED_SPACE_GB ]; then + script_name=`basename "$0"` + echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" + echo "$script_name: ERROR - There is not enough space available in the docker container." + echo "$script_name: The container needs at least $REQUIRED_SPACE_GB GB , but only $AVAILABLE_SPACE_GB available." 
+ echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" + exit 1; +fi; diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/exportfile b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/exportfile new file mode 100644 index 0000000000..3fb4d631e0 --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/exportfile @@ -0,0 +1 @@ +/oradata *(rw,sync,no_wdelay,no_root_squash,insecure) diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/initsh b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/initsh new file mode 100755 index 0000000000..70b02bc084 --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/initsh @@ -0,0 +1,10 @@ +#!/bin/bash +# Copyright (c) 2023, Oracle and/or its affiliates +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/ + +echo "Creating env variables file /etc/storage_env_vars" +/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /etc/storage_env_vars" +/bin/bash -c "sed -i -e 's/^/export /' /etc/storage_env_vars" + +echo "Starting Systemd" +exec /lib/systemd/systemd diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/runOracle.sh b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/runOracle.sh new file mode 100755 index 0000000000..c2b1e21bbf --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/runOracle.sh @@ -0,0 +1,172 @@ +#!/bin/bash +# +############################# +# Copyright (c) 2024, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com, sanjay.singh@oracle.com +############################ +# Description: Runs NFS server inside the container +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +if [ -f /etc/storage_env_vars ]; then +# shellcheck disable=SC1091 + source /etc/storage_env_vars +fi + +logfile="/tmp/storage_setup.log" + +touch $logfile +chmod 666 $logfile +# shellcheck disable=SC2034,SC2086 +progname="$(basename $0)" + +####################### Constants ################# +# shellcheck disable=SC2034 +declare -r FALSE=1 +# shellcheck disable=SC2034 +declare -r TRUE=0 +export REQUIRED_SPACE_GB=55 +export ORADATA=/oradata +export INSTALL_COMPLETED_FILE="/workdir/installcomplete" +export FILE_COUNT=0 +################################################## + +check_space () +{ + local REQUIRED_SPACE_GB=$1 + # shellcheck disable=SC2006 + AVAILABLE_SPACE_GB=`df -B 1G $ORADATA | tail -n 1 | awk '{print $4}'` + if [ ! -f ${INSTALL_COMPLETED_FILE} ] ;then + # shellcheck disable=SC2086 + if [ $AVAILABLE_SPACE_GB -lt $REQUIRED_SPACE_GB ]; then + # shellcheck disable=SC2006 + script_name=`basename "$0"` + echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" | tee -a $logfile + echo "$script_name: ERROR - There is not enough space available in the docker container under $ORADATA." | tee -a $logfile + echo "$script_name: The container needs at least $REQUIRED_SPACE_GB GB , but only $AVAILABLE_SPACE_GB available." | tee -a $logfile + echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" 
| tee -a $logfile + exit 1; + else + echo " Space check passed : $ORADATA has available space $AVAILABLE_SPACE_GB and ASM storage set to $REQUIRED_SPACE_GB" | tee -a $logfile + fi; + fi; +} +####################################### ETC Host Function ############################################################# + +setupEtcResolvConf() +{ +local stat=3 +# shellcheck disable=SC2154 +if [ "$action" == "" ]; then +# shellcheck disable=SC2236 + if [ ! -z "${DNS_SERVER}" ] ; then + sudo sh -c "echo \"search ${DOMAIN}\" > /etc/resolv.conf" + sudo sh -c "echo \"nameserver ${DNS_SERVER}\" >> /etc/resolv.conf" + fi +fi + +} + +SetupEtcHosts() +{ +# shellcheck disable=SC2034 +local stat=3 +# shellcheck disable=SC2034 +local HOST_LINE +if [ "$action" == "" ]; then +# shellcheck disable=SC2236 + if [ ! -z "${HOSTFILE}" ]; then + if [ -f "${HOSTFILE}" ]; then + sudo sh -c "cat \"${HOSTFILE}\" > /etc/hosts" + fi + else + sudo sh -c "echo -e \"127.0.0.1\tlocalhost.localdomain\tlocalhost\" > /etc/hosts" + sudo sh -c "echo -e \"$PUBLIC_IP\t$PUBLIC_HOSTNAME.$DOMAIN\t$PUBLIC_HOSTNAME\" >> /etc/hosts" + fi +fi + +} + + + + ################################### + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # + ############# MAIN ################ + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # + ################################### + + if [ ! -d "$ORADATA" ] ;then + echo "$ORADATA dir doesn't exist! exiting" | tee -a $logfile + exit 1 + fi + # shellcheck disable=SC2086 + if [ -z $ASM_STORAGE_SIZE_GB ] ;then + echo "ASM_STORAGE_SIZE env variable is not defined! Assigning 50GB default" | tee -a $logfile + ASM_STORAGE_SIZE_GB=50 + else + echo "ASM STORAGE SIZE set to : $ASM_STORAGE_SIZE_GB" | tee -a $logfile + fi + ####### Populating resolv.conf and /etc/hosts ### + setupEtcResolvConf + SetupEtcHosts + #################### + echo "Oracle user will be the owner for /oradata" | tee -a $logfile + sudo chown -R oracle:oinstall /oradata + + echo "Checking Space" | tee -a $logfile + check_space $ASM_STORAGE_SIZE_GB + # shellcheck disable=SC2004 + ASM_DISKS_SIZE=$(($ASM_STORAGE_SIZE_GB/5)) + count=1; + while [ $count -le 5 ]; + do + echo "Creating ASM Disks $ORADATA/asm_disk0$count.img of size $ASM_DISKS_SIZE" | tee -a $logfile + + if [ ! -f $ORADATA/asm_disk0$count.img ];then + dd if=/dev/zero of=$ORADATA/asm_disk0$count.img bs=1G count=$ASM_DISKS_SIZE + chown oracle:oinstall $ORADATA/asm_disk0$count.img + else + echo "$ORADATA/asm_disk0$count.img file already exist! 
Skipping file creation" | tee -a $logfile + fi + # shellcheck disable=SC2004 + count=$(($count+1)) + done + # shellcheck disable=SC2012 + FILE_COUNT=$(ls $ORADATA/asm_disk0* | wc -l) + # shellcheck disable=SC2086 + if [ ${FILE_COUNT} -ge 5 ];then + echo "Touching ${INSTALL_COMPLETED_FILE}" | tee -a $logfile + touch ${INSTALL_COMPLETED_FILE} + fi + + echo "#################################################" | tee -a $logfile + echo " Starting NFS Server Setup " | tee -a $logfile + echo "#################################################" | tee -a $logfile + + + echo "Starting Nfs Server" | tee -a $logfile + systemctl start nfs-utils.service | tee -a $logfile + systemctl restart rpcbind.service | tee -a $logfile + systemctl start nfs-server.service | tee -a $logfile + + echo "Checking Nfs Service" | tee -a $logfile + systemctl status nfs-utils.service | tee -a $logfile + + echo "Checking rpc bind service" + systemctl status rpcbind.service | tee -a $logfile + + echo "Setting up /etc/exports" + # shellcheck disable=SC2086,SC2002 + cat $SCRIPT_DIR/$EXPORTFILE | tee -a /etc/exports + + echo "Exporting File System" + sudo /usr/sbin/exportfs -r | tee -a $logfile + + echo "Checking exported mountpoints" | tee -a $logfile + showmount -e | tee -a $logfile + + echo "#################################################" | tee -a $logfile + echo " Setup Completed " | tee -a $logfile + echo "#################################################" | tee -a $logfile diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/setupLinuxEnv.sh b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/setupLinuxEnv.sh new file mode 100755 index 0000000000..131fc0bd10 --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/setupLinuxEnv.sh @@ -0,0 +1,33 @@ +#!/bin/bash +############################# +# Copyright (c) 2024, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com, sanjay.singh@oracle.com +############################ +# Description: Sets up the unix environment for DB installation. +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. 
+# + +# Setup filesystem and oracle user +# Adjust file permissions, go to /opt/oracle as user 'oracle' to proceed with Oracle installation +# ------------------------------------------------------------ +mkdir /oradata && \ +chmod ug+x /opt/scripts/startup/*.sh && \ +if grep -q "Oracle Linux Server release 9" /etc/oracle-release; then \ + dnf install -y oracle-database-preinstall-23ai && \ + cp /etc/security/limits.d/oracle-database-preinstall-23ai.conf /etc/security/limits.d/grid-database-preinstall-23ai.conf && \ + sed -i 's/oracle/grid/g' /etc/security/limits.d/grid-database-preinstall-23ai.conf && \ + rm -f /etc/systemd/system/oracle-database-preinstall-23ai-firstboot.service && \ + sed -i 's/^TasksMax\S*/TasksMax=80%/g' /usr/lib/systemd/system/user-.slice.d/10-defaults.conf && \ + dnf clean all; \ +else \ + dnf -y install oraclelinux-developer-release-el8 && \ + dnf -y install oracle-database-preinstall-23ai && \ + cp /etc/security/limits.d/oracle-database-preinstall-23ai.conf /etc/security/limits.d/grid-database-preinstall-23ai.conf && \ + sed -i 's/oracle/grid/g' /etc/security/limits.d/grid-database-preinstall-23ai.conf && \ + rm -f /etc/rc.d/init.d/oracle-database-preinstall-23ai-firstboot && \ + dnf clean all; \ +fi && \ +dnf -y install net-tools which zip unzip tar openssh-server vim-minimal which vim-minimal passwd sudo nfs-utils && \ +dnf clean all diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/setupSudo.sh b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/setupSudo.sh new file mode 100755 index 0000000000..9d43d06931 --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/latest/setupSudo.sh @@ -0,0 +1,13 @@ +#!/bin/bash +############################# +# Copyright (c) 2024, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com, sanjay.singh@oracle.com +############################ +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +chmod 666 /etc/sudoers +echo "oracle ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers +chmod 440 /etc/sudoers diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/Containerfile b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/Containerfile new file mode 100644 index 0000000000..dbb53abc36 --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/Containerfile @@ -0,0 +1,57 @@ +# LICENSE UPL 1.0 +# +# Copyright (c) 1982-2018 Oracle and/or its affiliates. All rights reserved. +# +# ORACLE DOCKERFILES PROJECT +# -------------------------- +# This is the Dockerfile for Oracle Database 18c RAC Storage Server. This file create NFS server for ASM storage. +# +# HOW TO BUILD THIS IMAGE +# ----------------------- +# Put all downloaded files in the same directory as this Dockerfile +# Run: +# $ docker build -t oracle/rac-storage-server:19.3.0. 
+# +# Pull base image +# --------------- +FROM oraclelinux:7-slim + +# Environment variables required for this build (do NOT change) +# ------------------------------------------------------------- +# Linux Env Variable +ENV SETUP_LINUX_FILE="setupLinuxEnv.sh" \ + INSTALL_DIR=/opt/scripts \ + EXPORTFILE=exportfile \ + RUN_FILE="runOracle.sh" \ + SUDO_SETUP_FILE="setupSudo.sh" \ + BIN="/usr/sbin" \ + ORADATA="/oradata" \ + container="true" +# Use second ENV so that variable get substituted +ENV INSTALL_SCRIPTS=$INSTALL_DIR/install \ + SCRIPT_DIR=$INSTALL_DIR/startup + +# Copy binaries +# ------------- +# Copy Linux setup Files +COPY $SETUP_LINUX_FILE $SUDO_SETUP_FILE $INSTALL_SCRIPTS/ + +# Setup Scripts +COPY $RUN_FILE $EXPORTFILE $SCRIPT_DIR/ + +RUN chmod 755 $INSTALL_DIR/install/*.sh && \ + $INSTALL_DIR/install/$SETUP_LINUX_FILE && \ + $INSTALL_DIR/install/$SUDO_SETUP_FILE && \ + sync + +RUN rm -rf $INSTALL_DIR/install && \ + chmod 755 $SCRIPT_DIR/*.sh && \ + chmod 666 $SCRIPT_DIR/$EXPORTFILE + +USER oracle +VOLUME ["/oradata"] +WORKDIR /home/oracle + +# Define default command to start Oracle Database. + +CMD ["exec", "$SCRIPT_DIR/$RUN_FILE"] diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/checkSpace.sh b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/checkSpace.sh new file mode 100755 index 0000000000..3544ed3c06 --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/checkSpace.sh @@ -0,0 +1,23 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 1982-2018 Oracle and/or its affiliates. All rights reserved. +# +# Since: January, 2018 +# Author: paramdeep.saini@oracle.com, sanjay.singh@oracle.com +# Description: Checks the available space of the system. +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +REQUIRED_SPACE_GB=5 +AVAILABLE_SPACE_GB=`df -PB 1G / | tail -n 1 | awk '{print $4}'` + +if [ $AVAILABLE_SPACE_GB -lt $REQUIRED_SPACE_GB ]; then + script_name=`basename "$0"` + echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" + echo "$script_name: ERROR - There is not enough space available in the docker container." + echo "$script_name: The container needs at least $REQUIRED_SPACE_GB GB , but only $AVAILABLE_SPACE_GB available." + echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" + exit 1; +fi; diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/exportfile b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/exportfile new file mode 100644 index 0000000000..906122a523 --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/exportfile @@ -0,0 +1 @@ +/oradata *(rw,sync,no_wdelay,no_root_squash) diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/runOracle.sh b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/runOracle.sh new file mode 100755 index 0000000000..1fa1319256 --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/runOracle.sh @@ -0,0 +1,193 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 1982-2018 Oracle and/or its affiliates. All rights reserved. +# +# Since: January, 2018 +# Author: paramdeep.saini@oracle.com, sanjay.singh@oracle.com +# Description: Runs NFS server inside the container +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. 
+# + +if [ -f /etc/rac_env_vars ]; then +# shellcheck disable=SC1091 +source /etc/rac_env_vars +fi +logfile="/tmp/orod.log" + +touch $logfile +chmod 666 /tmp/orod.log +# shellcheck disable=SC2086,SC2034 +progname="$(basename $0)" + +####################### Constants ################# +# shellcheck disable=SC2034 +declare -r FALSE=1 +declare -r TRUE=0 +export REQUIRED_SPACE_GB=55 +export ORADATA=/oradata +export INSTALL_COMPLETED_FILE="/home/oracle/installcomplete" +export FILE_COUNT=0 +################################################## + +check_space () +{ +local REQUIRED_SPACE_GB=$1 +# shellcheck disable=SC2006,SC2086 +AVAILABLE_SPACE_GB=`df -B 1G $ORADATA | tail -n 1 | awk '{print $4}'` +# shellcheck disable=SC1009 +if [ ! -f ${INSTALL_COMPLETED_FILE} ] ;then +# shellcheck disable=SC2086 +if [ $AVAILABLE_SPACE_GB -lt $REQUIRED_SPACE_GB ]; then + # shellcheck disable=SC2006 + script_name=`basename "$0"` + echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" + echo "$script_name: ERROR - There is not enough space available in the docker container under $ORADATA." + echo "$script_name: The container needs at least $REQUIRED_SPACE_GB GB , but only $AVAILABLE_SPACE_GB available." + echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" + exit 1; +else + echo " Space check passed : $ORADATA has avilable space $AVAILABLE_SPACE_GB and ASM storage set to $REQUIRED_SPACE_GB" +fi; +fi; +} + +########### SIGINT handler ############ +function _int() { + echo "Stopping container." +local cmd +echo "Stopping nfs server" +sudo /usr/sbin/rpc.nfsd 0 +echo "Executing exportfs au" +sudo /usr/sbin/exportfs -au +echo "Executing exportfs f" +sudo /usr/sbin/exportfs -f +touch /tmp/stop +} + +########### SIGTERM handler ############ +function _term() { + echo "Stopping container." + echo "SIGTERM received, shutting down!" +local cmd +echo "Stopping nfs server" +sudo /usr/sbin/rpc.nfsd 0 +echo "Executing exportfs au" +sudo /usr/sbin/exportfs -au +echo "Executing exportfs f" +sudo /usr/sbin/exportfs -f +touch /tmp/sigterm +} + +########### SIGKILL handler ############ +function _kill() { + echo "SIGKILL received, shutting down database!" +# shellcheck disable=SC2034 +local cmd +echo "Stopping nfs server" +sudo /usr/sbin/rpc.nfsd 0 +echo "Executing exportfs au" +sudo /usr/sbin/exportfs -au +echo "Executing exportfs f" +sudo /usr/sbin/exportfs -f +touch /tmp/sigkill +} + +################################### +# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # +############# MAIN ################ +# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # +################################### + +# Set SIGINT handler +trap _int SIGINT + +# Set SIGTERM handler +trap _term SIGTERM + +# Set SIGKILL handler +# shellcheck disable=SC2173 +trap _kill SIGKILL + +if [ ! -d "$ORADATA" ] ;then +echo "$ORADATA dir doesn't exist! exiting" +exit 1 +fi +# shellcheck disable=SC2086 +if [ -z $ASM_STORAGE_SIZE_GB ] ;then +echo "ASM_STORAGE_SIZE env variable is not defined! Assigning 50GB default" +ASM_STORAGE_SIZE_GB=50 +else +echo "ASM STORAGE SIZE set to : $ASM_STORAGE_SIZE_GB" +fi + +echo "Oracle user will be the owner for /oradata" +sudo chown -R oracle:oinstall /oradata + +echo "Checking Space" +check_space $ASM_STORAGE_SIZE_GB +# shellcheck disable=SC2004 +ASM_DISKS_SIZE=$(($ASM_STORAGE_SIZE_GB/5)) +count=1; +while [ $count -le 5 ]; +do +echo "Creating ASM Disks $ORADATA/asm_disk0$count.img of size $ASM_DISKS_SIZE" + +if [ ! 
-f $ORADATA/asm_disk0$count.img ];then +dd if=/dev/zero of=$ORADATA/asm_disk0$count.img bs=1G count=$ASM_DISKS_SIZE +else +echo "$ORADATA/asm_disk0$count.img file already exist! Skipping file creation" +fi +# shellcheck disable=SC2004 +count=$(($count+1)) +done +# shellcheck disable=SC2012 +FILE_COUNT=$(ls $ORADATA/asm_disk0* | wc -l) +# shellcheck disable=SC2086 +if [ ${FILE_COUNT} -ge 5 ];then +echo "Touching ${INSTALL_COMPLETED_FILE}" +touch ${INSTALL_COMPLETED_FILE} +fi + +echo "#################################################" +echo " Starting NFS Server Setup " +echo "#################################################" + + +echo "Setting up /etc/exports" +# shellcheck disable=SC2086,SC2002 +cat $SCRIPT_DIR/$EXPORTFILE | sudo tee -a /etc/exports + +echo "Starting RPC Bind " +sudo /sbin/rpcbind -w + +echo "Exporting File System" +sudo /usr/sbin/exportfs -r + +echo "Starting RPC NFSD" +sudo /usr/sbin/rpc.nfsd + +echo "Starting RPC Mountd" +sudo /usr/sbin/rpc.mountd --manage-gids + +#echo "Starting Rpc Quotad" +sudo /usr/sbin/rpc.rquotad + +echo "Checking NFS server" +# shellcheck disable=SC2006,SC2196,SC2126 +PROC_COUNT=`ps aux | egrep 'rpcbind|mountd|nfsd' | grep -v "grep -E rpcbind|mountd|nfsd" | wc -l` +# shellcheck disable=SC2086 +if [ $PROC_COUNT -gt 1 ]; then +echo "####################################################" +echo " NFS Server is up and running " +echo " Create NFS volume for $ORADATA/$ORACLE_SID " +echo "####################################################" +echo $TRUE +else +echo "NFS Server Setup Failed" +fi + +tail -f /tmp/orod.log & +childPID=$! +wait $childPID diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/setupLinuxEnv.sh b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/setupLinuxEnv.sh new file mode 100755 index 0000000000..fc6c663dcd --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/setupLinuxEnv.sh @@ -0,0 +1,19 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 1982-2018 Oracle and/or its affiliates. All rights reserved. +# +# Since: January, 2018 +# Author: paramdeep.saini@oracle.com, sanjay.singh@oracle.com +# Description: Sets up the unix environment for DB installation. +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +# Setup filesystem and oracle user +# Adjust file permissions, go to /opt/oracle as user 'oracle' to proceed with Oracle installation +# ------------------------------------------------------------ +mkdir /oradata && \ +chmod ug+x /opt/scripts/startup/*.sh && \ +yum -y install oracle-database-preinstall-18c net-tools which zip unzip tar openssh-server openssh-client vim-minimal which vim-minimal passwd sudo nfs-utils && \ +yum clean all diff --git a/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/setupSudo.sh b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/setupSudo.sh new file mode 100755 index 0000000000..a07c060e0d --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/containerfiles/ol7/setupSudo.sh @@ -0,0 +1,15 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 1982-2018 Oracle and/or its affiliates. All rights reserved. +# +# Since: November, 2018 +# Author: paramdeep.saini@oracle.com, sanjay.singh@oracle.com +# Description: setup the sudo for Oracle user +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. 
+# + +chmod 666 /etc/sudoers +echo "oracle ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers +chmod 440 /etc/sudoers diff --git a/OracleDatabase/RAC/OracleRACStorageServer/rac-storage.te b/OracleDatabase/RAC/OracleRACStorageServer/rac-storage.te new file mode 100644 index 0000000000..b57aaaa277 --- /dev/null +++ b/OracleDatabase/RAC/OracleRACStorageServer/rac-storage.te @@ -0,0 +1,31 @@ +module rac-storage 1.0; + +require { + type container_init_t; + type hugetlbfs_t; + type nfsd_fs_t; + type rpc_pipefs_t; + type default_t; + type kernel_t; + class filesystem mount; + class filesystem unmount; + class file { read write open }; + class dir { read watch }; + class bpf { map_create map_read map_write }; + class system module_request; + class fifo_file { open read write }; +} + +#============= container_init_t ============== +allow container_init_t hugetlbfs_t:filesystem mount; +allow container_init_t nfsd_fs_t:filesystem mount; +allow container_init_t rpc_pipefs_t:filesystem mount; +allow container_init_t nfsd_fs_t:file { read write open }; +allow container_init_t nfsd_fs_t:dir { read watch }; +allow container_init_t rpc_pipefs_t:dir { read watch }; +allow container_init_t rpc_pipefs_t:fifo_file { open read write }; +allow container_init_t rpc_pipefs_t:filesystem unmount; +allow container_init_t self:bpf map_create; +allow container_init_t self:bpf { map_read map_write }; +allow container_init_t default_t:dir read; +allow container_init_t kernel_t:system module_request; \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/README.md b/OracleDatabase/RAC/OracleRealApplicationClusters/README.md index 8f3bd66ed0..86ad282c8e 100644 --- a/OracleDatabase/RAC/OracleRealApplicationClusters/README.md +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/README.md @@ -1,1198 +1,273 @@ -# Oracle Real Application Clusters in Linux Containers - -Learn about container deployment options for Oracle Real Application Clusters (Oracle RAC) Release 21c (21.3) - -## Overview of Running Oracle RAC in Containers - -Oracle Real Application Clusters (Oracle RAC) is an option to the award-winning Oracle Database Enterprise Edition. Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide highly scalable and available database solutions for all business applications. -Oracle RAC uses Oracle Clusterware as a portable cluster software that allows clustering of independent servers so that they cooperate as a single system and Oracle Automatic Storage Management (Oracle ASM) to provide simplified storage management that is consistent across all servers and storage platforms. -Oracle Clusterware and Oracle ASM are part of the Oracle Grid Infrastructure, which bundles both solutions in an easy to deploy software package. - -For more information on Oracle RAC Database 21c refer to the [Oracle Database documentation](http://docs.oracle.com/en/database/). 
- -## Using this Image - -To create an Oracle RAC environment, complete these steps in order: - -- [Oracle Real Application Clusters in Linux Containers](#oracle-real-application-clusters-in-linux-containers) - - [Overview of Running Oracle RAC in Containers](#overview-of-running-oracle-rac-in-containers) - - [Using this Image](#using-this-image) - - [Section 1 : Prerequisites for running Oracle RAC in containers](#section-1--prerequisites-for-running-oracle-rac-in-containers) - - [Section 2: Building Oracle RAC Database Container Images](#section-2-building-oracle-rac-database-container-images) - - [Oracle RAC Container Image for Docker](#oracle-rac-container-image-for-docker) - - [Oracle RAC Container Image for Podman](#oracle-rac-container-image-for-podman) - - [Section 3: Network and Password Management](#section-3--network-and-password-management) - - [Section 4: Oracle RAC on Docker](#section-4-oracle-rac-on-docker) - - [Section 4.1 : Prerequisites for Running Oracle RAC on Docker](#section-41--prerequisites-for-running-oracle-rac-on-docker) - - [Section 4.2: Setup Oracle RAC Container on Docker](#section-42-setup-oracle-rac-container-on-docker) - - [Deploying Oracle RAC on Container with Block Devices on Docker](#deploying-oracle-rac-on-container-with-block-devices-on-docker) - - [Deploying Oracle RAC on Container With Oracle RAC Storage Container](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container) - - [Assign networks to Oracle RAC containers](#assign-networks-to-oracle-rac-containers) - - [Start the first container](#start-the-first-container) - - [Connect to the Oracle RAC container](#connect-to-the-oracle-rac-container) - - [Section 4.3: Adding an Oracle RAC Node using a Docker Container](#section-43-adding-an-oracle-rac-node-using-a-docker-container) - - [Deploying Oracle RAC Additional Node on Container with Block Devices on Docker](#deploying-oracle-rac-additional-node-on-container-with-block-devices-on-docker) - - [Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Docker](#deploying-oracle-rac-additional-node-on-container-with-oracle-rac-storage-container-on-docker) - - [Assign Network to additional Oracle RAC container](#assign-network-to-additional-oracle-rac-container) - - [Start Oracle RAC racnode2 container](#start-oracle-rac-racnode2-container) - - [Connect to the Oracle RAC racnode2 container](#connect-to-the-oracle-rac-racnode2-container) - - [Section 4.4: Setup Oracle RAC Container on Docker with Docker Compose](#section-44-setup-oracle-rac-container-on-docker-with-docker-compose) - - [Section 5: Oracle RAC on Podman](#section-5-oracle-rac-on-podman) - - [Section 5.1 : Prerequisites for Running Oracle RAC on Podman](#section-51--prerequisites-for-running-oracle-rac-on-podman) - - [Section 5.2: Setup RAC Containers on Podman](#section-52-setup-rac-containers-on-podman) - - [Deploying Oracle RAC Containers with Block Devices on Podman](#deploying-oracle-rac-containers-with-block-devices-on-podman) - - [Deploying Oracle RAC on Container With Oracle RAC Storage Container on Podman](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container-on-podman) - - [Assign networks to Oracle RAC containers Created Using Podman](#assign-networks-to-oracle-rac-containers-created-using-podman) - - [Start the first container Created Using Podman](#start-the-first-container-created-using-podman) - - [Connect to the Oracle RAC container Created Using Podman](#connect-to-the-oracle-rac-container-created-using-podman) - - 
[Section 5.3: Adding a Oracle RAC Node using a container on Podman](#section-53-adding-a-oracle-rac-node-using-a-container-on-podman) - - [Deploying Oracle RAC Additional Node on Container with Block Devices on Podman](#deploying-oracle-rac-additional-node-on-container-with-block-devices-on-podman) - - [Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Podman](#deploying-oracle-rac-additional-node-on-container-with-oracle-rac-storage-container-on-podman) - - [Assign Network to additional Oracle RAC container Created Using Podman](#assign-network-to-additional-oracle-rac-container-created-using-podman) - - [Start Oracle RAC container](#start-oracle-rac-container) - - [Section 5.4: Setup Oracle RAC Container on Podman with Podman Compose](#section-54-setup-oracle-rac-container-on-podman-with-podman-compose) - - [Section 6: Connecting to an Oracle RAC Database](#section-6-connecting-to-an-oracle-rac-database) - - [Section 7: Environment Variables for the First Node](#section-7-environment-variables-for-the-first-node) - - [Section 8: Environment Variables for the Second and Subsequent Nodes](#section-8-environment-variables-for-the-second-and-subsequent-nodes) - - [Section 9: Building a Patched Oracle RAC Container Image](#section-9-building-a-patched-oracle-rac-container-image) - - [Section 10 : Sample Container Files for Older Releases](#section-10--sample-container-files-for-older-releases) - - [Docker](#docker) - - [Podman](#podman) - - [Section 11 : Support](#section-11--support) - - [Docker Support](#docker-support) - - [Podman Support](#podman-support) - - [Section 12 : License](#section-12--license) - - [Section 13 : Copyright](#section-13--copyright) - -## Section 1 : Prerequisites for running Oracle RAC in containers - -Before you proceed to section two, you must complete each of the steps listed in this section. - -To review the resource requirements for Oracle RAC, see Oracle Database 21c Release documentation [Oracle Grid Infrastructure Installation and Upgrade Guide](https://docs.oracle.com/en/database/oracle/oracle-database/21/cwlin/index.html) - -Complete each of the following prerequisites: - -1. Ensure that each container that you will deploy as part of your cluster meets the minimum hardware requirements for Oracle RAC and Oracle Grid Infrastructure software. -2. Ensure all data files, control files, redo log files, and the server parameter file (`SPFILE`) used by the Oracle RAC database reside on shared storage that is accessible by all the Oracle RAC database instances. An Oracle RAC database is a shared-everything database, so each Oracle RAC Node must have the same access. -3. Configure the following addresses manually in your DNS. - - - Public IP address for each container - - Private IP address for each container - - Virtual IP address for each container - - Three single client access name (SCAN) addresses for the cluster. -4. If you are planning to set up RAC on Docker, refer Docker Host machine details in [Section 4.1](#section-41--prerequisites-for-running-oracle-rac-on-docker) -5. If you are planning to set up RAC on Podman, refer Podman Host machine details in [Section 5.1](#section-51--prerequisites-for-running-oracle-rac-on-podman) -6. Block storage: If you are planning to use block devices for shared storage, then allocate block devices for OCR, voting and database files. -7. 
NFS storage: If you are planning to use NFS storage for OCR, Voting Disk and Database files, then configure NFS storage and export at least one NFS mount. You can also use `/docker-images/OracleDatabase/RAC/OracleRACStorageServer` container for shared file system on NFS. -8. Set`/etc/sysctl.conf`parameters: For Oracle RAC, you must set following parameters at host level in `/etc/sysctl.conf`: - - ```INI - fs.aio-max-nr = 1048576 - fs.file-max = 6815744 - net.core.rmem_max = 4194304 - net.core.rmem_default = 262144 - net.core.wmem_max = 1048576 - net.core.wmem_default = 262144 - net.core.rmem_default = 262144 - ``` - -9. List and reload parameters: After the `/etc/sysctl.conf` file is modified, run the following commands: - - ```bash - sysctl -a - sysctl -p - ``` - -10. To resolve VIPs and SCAN IPs, we are using a DNS container in this guide. Before proceeding to the next step, create a [DNS server container](../OracleDNSServer/README.md). -**Note** If you have a pre-configured DNS server in your environment, then you can replace `-e DNS_SERVERS=172.16.1.25`, `--dns=172.16.1.25`, `-e DOMAIN=example.com` and `--dns-search=example.com` parameters in **Section 2: Building Oracle RAC Database Podman Install Images** with the `DOMAIN_NAME` and `DNS_SERVER` based on your environment. -11. If you are running RAC on Podman, make sure that you have installed the `podman-docker` rpm package so that podman commands can be run using `docker` utility. -12. The Oracle RAC `Dockerfile` does not contain any Oracle software binaries. Download the following software from the [Oracle Technology Network](https://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html) and stage them under `/docker-images/OracleDatabase/RAC/OracleRealApplicationCluster/dockerfiles/` folder. - - - Oracle Database 21c Grid Infrastructure (21.3) for Linux x86-64 - - Oracle Database 21c (21.3) for Linux x86-64 - - - If you are deploying Oracle RAC on Podman then execute following, otherwise skip to next section. - - Because Oracle RAC on Podman is supported on Release 21c (21.7) or later, you must download the grid release update (RU) from [support.oracle.com](https://support.oracle.com/portal/). - - - In this Example we download the following latest one-off patches for release 21.13 from [support.oracle.com](https://support.oracle.com/portal/) - - `36031790` - - `36041222` -13. Ensure you have git configured in your host machine, [refer this page](https://docs.oracle.com/en/learn/ol-git-start/index.html) for instructions. Clone this git repo by running below command - -```bash -git clone git@github.com:oracle/docker-images.git -``` - -**Notes** - -- If you are planning to use a `DNSServer` container for SCAN IPs, VIPs resolution, then configure the DNSServer. For development and testing purposes only, use the Oracle `DNSServer` image to deploy a container providing DNS resolutions. Please check [OracleDNSServer](../OracleDNSServer/README.md) for details. -- `OracleRACStorageServer` docker image can be used only for development and testing purpose. Please check [OracleRACStorageServer](../OracleRACStorageServer/README.md) for details. -- When you want to deploy RAC on Docker or Podman on Single host, create bridge networks for containers. -- When you want to deploy RAC on Docker or Podman on Multiple host, create macvlan networks for containers. -- To run Oracle RAC using Podman on multiple hosts, refer [Podman macvlan network](https://docs.podman.io/en/latest/markdown/podman-network-create.1.html). 
- To run Oracle RAC using Oracle Container Runtime for Docker on multiple hosts, refer [Docker macvlan network](https://docs.docker.com/network/macvlan/). -- If the Docker or Podman bridge network is not available outside your host, you can use the Oracle Connection Manager [CMAN image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleConnectionManager) to access the Oracle RAC Database from outside the host. - -## Section 2: Building Oracle RAC Database Container Images - -**IMPORTANT :** This section assumes that you have gone through all the prerequisites in Section 1 and completed all the steps, based on your environment. Do not uncompress the binaries and patches. - -To assist in building the images, you can use the [`buildContainerImage.sh`](https://github.com/oracle/docker-images/blob/master/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles/buildContainerImage.sh) script. See the following for instructions and usage. - -### Oracle RAC Container Image for Docker - -If you are planing to deploy Oracle RAC container image on Podman, skip to the section [Oracle RAC Container Image for Podman](#oracle-rac-container-image-for-podman). - - ```bash - cd /docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles - ./buildContainerImage.sh -v -o '--build-arg BASE_OL_IMAGE=oraclelinux:7 --build-arg SLIMMING=true|false' - - # for example ./buildContainerImage.sh -v 21.3.0 -o '--build-arg BASE_OL_IMAGE=oraclelinux:7 --build-arg SLIMMING=false' - ``` - -### Oracle RAC Container Image for Podman - -If you are planing to deploy Oracle RAC container image on Docker, skip to the section [Oracle RAC Container Image for Docker](#oracle-rac-container-image-for-docker). - - ```bash - cd /docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles - ./buildContainerImage.sh -v -o '--build-arg BASE_OL_IMAGE=oraclelinux:8 --build-arg SLIMMING=true|false' - - # for example ./buildContainerImage.sh -v 21.3.0 -o '--build-arg BASE_OL_IMAGE=oraclelinux:8 --build-arg SLIMMING=false' - ``` - -- After the `21.3.0` Oracle RAC container image is built, start building a patched image with the download 21.7 RU and one-offs. To build the patch image, refer [Example of how to create a patched database image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch). - - -**Notes** - -- The resulting images will contain the Oracle Grid Infrastructure binaries and Oracle RAC Database binaries. -- If you are behind a proxy wall, then you must set the `https_proxy` environment variable based on your environment before building the image. - -## Section 3: Network and Password Management - -1. Before you start the installation, you must plan your private and public network. You can create a network bridge on every container host so containers running within that host can communicate with each other. - - For example, create `rac_pub1_nw` for the public network (`172.16.1.0/24`) and `rac_priv1_nw` (`192.168.17.0/24`) for a private network. You can use any network subnet for testing. - - In this document we reference the public network on `172.16.1.0/24` and the private network on `192.168.17.0/24`. 
- - ```bash - docker network create --driver=bridge --subnet=172.16.1.0/24 rac_pub1_nw - docker network create --driver=bridge --subnet=192.168.17.0/24 rac_priv1_nw - ``` - - - To run Oracle RAC using Oracle Container Runtime for Docker on multiple hosts, you will need to create a [Docker macvlan network](https://docs.docker.com/network/macvlan/) using the following commands: - - ```bash - docker network create -d macvlan --subnet=172.16.1.0/24 --gateway=172.16.1.1 -o parent=eth0 rac_pub1_nw - docker network create -d macvlan --subnet=192.168.17.0/24 --gateway=192.168.17.1 -o parent=eth1 rac_priv1_nw - ``` - -2. Specify the secret volume for resetting the grid, oracle, and database user password during node creation or node addition. The volume can be a shared volume among all the containers. For example: - - ```bash - mkdir /opt/.secrets/ - openssl rand -out /opt/.secrets/pwd.key -hex 64 - ``` - - - Edit the `/opt/.secrets/common_os_pwdfile` and seed the password for the grid, oracle and database users. For this deployment scenario, it will be a common password for the grid, oracle, and database users. Run the command: - - ```bash - openssl enc -aes-256-cbc -salt -in /opt/.secrets/common_os_pwdfile -out /opt/.secrets/common_os_pwdfile.enc -pass file:/opt/.secrets/pwd.key - rm -f /opt/.secrets/common_os_pwdfile - ``` - -3. Create `rac_host_file` on both Podman and Docker hosts: - - ```bash - mkdir /opt/containers/ - touch /opt/containers/rac_host_file - ``` - -**Notes** - -- To run Oracle RAC using Podman on multiple hosts, refer [Podman macvlan network](https://docs.podman.io/en/latest/markdown/podman-network-create.1.html). -To run Oracle RAC using Oracle Container Runtime for Docker on multiple hosts, refer [Docker macvlan network](https://docs.docker.com/network/macvlan/). -- If the Docker or Podman bridge network is not available outside your host, you can use the Oracle Connection Manager [CMAN image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleConnectionManager) to access the Oracle RAC Database from outside the host. -- If you want to specify a different password for each of the user accounts, then create three different files, encrypt them under `/opt/.secrets`, and pass the file name to the container using the environment variable. Environment variables can be ORACLE_PWD_FILE for the oracle user, GRID_PWD_FILE for the grid user, and DB_PWD_FILE for the database password. -- If you want to use a common password for the oracle, grid, and database users, then you can assign a password file name to COMMON_OS_PWD_FILE environment variable. - -## Section 4: Oracle RAC on Docker - -If you are deploying Oracle RAC On Podman, skip to the [Section 5: Oracle RAC on Podman](#section-5-oracle-rac-on-podman). - -**Note** Oracle RAC is supported for production use on Docker starting with Oracle Database 21c (21.3). On earlier releases, Oracle RAC on Docker is supported for development and and test environments. To deploy Oracle RAC on Docker, use the pre-built images available on the Oracle Container Registry. Execute the following steps in a given order to deploy RAC on Docker: - -To create an Oracle RAC environment on Docker, complete each of these steps in order. - -### Section 4.1 : Prerequisites for Running Oracle RAC on Docker - -To run Oracle RAC on Docker, you must install and configure [Oracle Container Runtime for Docker](https://docs.oracle.com/cd/E52668_01/E87205/html/index.html) on Oracle Linux 7. 
You must have sufficient space on docker file system (`/var/lib/docker`), configured with the Docker OverlayFS storage driver option `overlay2`. - -**IMPORTANT:** Completing prerequisite steps is a requirement for successful configuration. - -Complete each prerequisite step in order, customized for your environment. - -1. Verify that you have enough memory and CPU resources available for all containers. For this `README.md`, we used the following configuration: - - - 2 Docker hosts - - CPU Cores: 1 Socket with 4 cores, with 2 threads for each core Intel® Xeon® Platinum 8167M CPU at 2.00 GHz - - RAM: 60GB - - Swap memory: 32 GB - - Oracle Linux 7.9 or later with the Unbreakable Enterprise Kernel 6: 5.4.17-2102.200.13.el7uek.x86_64. - -2. Oracle RAC must run certain processes in real-time mode. To run processes inside a container in real-time mode, you must make changes to the Docker configuration files. For details, see the [`dockerd` documentation](https://docs.docker.com/engine/reference/commandline/dockerd/#examples). Edit the Docker Daemon based on Docker version: - - - Check the Docker version. In the following output, the Oracle `docker-engine` version is 19.03. - - ```bash - rpm -qa | grep docker - docker-cli-19.03.11.ol-9.el7.x86_64 - docker-engine-19.03.11.ol-9.el7.x86_64 - ``` - - - If Oracle `docker-engine` version is greater than or equal to 19.03: Edit `/usr/lib/systemd/system/docker.service` and add additional parameters in the `[Service]` section for the `dockerd` daemon: - - ```bash - ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --cpu-rt-runtime=950000 - ``` - - - If Oracle docker-engine version is less than 19.03: Edit `/etc/sysconfig/docker` and add following - - ```bash - OPTIONS='--selinux-enabled --cpu-rt-runtime=950000' - ``` - -3. After you have modified the `dockerd` daemon, reload the daemon with the changes you have made: - - ```bash - systemctl daemon-reload - systemctl stop docker - systemctl start docker - ``` - -### Section 4.2: Setup Oracle RAC Container on Docker - -This section provides step by step procedure to deploy Oracle RAC on container with block devices and storage container. To understand the details of environment variable, refer For the details of environment variables [Section 7: Environment Variables for the First Node](#section-7-environment-variables-for-the-first-node) - -Refer the [Section 3: Network and Password Management](#section-3--network-and-password-management) and setup the network on a container host based on your Oracle RAC environment. If you have already done the setup, ignore and proceed further. - -#### Deploying Oracle RAC on Container with Block Devices on Docker - -If you are using an NFS volume, skip to the section [Deploying Oracle RAC on Container With Oracle RAC Storage Container](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container). - -Make sure the ASM devices do not have any existing file system. To clear any other file system from the devices, use the following command: - - ```bash - dd if=/dev/zero of=/dev/xvde bs=8k count=10000 - ``` - -Repeat for each shared block device. In the preceding example, `/dev/xvde` is a shared Xen virtual block device. - -Now create the Oracle RAC container using the image. 
You can use the following example to create a container: - - ```bash -docker create -t -i \ - --hostname racnoded1 \ - --volume /boot:/boot:ro \ - --volume /dev/shm \ - --tmpfs /dev/shm:rw,exec,size=4G \ - --volume /opt/containers/rac_host_file:/etc/hosts \ - --volume /opt/.secrets:/run/secrets:ro \ - --dns=172.16.1.25 \ - --dns-search=example.com \ - --device=/dev/oracleoci/oraclevdd:/dev/asm_disk1 \ - --device=/dev/oracleoci/oraclevde:/dev/asm_disk2 \ - --privileged=false \ - --cap-add=SYS_NICE \ - --cap-add=SYS_RESOURCE \ - --cap-add=NET_ADMIN \ - -e DNS_SERVERS="172.16.1.25" \ - -e NODE_VIP=172.16.1.130 \ - -e VIP_HOSTNAME=racnoded1-vip \ - -e PRIV_IP=192.168.17.100 \ - -e PRIV_HOSTNAME=racnoded1-priv \ - -e PUBLIC_IP=172.16.1.100 \ - -e PUBLIC_HOSTNAME=racnoded1 \ - -e SCAN_NAME=racnodedc1-scan \ - -e OP_TYPE=INSTALL \ - -e DOMAIN=example.com \ - -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \ - -e ASM_DISCOVERY_DIR=/dev \ - -e CMAN_HOSTNAME=racnodedc1-cman \ - -e CMAN_IP=172.16.1.164 \ - -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ - -e PWD_KEY=pwd.key \ - -e RESET_FAILED_SYSTEMD="true" \ - --restart=always --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ - --cpu-rt-runtime=95000 --ulimit rtprio=99 \ - --name racnoded1 \ - oracle/database-rac:21.3.0 -``` - -**Note:** Change environment variables such as `NODE_IP`, `PRIV_IP`, `PUBLIC_IP`, `ASM_DEVICE_LIST`, `PWD_FILE`, and `PWD_KEY` based on your environment. Also, ensure you use the correct device names on each host. - -#### Deploying Oracle RAC on Container With Oracle RAC Storage Container - -If you are using block devices, skip to the section [Deploying Oracle RAC on Container with Block Devices on Docker](#deploying-oracle-rac-on-container-with-block-devices-on-docker) - -Now create the Oracle RAC container using the image. You can use the following example to create a container: - - ```bash - docker create -t -i \ - --hostname racnoded1 \ - --volume /boot:/boot:ro \ - --volume /dev/shm \ - --tmpfs /dev/shm:rw,exec,size=4G \ - --volume /opt/containers/rac_host_file:/etc/hosts \ - --volume /opt/.secrets:/run/secrets:ro \ - --dns=172.16.1.25 \ - --dns-search=example.com \ - --privileged=false \ - --volume racstorage:/oradata \ - --cap-add=SYS_NICE \ - --cap-add=SYS_RESOURCE \ - --cap-add=NET_ADMIN \ - -e DNS_SERVERS="172.16.1.25" \ - -e NODE_VIP=172.16.1.130 \ - -e VIP_HOSTNAME=racnoded1-vip \ - -e PRIV_IP=192.168.17.100 \ - -e PRIV_HOSTNAME=racnoded1-priv \ - -e PUBLIC_IP=172.16.1.100 \ - -e PUBLIC_HOSTNAME=racnoded1 \ - -e SCAN_NAME=racnodedc1-scan \ - -e OP_TYPE=INSTALL \ - -e DOMAIN=example.com \ - -e ASM_DISCOVERY_DIR=/oradata \ - -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ - -e CMAN_HOSTNAME=racnodedc1-cman \ - -e CMAN_IP=172.16.1.164 \ - -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ - -e PWD_KEY=pwd.key \ - -e RESET_FAILED_SYSTEMD="true" \ - --restart=always \ - --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ - --cpu-rt-runtime=95000 \ - --ulimit rtprio=99 \ - --name racnoded1 \ - oracle/database-rac:21.3.0 - ``` - -**Notes:** - -- Change environment variables such as `NODE_IP`, `PRIV_IP`, `PUBLIC_IP`, `ASM_DEVICE_LIST`, `PWD_FILE`, and `PWD_KEY` based on your environment. Also, ensure you use the correct device names on each host. -- You must have created the `racstorage` volume before the creation of the Oracle RAC Container. For details, please refer [OracleRACStorageServer](../OracleRACStorageServer/README.md). 
-- For details about the available environment variables, refer the [Section 7](#section-7-environment-variables-for-the-first-node). - -#### Assign networks to Oracle RAC containers - -You need to assign the Docker networks created in section 1 to containers. Execute the following commands: - - ```bash - -docker network disconnect bridge racnoded1 -docker network connect rac_pub1_nw --ip 172.16.1.100 racnoded1 -docker network connect rac_priv1_nw --ip 192.168.17.100 racnoded1 - ``` - -#### Start the first container - -To start the first container, run the following command: - - ```bash - docker start racnoded1 - ``` - -It can take at least 40 minutes or longer to create the first node of the cluster. To check the logs, use the following command from another terminal session: - - ```bash - docker logs -f racnoded1 - ``` - -You should see the database creation success message at the end: - - ```bash - #################################### - ORACLE RAC DATABASE IS READY TO USE! - #################################### - ``` - -#### Connect to the Oracle RAC container - -To connect to the container execute the following command: - -```bash -docker exec -i -t racnoded1 /bin/bash -``` - -If the install fails for any reason, log in to the container using the preceding command and check `/tmp/orod.log`. - -- You can also review the Grid Infrastructure logs located at `$GRID_BASE/diag/crs` and check for failure logs. -- If the failure occurred during the database creation then check the database logs. - -### Section 4.3: Adding an Oracle RAC Node using a Docker Container - -Before proceeding to the next step, ensure Oracle Grid Infrastructure is running and the Oracle RAC Database is open as per instructions in [Section 4.2: Setup Oracle RAC on Docker](#section-42-setup-oracle-rac-container-on-docker). Otherwise, the node addition process will fail. - -Refer the [Section 3: Network and Password Management](#section-3--network-and-password-management) and setup the network on a container host based on your Oracle RAC environment. If you have already done the setup, ignore and proceed further. - -To understand the details of environment variable, refer For the details of environment variables [Section 8](#section-8-environment-variables-for-the-second-and-subsequent-nodes) - -Reset the password on the existing Oracle RAC node for SSH setup between an existing node in the cluster and the new node. Password must be the same on all the nodes for the `grid` and `oracle` users. Execute the following command on an existing node of the cluster. - -```bash -docker exec -i -t -u root racnode1 /bin/bash -sh /opt/scripts/startup/resetOSPassword.sh --help -sh /opt/scripts/startup/resetOSPassword.sh --op_type reset_grid_oracle --pwd_file common_os_pwdfile.enc --secret_volume /run/secrets --pwd_key_file pwd.key -``` - -**Note:** If you do not have a common secret volume among Oracle RAC containers, populate the password file with the same password that you have used on the new node, encrypt the file, and execute `resetOSPassword.sh` on the existing node of the cluster. - -#### Deploying Oracle RAC Additional Node on Container with Block Devices on Docker - -If you are using an NFS volume, skip to the section [Deploying Oracle RAC on Container with Oracle RAC Storage Container on Docker](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container). 
- -To create additional nodes, use the following command: - -```bash -docker create -t -i \ - --hostname racnoded2 \ - --volume /boot:/boot:ro \ - --volume /dev/shm \ - --tmpfs /dev/shm:rw,exec,size=4G \ - --volume /opt/containers/rac_host_file:/etc/hosts \ - --volume /opt/.secrets:/run/secrets:ro \ - --dns=172.16.1.25 \ - --dns-search=example.com \ - --device=/dev/oracleoci/oraclevdd:/dev/asm_disk1 \ - --device=/dev/oracleoci/oraclevde:/dev/asm_disk2 \ - --privileged=false \ - --cap-add=SYS_NICE \ - --cap-add=SYS_RESOURCE \ - --cap-add=NET_ADMIN \ - -e DNS_SERVERS="172.16.1.25" \ - -e EXISTING_CLS_NODES=racnoded1 \ - -e NODE_VIP=172.16.1.131 \ - -e VIP_HOSTNAME=racnoded2-vip \ - -e PRIV_IP=192.168.17.101 \ - -e PRIV_HOSTNAME=racnoded2-priv \ - -e PUBLIC_IP=172.16.1.101 \ - -e PUBLIC_HOSTNAME=racnoded2 \ - -e DOMAIN=example.com \ - -e SCAN_NAME=racnodedc1-scan \ - -e ASM_DISCOVERY_DIR=/dev \ - -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \ - -e ORACLE_SID=ORCLCDB \ - -e OP_TYPE=ADDNODE \ - -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ - -e PWD_KEY=pwd.key \ - -e RESET_FAILED_SYSTEMD="true" \ - --restart=always --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ - --cpu-rt-runtime=95000 --ulimit rtprio=99 \ - --name racnoded2 \ - oracle/database-rac:21.3.0 -``` - -For details of all environment variables and parameters, refer to [Section 7](#section-7-environment-variables-for-the-first-node). - -#### Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Docker - -If you are using physical block devices for shared storage, skip to [Deploying Oracle RAC on Container with Block Devices on Docker](#deploying-oracle-rac-on-container-with-block-devices-on-docker). - -Use the existing `racstorage:/oradata` volume when creating the additional container using the image. - -For example: - -```bash -docker create -t -i \ - --hostname racnoded2 \ - --volume /boot:/boot:ro \ - --volume /dev/shm \ - --tmpfs /dev/shm:rw,exec,size=4G \ - --volume /opt/containers/rac_host_file:/etc/hosts \ - --volume /opt/.secrets:/run/secrets:ro \ - --dns=172.16.1.25 \ - --dns-search=example.com \ - --volume racstorage:/oradata \ - --privileged=false \ - --cap-add=SYS_NICE \ - --cap-add=SYS_RESOURCE \ - --cap-add=NET_ADMIN \ - -e DNS_SERVERS="172.16.1.25" \ - -e EXISTING_CLS_NODES=racnoded1 \ - -e NODE_VIP=172.16.1.131 \ - -e VIP_HOSTNAME=racnoded2-vip \ - -e PRIV_IP=192.168.17.101 \ - -e PRIV_HOSTNAME=racnoded2-priv \ - -e PUBLIC_IP=172.16.1.101 \ - -e PUBLIC_HOSTNAME=racnoded2 \ - -e DOMAIN=example.com \ - -e SCAN_NAME=racnodedc1-scan \ - -e ASM_DISCOVERY_DIR=/oradata \ - -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ - -e ORACLE_SID=ORCLCDB \ - -e OP_TYPE=ADDNODE \ - -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ - -e PWD_KEY=pwd.key \ - -e RESET_FAILED_SYSTEMD="true" \ - --restart=always --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ - --cpu-rt-runtime=95000 --ulimit rtprio=99 \ - --name racnoded2 \ - oracle/database-rac:21.3.0 -``` - -**Notes:** - -- You must have created **racstorage** volume before the creation of the Oracle RAC container. -- You can change env variables such as IPs and ORACLE_PWD based on your env. For details about the env variables, refer the section 8. 
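The preceding notes assume that the `racstorage` volume already exists. As a minimal sketch only, assuming the NFS storage container exports `/oradata` and is reachable at `192.168.17.80` (an illustrative address; use the values from your [OracleRACStorageServer](../OracleRACStorageServer/README.md) setup and adjust the mount options for your environment), the volume could be created as follows:

```bash
# Illustrative only: create a Docker volume backed by the NFS export of the storage container.
# The address 192.168.17.80 and the mount options are assumptions; replace them with your environment's values.
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.17.80,rw,bg,hard,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 \
  --opt device=192.168.17.80:/oradata \
  racstorage
```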
- -#### Assign Network to additional Oracle RAC container - -Connect the private and public networks you created earlier to the container: - -```bash -docker network disconnect bridge racnoded2 -docker network connect rac_pub1_nw --ip 172.16.1.101 racnoded2 -docker network connect rac_priv1_nw --ip 192.168.17.101 racnoded2 -``` - -#### Start Oracle RAC racnode2 container - -Start the container - -```bash -docker start racnoded2 -``` - -To check the database logs, tail the logs using the following command: - -```bash -docker logs -f racnoded2 -``` - -You should see the database creation success message at the end. - -```bash -################################################################# -Oracle Database ORCLCDB is up and running on racnoded2 -################################################################# -Running User Script for oracle user -Setting Remote Listener -#################################### -ORACLE RAC DATABASE IS READY TO USE! -#################################### -``` - -#### Connect to the Oracle RAC racnode2 container - -To connect to the container execute the following command: - -```bash -docker exec -i -t racnoded2 /bin/bash -``` - -If the node addition fails, log in to the container using the preceding command and review `/tmp/orod.log`. You can also review the Grid Infrastructure logs i.e. `$GRID_BASE/diag/crs` and check for failure logs. If the node creation has failed during the database creation process, then check DB logs. - -## Section 4.4: Setup Oracle RAC Container on Docker with Docker Compose - -Oracle RAC database can also be deployed with Docker Compose. An example of how to install Oracle RAC Database on Single Host via Bridge Network is explained in this [README.md](./samples/racdockercompose/README.md) - -Same section covers various below scenarios as well with docker compose- -1. Deploying Oracle RAC on Container with Block Devices on Docker with Docker Compose -2. Deploying Oracle RAC on Container With Oracle RAC Storage Container with Docker Compose -3. Deploying Oracle RAC Additional Node on Container with Block Devices on Docker with Docker Compose -4. Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Docker with Docker Compose - -***Note:*** Docker and Docker Compose is not supported with OL8. You need OL7.9 with UEK R5 or R6. - -## Section 5: Oracle RAC on Podman - -If you are deploying Oracle RAC On Docker, skip to [Section 4: Oracle RAC on Docker](#section-4-oracle-rac-on-docker) - -**Note** Oracle RAC is supported for production use on Podman starting with Oracle Database 19c (19.16), and Oracle Database 21c (21.7). You can deploy Oracle RAC on Podman using the pre-built images available on Oracle Container Registry. Execute the following steps in a given order to deploy RAC on Podman: - -To create an Oracle RAC environment on Podman, complete each of these steps in order. - -### Section 5.1 : Prerequisites for Running Oracle RAC on Podman - -You must install and configure [Podman release 4.0.2](https://docs.oracle.com/en/operating-systems/oracle-linux/podman/podman-InstallingPodmanandRelatedUtilities.html#podman-install) or later on Oracle Linux 8.5 or later to run Oracle RAC on Podman. - -**Notes**: - -- You need to remove `"--cpu-rt-runtime=95000 \"` from container creation commands mentioned below in this document in following sections to create the containers if you are running Oracle 8 with UEKR7: - - [Section 5.2: Setup RAC Containers on Podman](#section-52-setup-rac-containers-on-podman). 
- - [Section 5.3: Adding a Oracle RAC Node using a container on Podman](#section-53-adding-a-oracle-rac-node-using-a-container-on-podman). - -- You can check the details on [Oracle Linux and Unbreakable Enterprise Kernel (UEK) Releases](https://blogs.oracle.com/scoter/post/oracle-linux-and-unbreakable-enterprise-kernel-uek-releases) - -- You do not need to execute step 2 in this section to create and enable `podman-rac-cgroup.service` when we are running Oracle Linux 8 with Unbreakable Enterprise Kernel R7. - -**IMPORTANT:** Completing prerequisite steps is a requirement for successful configuration. - -Complete each prerequisite step in order, customized for your environment. - -1. Verify that you have enough memory and CPU resources available for all containers. In this `README.md` for Podman, we used the following configuration: - - - 2 Podman hosts - - CPU Cores: 1 Socket with 4 cores, with 2 threads for each core Intel® Xeon® Platinum 8167M CPU at 2.00 GHz - - RAM: 60 GB - - Swap memory: 32 GB - - Oracle Linux 8.5 (Linux-x86-64) with the Unbreakable Enterprise Kernel 6: `5.4.17-2136.300.7.el8uek.x86_64`. - -2. Oracle RAC must run certain processes in real-time mode. To run processes inside a container in real-time mode, populate the real-time CPU budgeting on machine restarts. Create a oneshot systemd service as follows: - - - Create a file `/etc/systemd/system/podman-rac-cgroup.service` - - Append the following lines: - - ```INI - [Unit] - Description=Populate Cgroups with real time chunk on machine restart - After=multi-user.target - [Service] - Type=oneshot - ExecStart=/bin/bash -c “/bin/echo 950000 > /sys/fs/cgroup/cpu,cpuacct/machine.slice/cpu.rt_runtime_us && /bin/systemctl restart podman-restart.service” - StandardOutput=journal - CPUAccounting=yes - Slice=machine.slice - [Install] - WantedBy=multi-user.target - ``` - - - After creating the file `/etc/systemd/system/podman-rac-cgroup.service` with the lines appended in the preceding step, reload and restart the Podman daemon using the following steps: - - ```bash - systemctl daemon-reload - systemctl enable podman-rac-cgroup.service - systemctl enable podman-restart.service - systemctl start podman-rac-cgroup.service - ``` - -3. If SELINUX is enabled on the Podman host, then you must create an SELinux policy for Oracle RAC on Podman. - -You can check SELinux Status in your host machine by running the `sestatus` command. - -For details about how to create SELinux policy for Oracle RAC on Podman, see "How to Configure Podman for SELinux Mode" in the publication [Oracle Real Application Clusters Installation Guide for Podman Oracle Linux x86-64](https://docs.oracle.com/en/database/oracle/oracle-database/21/racpd/target-configuration-oracle-rac-podman.html#GUID-59138DF8-3781-4033-A38F-E0466884D008). - -### Section 5.2: Setup RAC Containers on Podman - -This section provides step by step procedure to deploy Oracle RAC on container with block devices and storage container. To understand the details of environment variable, refer For the details of environment variables [Section 7: Environment Variables for the First Node](#section-7-environment-variables-for-the-first-node) - -Refer the [Section 3: Network and Password Management](#section-3--network-and-password-management) and setup the network on a container host based on your Oracle RAC environment. If you have already done the setup, ignore and proceed further. 
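If the `rac_pub1_nw` and `rac_priv1_nw` bridge networks from [Section 3](#section-3--network-and-password-management) have not yet been created on the Podman host, a minimal sketch using the native `podman` CLI is shown below; the subnets are the sample values used throughout this guide and should be adjusted for your environment:

```bash
# Illustrative only: create the public and private bridge networks on the Podman host,
# using the same sample subnets referenced elsewhere in this guide.
podman network create --driver=bridge --subnet=172.16.1.0/24 rac_pub1_nw
podman network create --driver=bridge --subnet=192.168.17.0/24 rac_priv1_nw
```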
- -#### Deploying Oracle RAC Containers with Block Devices on Podman - -If you are using an NFS volume, skip to the section [Deploying Oracle RAC on Container With Oracle RAC Storage Container on Podman](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container-on-podman). - -Make sure the ASM devices do not have any existing file system. To clear any other file system from the devices, use the following command: - - ```bash - dd if=/dev/zero of=/dev/xvde bs=8k count=10000 - ``` - -Repeat for each shared block device. In the preceding example, `/dev/xvde` is a shared Xen virtual block device. - -Now create the Oracle RAC container using the image. For the details of environment variables, refer to section 7. You can use the following example to create a container: - - ```bash - podman create -t -i \ - --hostname racnodep1 \ - --volume /boot:/boot:ro \ - --tmpfs /dev/shm:rw,exec,size=4G \ - --volume /opt/containers/rac_host_file:/etc/hosts \ - --volume /opt/.secrets:/run/secrets:ro \ - --dns=172.16.1.25 \ - --dns-search=example.com \ - --device=/dev/oracleoci/oraclevdd:/dev/asm_disk1 \ - --device=/dev/oracleoci/oraclevde:/dev/asm_disk2 \ - --privileged=false \ - --cap-add=SYS_NICE \ - --cap-add=SYS_RESOURCE \ - --cap-add=NET_ADMIN \ - --cap-add=AUDIT_WRITE \ - --cap-add=AUDIT_CONTROL \ - --memory 16G \ - --memory-swap 32G \ - --sysctl kernel.shmall=2097152 \ - --sysctl "kernel.sem=250 32000 100 128" \ - --sysctl kernel.shmmax=8589934592 \ - --sysctl kernel.shmmni=4096 \ - -e DNS_SERVERS="172.16.1.25" \ - -e NODE_VIP=172.16.1.200 \ - -e VIP_HOSTNAME=racnodep1-vip \ - -e PRIV_IP=192.168.17.170 \ - -e PRIV_HOSTNAME=racnodep1-priv \ - -e PUBLIC_IP=172.16.1.170 \ - -e PUBLIC_HOSTNAME=racnodep1 \ - -e SCAN_NAME=racnodepc1-scan \ - -e OP_TYPE=INSTALL \ - -e DOMAIN=example.com \ - -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \ - -e ASM_DISCOVERY_DIR=/dev \ - -e CMAN_HOSTNAME=racnodepc1-cman \ - -e CMAN_IP=172.16.1.166 \ - -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ - -e PWD_KEY=pwd.key \ - -e ORACLE_SID=ORCLCDB \ - -e RESET_FAILED_SYSTEMD="true" \ - -e DEFAULT_GATEWAY="172.16.1.1" \ - -e TMPDIR=/var/tmp \ - --restart=always \ - --systemd=always \ - --cpu-rt-runtime=95000 \ - --ulimit rtprio=99 \ - --name racnodep1 \ - localhost/oracle/database-rac:21.3.0-21.13.0 - ``` - -**Note:** Change environment variables such as `NODE_IP`, `PRIV_IP`, `PUBLIC_IP`, `ASM_DEVICE_LIST`, `PWD_FILE`, and `PWD_KEY` based on your environment. Also, ensure you use the correct device names on each host. - -#### Deploying Oracle RAC on Container With Oracle RAC Storage Container on Podman - -If you are using block devices, skip to the section [Deploying RAC Containers with Block Devices on Podman](#deploying-oracle-rac-containers-with-block-devices-on-podman). -Now create the Oracle RAC container using the image. 
You can use the following example to create a container: - - ```bash - podman create -t -i \ - --hostname racnodep1 \ - --volume /boot:/boot:ro \ - --tmpfs /dev/shm:rw,exec,size=4G \ - --volume /opt/containers/rac_host_file:/etc/hosts \ - --volume /opt/.secrets:/run/secrets:ro \ - --dns=172.16.1.25 \ - --dns-search=example.com \ - --privileged=false \ - --volume racstorage:/oradata \ - --cap-add=SYS_NICE \ - --cap-add=SYS_RESOURCE \ - --cap-add=NET_ADMIN \ - --cap-add=AUDIT_WRITE \ - --cap-add=AUDIT_CONTROL \ - --memory 16G \ - --memory-swap 32G \ - --sysctl kernel.shmall=2097152 \ - --sysctl "kernel.sem=250 32000 100 128" \ - --sysctl kernel.shmmax=8589934592 \ - --sysctl kernel.shmmni=4096 \ - -e DNS_SERVERS="172.16.1.25" \ - -e NODE_VIP=172.16.1.200 \ - -e VIP_HOSTNAME=racnodep1-vip \ - -e PRIV_IP=192.168.17.170 \ - -e PRIV_HOSTNAME=racnodep1-priv \ - -e PUBLIC_IP=172.16.1.170 \ - -e PUBLIC_HOSTNAME=racnodep1 \ - -e SCAN_NAME=racnodepc1-scan \ - -e OP_TYPE=INSTALL \ - -e DOMAIN=example.com \ - -e ASM_DISCOVERY_DIR=/oradata \ - -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ - -e CMAN_HOSTNAME=racnodepc1-cman \ - -e CMAN_IP=172.16.1.166 \ - -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ - -e PWD_KEY=pwd.key \ - -e ORACLE_SID=ORCLCDB \ - -e RESET_FAILED_SYSTEMD="true" \ - -e DEFAULT_GATEWAY="172.16.1.1" \ - -e TMPDIR=/var/tmp \ - --restart=always \ - --systemd=always \ - --cpu-rt-runtime=95000 \ - --ulimit rtprio=99 \ - --name racnodep1 \ - localhost/oracle/database-rac:21.3.0-21.13.0 - ``` - -**Notes:** - -- Change environment variables such as `NODE_IP`, `PRIV_IP`, `PUBLIC_IP`, `ASM_DEVICE_LIST`, `PWD_FILE`, and `PWD_KEY` based on your environment. Also, ensure you use the correct device names on each host. -- You must have created the `racstorage` volume before the creation of the Oracle RAC Container. For details about the available environment variables, refer the [Section 7](#section-7-environment-variables-for-the-first-node). - -#### Assign networks to Oracle RAC containers Created Using Podman - -You need to assign the Podman networks created in section 1 to containers. Execute the following commands: - - ```bash - podman network disconnect podman racnodep1 - podman network connect rac_pub1_nw --ip 172.16.1.170 racnodep1 - podman network connect rac_priv1_nw --ip 192.168.17.170 racnodep1 - ``` - -#### Start the first container Created Using Podman - -To start the first container, run the following command: - - ```bash - podman start racnodep1 - ``` - -It can take at least 40 minutes or longer to create the first node of the cluster. To check the database logs, tail the logs using the following command: - -```bash -podman exec racnodep1 /bin/bash -c "tail -f /tmp/orod.log" -``` - -You should see the database creation success message at the end. - -```bash -01-31-2024 12:31:20 UTC : : ################################################################# -01-31-2024 12:31:20 UTC : : Oracle Database ORCLCDB is up and running on racnodep1 -01-31-2024 12:31:20 UTC : : ################################################################# -01-31-2024 12:31:20 UTC : : Running User Script -01-31-2024 12:31:20 UTC : : Setting Remote Listener -01-31-2024 12:31:27 UTC : : 172.16.1.166 -01-31-2024 12:31:27 UTC : : Executing script to set the remote listener -01-31-2024 12:31:28 UTC : : #################################### -01-31-2024 12:31:28 UTC : : ORACLE RAC DATABASE IS READY TO USE! 
-01-31-2024 12:31:28 UTC : : #################################### -``` - -#### Connect to the Oracle RAC container Created Using Podman - -To connect to the container execute the following command: - -```bash -podman exec -i -t racnodep1 /bin/bash -``` - -If the install fails for any reason, log in to the container using the preceding command and check `/tmp/orod.log`. You can also review the Grid Infrastructure logs located at `$GRID_BASE/diag/crs` and check for failure logs. If the failure occurred during the database creation then check the database logs. - -### Section 5.3: Adding a Oracle RAC Node using a container on Podman - -Before proceeding to the next step, ensure Oracle Grid Infrastructure is running and the Oracle RAC Database is open as per instructions in [Section 5.2: Setup RAC Containers on Podman](#section-52-setup-rac-containers-on-podman). Otherwise, the node addition process will fail. - -Refer the [Section 3: Network and Password Management](#section-3--network-and-password-management) and setup the network on a container host based on your Oracle RAC environment. If you have already done the setup, ignore and proceed further. - -To understand the details of environment variable, refer For the details of environment variables [Section 8](#section-8-environment-variables-for-the-second-and-subsequent-nodes). - -Reset the password on the existing Oracle RAC node for SSH setup between an existing node in the cluster and the new node. Password must be the same on all the nodes for the `grid` and `oracle` users. Execute the following command on an existing node of the cluster. - -```bash -podman exec -i -t -u root racnode1 /bin/bash -sh /opt/scripts/startup/resetOSPassword.sh --help -sh /opt/scripts/startup/resetOSPassword.sh --op_type reset_grid_oracle --pwd_file common_os_pwdfile.enc --secret_volume /run/secrets --pwd_key_file pwd.key -``` - -**Note:** If you do not have a common secret volume among Oracle RAC containers, populate the password file with the same password that you have used on the new node, encrypt the file, and execute `resetOSPassword.sh` on the existing node of the cluster. - -#### Deploying Oracle RAC Additional Node on Container with Block Devices on Podman - -If you are using an NFS volume, skip to the section [Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Podman](#deploying-oracle-rac-additional-node-on-container-with-oracle-rac-storage-container-on-podman). 
- -To create additional nodes, use the following command: - -```bash -podman create -t -i \ - --hostname racnodep2 \ - --tmpfs /dev/shm:rw,exec,size=4G \ - --volume /boot:/boot:ro \ - --dns-search=example.com \ - --volume /opt/containers/rac_host_file:/etc/hosts \ - --volume /opt/.secrets:/run/secrets:ro \ - --dns=172.16.1.25 \ - --dns-search=example.com \ - --device=/dev/oracleoci/oraclevdd:/dev/asm_disk1 \ - --device=/dev/oracleoci/oraclevde:/dev/asm_disk2 \ - --privileged=false \ - --cap-add=SYS_NICE \ - --cap-add=SYS_RESOURCE \ - --cap-add=NET_ADMIN \ - --cap-add=AUDIT_CONTROL \ - --cap-add=AUDIT_WRITE \ - --memory 16G \ - --memory-swap 32G \ - --sysctl kernel.shmall=2097152 \ - --sysctl "kernel.sem=250 32000 100 128" \ - --sysctl kernel.shmmax=8589934592 \ - --sysctl kernel.shmmni=4096 \ - -e DNS_SERVERS="172.16.1.25" \ - -e EXISTING_CLS_NODES=racnodep1 \ - -e NODE_VIP=172.16.1.201 \ - -e VIP_HOSTNAME=racnodep2-vip \ - -e PRIV_IP=192.168.17.171 \ - -e PRIV_HOSTNAME=racnodep2-priv \ - -e PUBLIC_IP=172.16.1.171 \ - -e PUBLIC_HOSTNAME=racnodep2 \ - -e DOMAIN=example.com \ - -e SCAN_NAME=racnodepc1-scan \ - -e ASM_DISCOVERY_DIR=/dev \ - -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \ - -e ORACLE_SID=ORCLCDB \ - -e OP_TYPE=ADDNODE \ - -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ - -e PWD_KEY=pwd.key \ - -e RESET_FAILED_SYSTEMD="true" \ - -e DEFAULT_GATEWAY="172.16.1.1" \ - -e TMPDIR=/var/tmp \ - --systemd=always \ - --cpu-rt-runtime=95000 \ - --ulimit rtprio=99 \ - --restart=always \ - --name racnodep2 \ - localhost/oracle/database-rac:21.3.0-21.13.0 -``` - -For details of all environment variables and parameters, refer to [Section 8](#section-8-environment-variables-for-the-second-and-subsequent-nodes). - -#### Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Podman - -If you are using physical block devices for shared storage, skip to [Deploying Oracle RAC Additional Node on Container with Block Devices on Podman](#deploying-oracle-rac-additional-node-on-container-with-block-devices-on-podman). - -Use the existing `racstorage:/oradata` volume when creating the additional container using the image. 
- -For example: - -```bash -podman create -t -i \ - --hostname racnodep2 \ - --tmpfs /dev/shm:rw,exec,size=4G \ - --volume /boot:/boot:ro \ - --dns-search=example.com \ - --volume /opt/containers/rac_host_file:/etc/hosts \ - --volume /opt/.secrets:/run/secrets:ro \ - --dns=172.16.1.25 \ - --dns-search=example.com \ - --privileged=false \ - --volume racstorage:/oradata \ - --cap-add=SYS_NICE \ - --cap-add=SYS_RESOURCE \ - --cap-add=NET_ADMIN \ - --cap-add=AUDIT_WRITE \ - --cap-add=AUDIT_CONTROL \ - --memory 16G \ - --memory-swap 32G \ - --sysctl kernel.shmall=2097152 \ - --sysctl "kernel.sem=250 32000 100 128" \ - --sysctl kernel.shmmax=8589934592 \ - --sysctl kernel.shmmni=4096 \ - -e DNS_SERVERS="172.16.1.25" \ - -e EXISTING_CLS_NODES=racnodep1 \ - -e NODE_VIP=172.16.1.201 \ - -e VIP_HOSTNAME=racnodep2-vip \ - -e PRIV_IP=192.168.17.171 \ - -e PRIV_HOSTNAME=racnodep2-priv \ - -e PUBLIC_IP=172.16.1.171 \ - -e PUBLIC_HOSTNAME=racnodep2 \ - -e DOMAIN=example.com \ - -e SCAN_NAME=racnodepc1-scan \ - -e ASM_DISCOVERY_DIR=/oradata \ - -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ - -e ORACLE_SID=ORCLCDB \ - -e OP_TYPE=ADDNODE \ - -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ - -e PWD_KEY=pwd.key \ - -e RESET_FAILED_SYSTEMD="true" \ - -e DEFAULT_GATEWAY="172.16.1.1" \ - -e TMPDIR=/var/tmp \ - --systemd=always \ - --cpu-rt-runtime=95000 \ - --ulimit rtprio=99 \ - --restart=always \ - --name racnodep2 \ - localhost/oracle/database-rac:21.3.0-21.13.0 -``` - -**Notes:** - -- You must have created **racstorage** volume before the creation of the Oracle RAC container. -- You can change env variables such as IPs and ORACLE_PWD based on your env. For details about the env variables, refer the [Section 8](#section-8-environment-variables-for-the-second-and-subsequent-nodes). - -#### Assign Network to additional Oracle RAC container Created Using Podman - -Connect the private and public networks you created earlier to the container: - -```bash -podman network disconnect podman racnodep2 -podman network connect rac_pub1_nw --ip 172.16.1.171 racnodep2 -podman network connect rac_priv1_nw --ip 192.168.17.171 racnodep2 -``` - -#### Start Oracle RAC container - -Start the container - -```bash -podman start racnodep2 -``` - -To check the database logs, tail the logs using the following command: - -```bash -podman exec racnodep2 /bin/bash -c "tail -f /tmp/orod.log" -``` - -You should see the database creation success message at the end. - -```bash -02-01-2024 09:36:14 UTC : : ################################################################# -02-01-2024 09:36:14 UTC : : Oracle Database ORCLCDB is up and running on racnodep2 -02-01-2024 09:36:14 UTC : : ################################################################# -02-01-2024 09:36:14 UTC : : Running User Script -02-01-2024 09:36:14 UTC : : Setting Remote Listener -02-01-2024 09:36:14 UTC : : #################################### -02-01-2024 09:36:14 UTC : : ORACLE RAC DATABASE IS READY TO USE! -02-01-2024 09:36:14 UTC : : #################################### -``` -## Section 5.4: Setup Oracle RAC Container on Podman with Podman Compose - -Oracle RAC database can also be deployed with podman Compose. An example of how to install Oracle RAC Database on Single Host via Bridge Network is explained in this [README.md](./samples/racpodmancompose/README.md) - -Same section covers various below scenarios as well with podman compose- -1. 
Deploying Oracle RAC on Container with Block Devices on Podman with Podman Compose -2. Deploying Oracle RAC on Container with NFS Devices on Podman with Podman Compose -3. Deploying Oracle RAC Additional Node on Container with Block Devices on Podman with Podman Compose -4. Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Podman with Podman Compose - -***Note:*** Podman and Podman Compose is not supported with OL7. You need minimum OL8.8 with UEK R7. - -## Section 6: Connecting to an Oracle RAC Database - -**IMPORTANT:** This section assumes that you have successfully created an Oracle RAC cluster using the preceding sections. - -If you are using a connection manager and exposed the port 1521 on the host, then connect from an external client using the following connection string, where `` is the host container, and `` is the database system identifier: - -```bash -system/@//:1521/ -``` - -If you are using the bridge created using MACVLAN driver, and you have configured DNS appropriately, then you can connect using the public Single Client Access (SCAN) listener directly from any external client. To connect with the SCAN, use the following connection string, where `` is the SCAN name for the database, and `` is the database system identifier: - -```bash -system/@//:1521/ -``` - -## Section 7: Environment Variables for the First Node - -This section provides information about the environment variables that can be used when creating the first node of a cluster. - -```bash -OP_TYPE=###Specify the Operation TYPE. It can accept 2 values INSTALL OR ADDNODE#### -NODE_VIP=####Specify the Node VIP### -VIP_HOSTNAME=###Specify the VIP hostname### -PRIV_IP=###Specify the Private IP### -PRIV_HOSTNAME=###Specify the Private Hostname### -PUBLIC_IP=###Specify the public IP### -PUBLIC_HOSTNAME=###Specify the public hostname### -SCAN_NAME=###Specify the scan name### -ASM_DEVICE_LIST=###Specify the ASM Disk lists. -SCAN_IP=###Specify this if you do not have DNS server### -DOMAIN=###Default value set to example.com### -PASSWORD=###OS password will be generated by openssl### -CLUSTER_NAME=###Default value set to racnode-c#### -ORACLE_SID=###Default value set to ORCLCDB### -ORACLE_PDB=###Default value set to ORCLPDB### -ORACLE_PWD=###Default value set to generated by openssl random password### -ORACLE_CHARACTERSET=###Default value set AL32UTF8### -DEFAULT_GATEWAY=###Default gateway. You need this env variable if containers will be running on multiple hosts.#### -CMAN_HOSTNAME=###Connection Manager Host Name### -CMAN_IP=###Connection manager Host IP### -ASM_DISCOVERY_DIR=####ASM disk location insdie the container. By default it is /dev###### -COMMON_OS_PWD_FILE=###Pass the file name to setup grid and oracle user password. If you specify ORACLE_PWD_FILE, GRID_PWD_FILE, and DB_PWD_FILE then you do not need to specify this env variable### -ORACLE_PWD_FILE=###Pass the file name to set the password for oracle user.### -GRID_PWD_FILE=###Pass the file name to set the password for grid user.### -DB_PWD_FILE=###Pass the file name to set the password for DB user i.e. sys.### -REMOVE_OS_PWD_FILES=###Set this env variable to true to remove pwd key file and password file after resetting password.### -CONTAINER_DB_FLAG=###Default value is set to true to create container database. 
Set this to false if you do not want to create container database.### -``` - -## Section 8: Environment Variables for the Second and Subsequent Nodes - -This section provides the details about the environment variables that can be used for all additional nodes added to an existing cluster. - -```bash -OP_TYPE=###Specify the Operation TYPE. It can accept 2 values INSTALL OR ADDNODE### -EXISTING_CLS_NODES=###Specify the Existing Node of the cluster which you want to join. If you have 2 nodes in the cluster and you are trying to add the third node then specify existing 2 nodes of the clusters and separate them by comma.#### -NODE_VIP=###Specify the Node VIP### -VIP_HOSTNAME=###Specify the VIP hostname### -PRIV_IP=###Specify the Private IP### -PRIV_HOSTNAME=###Specify the Private Hostname### -PUBLIC_IP=###Specify the public IP### -PUBLIC_HOSTNAME=###Specify the public hostname### -SCAN_NAME=###Specify the scan name### -SCAN_IP=###Specify this if you do not have DNS server### -ASM_DEVICE_LIST=###Specify the ASM Disk lists. -DOMAIN=###Default value set to example.com### -ORACLE_SID=###Default value set to ORCLCDB### -DEFAULT_GATEWAY=###Default gateway. You need this env variable if containers will be running on multiple hosts.#### -CMAN_HOSTNAME=###Connection Manager Host Name### -CMAN_IP=###Connection manager Host IP### -ASM_DISCOVERY_DIR=####ASM disk location inside the container. By default it is /dev###### -COMMON_OS_PWD_FILE=###You need to pass the file name to setup grid and oracle user password. If you specify ORACLE_PWD_FILE, GRID_PWD_FILE, and DB_PWD_FILE then you do not need to specify this env variable### -ORACLE_PWD_FILE=###You need to pass the file name to set the password for oracle user.### -GRID_PWD_FILE=###You need to pass the file name to set the password for grid user.### -DB_PWD_FILE=###You need to pass the file name to set the password for DB user i.e. sys.### -REMOVE_OS_PWD_FILES=###You need to set this to true to remove pwd key file and password file after resetting password.### -``` - -## Section 9: Building a Patched Oracle RAC Container Image - -If you want to build a patched image based on a base 21.3.0 container image, then refer to the GitHub page [Example of how to create a patched database image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch). - -## Section 10 : Sample Container Files for Older Releases - -### Docker - -This project offers sample container files for Oracle Grid Infrastructure and Oracle Real Application Clusters for dev and test: - -- Oracle Database 19c Oracle Grid Infrastructure (19.3) for Linux x86-64 -- Oracle Database 19c (19.3) for Linux x86-64 - -- Oracle Database 18c Oracle Grid Infrastructure (18.3) for Linux x86-64 - -- Oracle Database 18c (18.3) for Linux x86-64 - -- Oracle Database 12c Release 2 Oracle Grid Infrastructure (12.2.0.1.0) for Linux x86-64 - -- Oracle Database 12c Release 2 (12.2.0.1.0) Enterprise Edition for Linux x86-64 - - **Notes:** - -- Note that the Oracle RAC on Docker Container releases are supported only for test and development environments, but not for production environments. - -- If you are planning to build and deploy Oracle RAC 18.3.0, you need to download Oracle 18.3.0 Grid Infrastructure and Oracle Database 18.3.0 Database. 
- - - You also need to download Patch# p28322130_183000OCWRU_Linux-x86-64.zip from [Oracle Technology Network](https://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/docker-4418413.html). - - - Stage it under dockerfiles/18.3.0 folder. - -- If you are planning to build and deploy Oracle RAC 12.2.0.1, you need to download Oracle 12.2.0.1 Grid Infrastructure and Oracle Database 12.2.0.1 Database. - - - You also need to download Patch# p27383741_122010_Linux-x86-64.zip from [Oracle Technology Network](https://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/docker-4418413.html). - - - Stage it under dockerfiles/12.2.0.1 folder. - -### Podman - -This project offers sample container files for Oracle Grid Infrastructure and Oracle Real Application Clusters for dev and test: - -- Oracle Database 19c Oracle Grid Infrastructure (19.3) for Linux x86-64 -- Oracle Database 19c (19.3) for Linux x86-64 - -**Notes:** -- Because Oracle RAC on Podman is supported on 19c from 19.16 or later, you must download the grid release update (RU) from [support.oracle.com](https://support.oracle.com/portal/). - -- For RAC on Podman for v19.22, download following one-offs from [support.oracle.com](https://support.oracle.com/portal/) - - `35943157` - - `35940989` - -- Before starting the next step, you must edit `docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles/19.3.0/Dockerfile`, change `oraclelinux:7-slim` to `oraclelinux:8`, and save the file. - -- You must add `CV_ASSUME_DISTID=OEL8` inside the `Dockerfile` as an env variable. - -- Once the `19.3.0` Oracle RAC on Podman image is built, start building patched image with the download 19.16 RU and one-offs. To build the patch the image, refer [Example of how to create a patched database image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch). - -## Section 11 : Support - -### Docker Support - -At the time of this release, Oracle RAC on Docker is supported only on Oracle Linux 7. To see current details, refer the [Real Application Clusters Installation Guide for Docker Containers Oracle Linux x86-64](https://docs.oracle.com/en/database/oracle/oracle-database/21/racdk/oracle-rac-on-docker.html). - -### Podman Support - -At the time of this release, Oracle RAC on Podman is supported for Oracle Linux 8.5 later. To see current Linux support certifications, refer [Oracle RAC on Podman Documentation](https://docs.oracle.com/en/database/oracle/oracle-database/21/install-and-upgrade.html) - -## Section 12 : License - -To download and run Oracle Grid and Database, regardless of whether inside or outside a container, you must download the binaries from the Oracle website and accept the license indicated on that page. - -All scripts and files hosted in this repository which are required to build the container images are, unless otherwise noted, released under UPL 1.0 license. - -## Section 13 : Copyright - -Copyright (c) 2014-2024 Oracle and/or its affiliates. +# Oracle Real Application Clusters in Linux Containers + +Learn about container deployment options for Oracle Real Application Clusters (Oracle RAC) Release 21c + +## Overview of Running Oracle RAC in Containers + +Oracle Real Application Clusters (Oracle RAC) is an option for the award-winning Oracle Database Enterprise Edition. 
Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide highly scalable and available database solutions for all business applications. + +Oracle RAC uses Oracle Clusterware as a portable cluster software that allows clustering of independent servers so that they cooperate as a single system and Oracle Automatic Storage Management (Oracle ASM) to provide simplified storage management that is consistent across all servers and storage platforms. +Oracle Clusterware and Oracle ASM are part of the Oracle Grid Infrastructure, which bundles both solutions in an easy-to-deploy software package. For more information on Oracle RAC Database 21c refer to the [Oracle Database documentation](http://docs.oracle.com/en/database/). + +This guide helps you install Oracle RAC on Containers on Host Machines as explained in detail below. With the current release, you prepare the host machine, build or use pre-built Oracle RAC Container Images v21c, and setup Oracle RAC on Single or Multiple Host machines with Oracle ASM. +In this installation guide, we use [Podman](https://docs.podman.io/en/v3.0/) to create Oracle RAC Containers and manage them. + +## Using this Documentation +To create an Oracle RAC environment, follow these steps: + +- [Oracle Real Application Clusters in Linux Containers](#oracle-real-application-clusters-in-linux-containers) + - [Overview of Running Oracle RAC in Containers](#overview-of-running-oracle-rac-in-containers) + - [Using this Documentation](#using-this-documentation) + - [Preparation Steps for running Oracle RAC in containers](#preparation-steps-for-running-oracle-rac-database-in-containers) + - [Getting Oracle RAC Database Container Images](#getting-oracle-rac-database-container-images) + - [Building Oracle RAC Database Container Image](#building-oracle-rac-database-container-image) + - [Building Oracle RAC Database Container Slim Image](#building-oracle-rac-database-container-slim-image) + - [Network Management](#network-management) + - [Password Management](#password-management) + - [Oracle RAC on Containers Deployment Scenarios](#oracle-rac-on-containers-deployment-scenarios) + - [Oracle RAC Containers on Podman](#oracle-rac-containers-on-podman) + - [Setup Using Oracle RAC Image](#1-setup-using-oracle-rac-container-image) + - [Setup Using Oracle RAC Slim Image](#2-setup-using-oracle-rac-container-slim-image) + - [Connecting to an Oracle RAC Database](#connecting-to-an-oracle-rac-database) + - [Deletion of Node from Oracle RAC Cluster](#deletion-of-node-from-oracle-rac-cluster) + - [Building a Patched Oracle RAC Container Image](#building-a-patched-oracle-rac-container-image) + - [Cleanup](#cleanup) + - [Sample Container Files for Older Releases](#sample-container-files-for-older-releases) + - [Support](#support) + - [License](#license) + - [Copyright](#copyright) + +## Preparation Steps for running Oracle RAC Database in containers + +Before you proceed to the next section, you must complete each of the steps listed in this section and complete the following prerequisites. 
+ +* Refer to the following sections in the publication [Oracle Real Application Clusters Installation Guide](https://docs.oracle.com/cd/F39414_01/racpd/oracle-real-application-clusters-installation-guide-podman-oracle-linux-x86-64.pdf) for Podman Oracle Linux x86-64 to complete the preparation steps for Oracle RAC on Container deployment: + * Overview of Oracle RAC on Podman + * Host Preparation for Oracle RAC on Podman + * Podman Host Server Configuration + * **Note**: As we are following command line installation for Oracle RAC on containers, we don't need X Window System to be configured + * Podman Containers and Oracle RAC Nodes + * Provisioning the Podman Host Server + * Podman Host Preparation + * Preparing for Podman Container Installation + * Installing Podman Engine + * Allocate Linux Resources for Oracle Grid Infrastructure Deployment + * How to Configure Podman for SELinux Mode +* Install `git` from dnf or yum repository and clone the git repo. We clone this repo on a path called `` and refer here. +* Create a NFS Volume if you are planning to use NFS Storage for ASM Devices. See [Configuring NFS for Storage for Oracle RAC on Podman](https://docs.oracle.com/cd/F39414_01/racpd/oracle-real-application-clusters-installation-guide-podman-oracle-linux-x86-64.pdf) for more details. +**Note:** You can skip this step if you are planning to use block devices for storage. +* If SELinux is enabled on the Podman host, then ensure to create an SELinux policy for Oracle RAC on Podman. +For details about this procedure, see `How to Configure Podman for SELinux Mode` in the publication [Oracle Real Application Clusters Installation Guide for Podman Oracle Linux x86-64](https://docs.oracle.com/en/database/oracle/oracle-database/21/racpd/target-configuration-oracle-rac-podman.html#GUID-59138DF8-3781-4033-A38F-E0466884D008). +Also, When you are performing the installation using any files from podman host machine where SELinux is enabled, you need to make sure they are labeled correctly with `container_file_t` context. You can use `ls -lZ ` to see the security context set on files. + +* To resolve VIPs and SCAN IPs in this guide, we use a preconfigured DNS server in our environment. +Replace environment variables `-e DNS_SERVERS=10.0.20.25`,`--dns=10.0.20.25`,`-e DOMAIN=example.info` and `--dns-search=example.info` parameters in the examples in this guide based on your environment. + +* The Oracle RAC `Containerfile` does not contain any Oracle software binaries. Download the following software from the [Oracle Technology Network](https://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html), if you are planning to build Oracle RAC Container Images from next section. +However, if you are using pre-built RAC images from the Oracle Container Registry, then you can skip this step. + - Oracle Grid Infrastructure 21c (21) for Linux x86-64 + - Oracle Database 21c (21) for Linux x86-64 + +**Notes** +* If the Podman bridge network is not available outside your host, you can use the Oracle Connection Manager [CMAN Container](../OracleConnectionManager/README.md) to access the Oracle RAC Database from outside the host. + +## Getting Oracle RAC Database Container Images + +Oracle RAC is supported for production use on Podman starting with Oracle Database 19c (19.16), and Oracle Database 21c (21.7). You can also deploy Oracle RAC on Podman using the pre-built images available on the Oracle Container Registry. 
+Refer to [this documentation](https://docs.oracle.com/en/operating-systems/oracle-linux/docker/docker-UsingDockerRegistries.html#docker-registry) for details on using the Oracle Container Registry.
+
+Example of pulling an Oracle RAC Image from the Oracle Container Registry:
+```bash
+podman pull container-registry.oracle.com/database/rac:21.16
+podman tag container-registry.oracle.com/database/rac:21.16 localhost/oracle/database-rac:21.3.0
+```
+
+If you are using pre-built Oracle RAC images from [the Oracle Container Registry](https://container-registry.oracle.com), then you can skip the section [Building Oracle RAC Database Container Image](#building-oracle-rac-database-container-image).
+
+Notes:
+* The Oracle Container Registry does not contain an Oracle RAC Slim Image. If you are planning to use the Oracle RAC Slim Image, then refer to [Building Oracle RAC Database Container Slim Image](#building-oracle-rac-database-container-slim-image).
+
+* If you want to build the latest Oracle RAC Image from this GitHub repository instead of using a pre-built image, then follow the instructions below to build the `Oracle RAC Container Image` and the `Oracle RAC Container Slim Image`.
+
+* The following sections assume that you have completed all of the prerequisites in [Preparation Steps for running Oracle RAC Database in containers](#preparation-steps-for-running-oracle-rac-database-in-containers), based on your environment. Ensure that you do not uncompress the binaries and patches manually before building the Oracle RAC Image.
+
+* To assist in building the images, you can use the [`buildContainerImage.sh`](./containerfiles/buildContainerImage.sh) script. See the following for instructions and usage.
+
+* Ensure that you have enough space in `/var/lib/containers` while building the Oracle RAC image. Also, if required, use `export TMPDIR=<directory>` so that Podman uses another folder as its temporary cache location instead of the default `/tmp`.
+
+### Building Oracle RAC Database Container Image
+In this document, an `Oracle RAC Database Container Image` refers to a container image in which the Oracle Grid Infrastructure and Oracle Database software binaries are installed during Oracle RAC Podman image creation.
+The resulting images contain the Oracle Grid Infrastructure and Oracle RAC Database software binaries. Before you begin, you must download the grid and database binaries and stage them under `/docker-images/OracleDatabase/RAC/OracleRealApplicationCluster/containerfiles/`.
+
+```bash
+  ./buildContainerImage.sh -v <version>
+```
+Example: Building the Oracle RAC image for version 21.3.0:
+```bash
+  ./buildContainerImage.sh -v 21.3.0
+```
+
+### Building Oracle RAC Database Container Slim Image
+In this document, an `Oracle RAC Container Slim Image` refers to a container image in which the Oracle Grid Infrastructure and Oracle Database software are not installed during Oracle RAC image creation. To build an Oracle RAC slim image, run the following command:
+```bash
+  ./buildContainerImage.sh -v <version> -i -o '--build-arg SLIMMING=true'
+```
+Example: Building the Oracle RAC slim image for version 21.3.0:
+```bash
+  ./buildContainerImage.sh -v 21.3.0 -i -o '--build-arg SLIMMING=true'
+```
+To build an Oracle RAC slim image, you must use `--build-arg SLIMMING=true`.
+To change the base image for building Oracle RAC images, you must use `--build-arg BASE_OL_IMAGE=oraclelinux:8`.
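+
+For reference, a hypothetical invocation that combines these build options is shown below; it assumes the 21.3.0 binaries are already staged, and the tag `localhost/oracle/database-rac:21.3.0-slim-ol8` is only an example name:
+
+```bash
+# Build a slim image on an explicit Oracle Linux 8 base and give it a custom tag.
+# -t sets the image name:tag; -o passes the options through to `podman build`.
+./buildContainerImage.sh -v 21.3.0 \
+  -t localhost/oracle/database-rac:21.3.0-slim-ol8 \
+  -o '--build-arg SLIMMING=true --build-arg BASE_OL_IMAGE=oraclelinux:8'
+
+# Verify that the image is available in the local store.
+podman images | grep database-rac
+```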
+
+**Notes**
+- Usage of `./buildContainerImage.sh`:
+  ```text
+  -v: version to build
+  -i: ignore the MD5 checksums
+  -t: user-defined image name and tag (e.g., image_name:tag). Default is set to `oracle/database-rac:<version>` for the RAC image and `oracle/database-rac:<version>-slim` for the RAC slim image.
+  -o: passes on container build options (e.g., --build-arg SLIMMING=true for the slim image, --build-arg BASE_OL_IMAGE=oraclelinux:8 to change the base image). The default is "--build-arg SLIMMING=false"
+  ```
+- After the `21.3.0` Oracle RAC container image is built, to apply the 21c RU and build the 21c patched image, refer to [Example of how to create a patched database image](./samples/applypatch/README.md).
+- If you are behind a proxy wall, then you must set the `https_proxy` or `http_proxy` environment variable based on your environment before building the image.
+- In the slim image case, the resulting images will not contain the Oracle Grid Infrastructure binaries or the Oracle RAC Database binaries.
+
+## Network Management
+
+Before you start the installation, you must plan your private and public Podman networks. Refer to the section `Podman Host Preparation` in the publication [Oracle Real Application Clusters Installation Guide](https://docs.oracle.com/cd/F39414_01/racpd/oracle-real-application-clusters-installation-guide-podman-oracle-linux-x86-64.pdf) for Podman Oracle Linux x86-64.
+You can create a [podman network](https://docs.podman.io/en/latest/markdown/podman-network-create.1.html) on every container host so that the containers running within that host can communicate with each other.
+For example, create `rac_pub1_nw` for the public network (`10.0.20.0/24`), and `rac_priv1_nw` (`192.168.17.0/24`) and `rac_priv2_nw` (`192.168.18.0/24`) for the private networks. You can use any network subnet based on your environment.
+
+### Standard Frames MTU Network Configuration
+```bash
+ ip link show|grep ens
+ 3: ens5: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
+ 4: ens6: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
+ 5: ens7: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
+```
+a. Create Podman bridge networks using the following commands:
+  ```bash
+  podman network create --driver=bridge --subnet=10.0.20.0/24 rac_pub1_nw
+  podman network create --driver=bridge --subnet=192.168.17.0/24 rac_priv1_nw --disable-dns --internal
+  podman network create --driver=bridge --subnet=192.168.18.0/24 rac_priv2_nw --disable-dns --internal
+  ```
+
+- To run Oracle RAC using Oracle Container Runtime for Podman on multiple hosts, you must create one of the following network types instead:
+
+b. Create Podman macvlan networks using the following commands:
+
+  ```bash
+  podman network create -d macvlan --subnet=10.0.20.0/24 -o parent=ens5 rac_pub1_nw
+  podman network create -d macvlan --subnet=192.168.17.0/24 -o parent=ens6 rac_priv1_nw --disable-dns --internal
+  podman network create -d macvlan --subnet=192.168.18.0/24 -o parent=ens7 rac_priv2_nw --disable-dns --internal
+  ```
+
+c. 
Create Podman ipvlan networks using the following commands: + ```bash + podman network create -d ipvlan --subnet=10.0.20.0/24 -o parent=ens5 rac_pub1_nw + podman network create -d ipvlan --subnet=192.168.17.0/24 -o parent=ens6 rac_priv1_nw --disable-dns --internal + podman network create -d ipvlan --subnet=192.168.18.0/24 -o parent=ens7 rac_priv2_nw --disable-dns --internal + ``` +### Jumbo Frames MTU Network Configuration +```bash +ip link show | egrep "ens" +3: ens5: mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000 +4: ens6: mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000 +5: ens7: mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000 +``` +If the MTU on each interface is set to 9000, then you can then run the following commands on each Podman host to extend the maximum payload length for each network to use the entire MTU: +```bash +#Podman bridge networks +podman network create --driver=bridge --subnet=10.0.20.0/24 --opt mtu=9000 rac_pub1_nw +podman network create --driver=bridge --subnet=192.168.17.0/24 --opt mtu=9000 rac_priv1_nw --disable-dns --internal +podman network create --driver=bridge --subnet=192.168.18.0/24 --opt mtu=9000 rac_priv2_nw --disable-dns --internal + +# Podman macvlan networks +podman network create -d macvlan --subnet=10.0.20.0/24 --opt mtu=9000 -o parent=ens5 rac_pub1_nw +podman network create -d macvlan --subnet=192.168.17.0/24 --opt mtu=9000 -o parent=ens6 rac_priv1_nw --disable-dns --internal +podman network create -d macvlan --subnet=192.168.18.0/24 --opt mtu=9000 -o parent=ens7 rac_priv2_nw --disable-dns --internal + +#Podman ipvlan networks +podman network create -d ipvlan --subnet=10.0.20.0/24 --opt mtu=9000 -o parent=ens5 rac_pub1_nw +podman network create -d ipvlan --subnet=192.168.17.0/24 --opt mtu=9000 -o parent=ens6 rac_priv1_nw --disable-dns --internal +podman network create -d ipvlan --subnet=192.168.18.0/24 --opt mtu=9000 -o parent=ens7 rac_priv2_nw --disable-dns --internal +``` +## Password Management +- Specify the secret volume for resetting the grid, oracle, and database user password during node creation or node addition. The volume can be a shared volume among all the containers. For example: + + ```bash + mkdir /opt/.secrets/ + ``` +- Generate a password file - Edit the `/opt/.secrets/pwdfile.txt` and seed the password for the grid, oracle, and database users. For this deployment scenario, it will be a common password for the grid, oracle, and database users. Run the command: + + ```bash + cd /opt/.secrets + openssl genrsa -out key.pem + openssl rsa -in key.pem -out key.pub -pubout + openssl pkeyutl -in pwdfile.txt -out pwdfile.enc -pubin -inkey key.pub -encrypt + rm -rf /opt/.secrets/pwdfile.txt + ``` +- Oracle recommends using Podman secrets inside the containers. To create Podman secrets, run the following command: + + ```bash + podman secret create pwdsecret /opt/.secrets/pwdfile.enc + podman secret create keysecret /opt/.secrets/key.pem + + podman secret ls + ID NAME DRIVER CREATED UPDATED + 7eb7f573905283c808bdabaff keysecret file 13 hours ago 13 hours ago + e3ac963fd736d8bc01dcd44dd pwdsecret file 13 hours ago 13 hours ago + + podman secret inspect + ``` +Notes: +- In this example we use `pwdsecret` as the common password for SSH setup between containers for the oracle, grid, and Oracle RAC database users. Also, `keysecret` is used to extract secrets inside the Oracle RAC Containers. 
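+
+As a minimal illustration of how these secrets become visible inside a container (assuming the `pwdsecret` and `keysecret` secrets created above), you can attach them to a throwaway container and list them; by default, Podman mounts attached secrets under `/run/secrets/`:
+
+```bash
+# Attach both secrets to a short-lived container and list the mounted files.
+# oraclelinux:8 is used here only as a convenient test image.
+podman run --rm \
+  --secret pwdsecret \
+  --secret keysecret \
+  oraclelinux:8 ls -l /run/secrets/
+
+# Expected output: one file per secret, for example /run/secrets/pwdsecret and /run/secrets/keysecret.
+```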
+
+## Oracle RAC on Containers Deployment Scenarios
+Oracle RAC can be deployed in various scenarios, such as using NFS or block devices, using the Oracle RAC Container Image or the Slim Image, and using user-defined response files. All of these scenarios are covered in detail in the instructions below.
+
+### Oracle RAC Containers on Podman
+#### [1. Setup Using Oracle RAC Container Image](docs/rac-container/racimage/README.md)
+#### [2. Setup Using Oracle RAC Container Slim Image](docs/rac-container/racslimimage/README.md)
+
+## Connecting to an Oracle RAC Database
+
+**IMPORTANT:** This section assumes that you have successfully created an Oracle RAC cluster using the preceding sections.
+Refer to the [README](./docs/CONNECTING.md) for instructions on how to connect to the Oracle RAC Database.
+
+## Deletion of Node from Oracle RAC Cluster
+Refer to the [README](./docs/DELETION.md) for instructions on how to delete a node from an existing Oracle RAC Container Cluster.
+
+## Building a Patched Oracle RAC Container Image
+
+If you want to build a patched image based on a base 21.3.0 container image, then refer to the GitHub page [Example of how to create a patched database image](./samples/applypatch/README.md).
+
+## Cleanup
+Refer to the [README](./docs/CLEANUP.md) for instructions on how to clean up an Oracle RAC Database Container Environment.
+
+## Sample Container Files for Older Releases
+
+This project offers example container (Podman) files for Oracle Grid Infrastructure and Oracle Real Application Clusters for dev and test:
+
+* Oracle Database 21c Oracle Grid Infrastructure (21.3) for Linux x86-64
+* Oracle Database 21c (21.3) for Linux x86-64
+* Oracle Database 19c Oracle Grid Infrastructure (19.3) for Linux x86-64
+* Oracle Database 19c (19.3) for Linux x86-64
+
+To install older releases of Oracle RAC on Docker, refer to the [README.md](./docs/README_1.md#section-4-oracle-rac-on-docker).
+
+**Note** For Oracle RAC on Podman, do not refer to the older release details. Refer to the instructions in this project for the latest on Oracle RAC on Podman.
+
+## Support
+
+At the time of this release, Oracle RAC on Podman is supported for Oracle Linux 8.10 or later. To see the current Linux support certifications, refer to the [Oracle RAC on Podman Documentation](https://docs.oracle.com/en/database/oracle/oracle-database/21/install-and-upgrade.html).
+
+## License
+
+To download and run Oracle Grid Infrastructure and Oracle Database, regardless of whether inside or outside a container, you must download the binaries from the Oracle website and accept the license indicated on that page.
+
+All scripts and files hosted in this repository that are required to build the container images are, unless otherwise noted, released under a UPL 1.0 license.
+
+## Copyright
+
+Copyright (c) 2014-2024 Oracle and/or its affiliates.
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/README1.md b/OracleDatabase/RAC/OracleRealApplicationClusters/README1.md
new file mode 100644
index 0000000000..ad36e25c55
--- /dev/null
+++ b/OracleDatabase/RAC/OracleRealApplicationClusters/README1.md
@@ -0,0 +1,1194 @@
+# Oracle Real Application Clusters in Linux Containers
+
+Learn about container deployment options for Oracle Real Application Clusters (Oracle RAC) Release 21c (21.3)
+
+## Overview of Running Oracle RAC in Containers
+
+Oracle Real Application Clusters (Oracle RAC) is an option to the award-winning Oracle Database Enterprise Edition. 
Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide highly scalable and available database solutions for all business applications. +Oracle RAC uses Oracle Clusterware as a portable cluster software that allows clustering of independent servers so that they cooperate as a single system and Oracle Automatic Storage Management (Oracle ASM) to provide simplified storage management that is consistent across all servers and storage platforms. +Oracle Clusterware and Oracle ASM are part of the Oracle Grid Infrastructure, which bundles both solutions in an easy to deploy software package. + +For more information on Oracle RAC Database 21c refer to the [Oracle Database documentation](http://docs.oracle.com/en/database/). + +## Using this Image + +To create an Oracle RAC environment, complete these steps in order: + +- [Oracle Real Application Clusters in Linux Containers](#oracle-real-application-clusters-in-linux-containers) + - [Overview of Running Oracle RAC in Containers](#overview-of-running-oracle-rac-in-containers) + - [Using this Image](#using-this-image) + - [Section 1 : Prerequisites for running Oracle RAC in containers](#section-1--prerequisites-for-running-oracle-rac-in-containers) + - [Section 2: Building Oracle RAC Database Container Images](#section-2-building-oracle-rac-database-container-images) + - [Oracle RAC Container Image for Docker](#oracle-rac-container-image-for-docker) + - [Oracle RAC Container Image for Podman](#oracle-rac-container-image-for-podman) + - [Section 3: Network and Password Management](#section-3--network-and-password-management) + - [Section 4: Oracle RAC on Docker](#section-4-oracle-rac-on-docker) + - [Section 4.1 : Prerequisites for Running Oracle RAC on Docker](#section-41--prerequisites-for-running-oracle-rac-on-docker) + - [Section 4.2: Setup Oracle RAC Container on Docker](#section-42-setup-oracle-rac-container-on-docker) + - [Deploying Oracle RAC on Container with Block Devices on Docker](#deploying-oracle-rac-on-container-with-block-devices-on-docker) + - [Deploying Oracle RAC on Container With Oracle RAC Storage Container](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container) + - [Assign networks to Oracle RAC containers](#assign-networks-to-oracle-rac-containers) + - [Start the first container](#start-the-first-container) + - [Connect to the Oracle RAC container](#connect-to-the-oracle-rac-container) + - [Section 4.3: Adding an Oracle RAC Node using a Docker Container](#section-43-adding-an-oracle-rac-node-using-a-docker-container) + - [Deploying Oracle RAC Additional Node on Container with Block Devices on Docker](#deploying-oracle-rac-additional-node-on-container-with-block-devices-on-docker) + - [Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Docker](#deploying-oracle-rac-additional-node-on-container-with-oracle-rac-storage-container-on-docker) + - [Assign Network to additional Oracle RAC container](#assign-network-to-additional-oracle-rac-container) + - [Start Oracle RAC racnode2 container](#start-oracle-rac-racnode2-container) + - [Connect to the Oracle RAC racnode2 container](#connect-to-the-oracle-rac-racnode2-container) + - [Section 4.4: Setup Oracle RAC Container on Docker with Docker Compose](#section-44-setup-oracle-rac-container-on-docker-with-docker-compose) + - [Section 5: Oracle RAC on Podman](#section-5-oracle-rac-on-podman) + - [Section 5.1 : Prerequisites for 
Running Oracle RAC on Podman](#section-51--prerequisites-for-running-oracle-rac-on-podman) + - [Section 5.2: Setup RAC Containers on Podman](#section-52-setup-rac-containers-on-podman) + - [Deploying Oracle RAC Containers with Block Devices on Podman](#deploying-oracle-rac-containers-with-block-devices-on-podman) + - [Deploying Oracle RAC on Container With Oracle RAC Storage Container on Podman](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container-on-podman) + - [Assign networks to Oracle RAC containers Created Using Podman](#assign-networks-to-oracle-rac-containers-created-using-podman) + - [Start the first container Created Using Podman](#start-the-first-container-created-using-podman) + - [Connect to the Oracle RAC container Created Using Podman](#connect-to-the-oracle-rac-container-created-using-podman) + - [Section 5.3: Adding a Oracle RAC Node using a container on Podman](#section-53-adding-a-oracle-rac-node-using-a-container-on-podman) + - [Deploying Oracle RAC Additional Node on Container with Block Devices on Podman](#deploying-oracle-rac-additional-node-on-container-with-block-devices-on-podman) + - [Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Podman](#deploying-oracle-rac-additional-node-on-container-with-oracle-rac-storage-container-on-podman) + - [Assign Network to additional Oracle RAC container Created Using Podman](#assign-network-to-additional-oracle-rac-container-created-using-podman) + - [Start Oracle RAC container](#start-oracle-rac-container) + - [Section 5.4: Setup Oracle RAC Container on Podman with Podman Compose](#section-54-setup-oracle-rac-container-on-podman-with-podman-compose) + - [Section 6: Connecting to an Oracle RAC Database](#section-6-connecting-to-an-oracle-rac-database) + - [Section 7: Environment Variables for the First Node](#section-7-environment-variables-for-the-first-node) + - [Section 8: Environment Variables for the Second and Subsequent Nodes](#section-8-environment-variables-for-the-second-and-subsequent-nodes) + - [Section 9: Building a Patched Oracle RAC Container Image](#section-9-building-a-patched-oracle-rac-container-image) + - [Section 10 : Sample Container Files for Older Releases](#section-10--sample-container-files-for-older-releases) + - [Docker](#docker) + - [Podman](#podman) + - [Section 11 : Support](#section-11--support) + - [Docker Support](#docker-support) + - [Podman Support](#podman-support) + - [Section 12 : License](#section-12--license) + - [Section 13 : Copyright](#section-13--copyright) + +## Section 1 : Prerequisites for running Oracle RAC in containers + +Before you proceed to section two, you must complete each of the steps listed in this section. + +To review the resource requirements for Oracle RAC, see Oracle Database 21c Release documentation [Oracle Grid Infrastructure Installation and Upgrade Guide](https://docs.oracle.com/en/database/oracle/oracle-database/21/cwlin/index.html) + +Complete each of the following prerequisites: + +1. Ensure that each container that you will deploy as part of your cluster meets the minimum hardware requirements for Oracle RAC and Oracle Grid Infrastructure software. +2. Ensure all data files, control files, redo log files, and the server parameter file (`SPFILE`) used by the Oracle RAC database reside on shared storage that is accessible by all the Oracle RAC database instances. An Oracle RAC database is a shared-everything database, so each Oracle RAC Node must have the same access. +3. 
Configure the following addresses manually in your DNS. + + - Public IP address for each container + - Private IP address for each container + - Virtual IP address for each container + - Three single client access name (SCAN) addresses for the cluster. +4. If you are planning to set up RAC on Docker, refer Docker Host machine details in [Section 4.1](#section-41--prerequisites-for-running-oracle-rac-on-docker) +5. If you are planning to set up RAC on Podman, refer Podman Host machine details in [Section 5.1](#section-51--prerequisites-for-running-oracle-rac-on-podman) +6. Block storage: If you are planning to use block devices for shared storage, then allocate block devices for OCR, voting and database files. +7. NFS storage: If you are planning to use NFS storage for OCR, Voting Disk and Database files, then configure NFS storage and export at least one NFS mount. You can also use `/docker-images/OracleDatabase/RAC/OracleRACStorageServer` container for shared file system on NFS. +8. Set`/etc/sysctl.conf`parameters: For Oracle RAC, you must set following parameters at host level in `/etc/sysctl.conf`: + ```INI + fs.aio-max-nr = 1048576 + fs.file-max = 6815744 + net.core.rmem_max = 4194304 + net.core.rmem_default = 262144 + net.core.wmem_max = 1048576 + net.core.wmem_default = 262144 + net.core.rmem_default = 262144 + ``` +9. List and reload parameters: After the `/etc/sysctl.conf` file is modified, run the following commands: + ```bash + sysctl -a + sysctl -p + ``` +10. To resolve VIPs and SCAN IPs, we are using a DNS container in this guide. Before proceeding to the next step, create a [DNS server container](../OracleDNSServer/README.md). +**Note** If you have a pre-configured DNS server in your environment, then you can replace `-e DNS_SERVERS=172.16.1.25`, `--dns=172.16.1.25`, `-e DOMAIN=example.com` and `--dns-search=example.com` parameters in **Section 2: Building Oracle RAC Database Podman Install Images** with the `DOMAIN_NAME` and `DNS_SERVER` based on your environment. +11. If you are running RAC on Podman, make sure that you have installed the `podman-docker` rpm package so that podman commands can be run using `docker` utility. +12. The Oracle RAC `Dockerfile` does not contain any Oracle software binaries. Download the following software from the [Oracle Technology Network](https://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html) and stage them under `/docker-images/OracleDatabase/RAC/OracleRealApplicationCluster/dockerfiles/` folder. + + - Oracle Database 21c Grid Infrastructure (21.3) for Linux x86-64 + - Oracle Database 21c (21.3) for Linux x86-64 + + - If you are deploying Oracle RAC on Podman then execute following, otherwise skip to next section. + - Because Oracle RAC on Podman is supported on Release 21c (21.7) or later, you must download the grid release update (RU) from [support.oracle.com](https://support.oracle.com/portal/). + + - In this Example we download the following latest one-off patches for release 21.13 from [support.oracle.com](https://support.oracle.com/portal/) + - `36031790` + - `36041222` +13. Ensure you have git configured in your host machine, [refer this page](https://docs.oracle.com/en/learn/ol-git-start/index.html) for instructions. Clone this git repo by running below command - +```bash +git clone git@github.com:oracle/docker-images.git +``` + +**Notes** + +- If you are planning to use a `DNSServer` container for SCAN IPs, VIPs resolution, then configure the DNSServer. 
For development and testing purposes only, use the Oracle `DNSServer` image to deploy a container providing DNS resolutions. Please check [OracleDNSServer](../OracleDNSServer/README.md) for details. +- `OracleRACStorageServer` docker image can be used only for development and testing purpose. Please check [OracleRACStorageServer](../OracleRACStorageServer/README.md) for details. +- When you want to deploy RAC on Docker or Podman on Single host, create bridge networks for containers. +- When you want to deploy RAC on Docker or Podman on Multiple host, create macvlan networks for containers. +- To run Oracle RAC using Podman on multiple hosts, refer [Podman macvlan network](https://docs.podman.io/en/latest/markdown/podman-network-create.1.html). + To run Oracle RAC using Oracle Container Runtime for Docker on multiple hosts, refer [Docker macvlan network](https://docs.docker.com/network/macvlan/). +- If the Docker or Podman bridge network is not available outside your host, you can use the Oracle Connection Manager [CMAN image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleConnectionManager) to access the Oracle RAC Database from outside the host. + +## Section 2: Building Oracle RAC Database Container Images + +**IMPORTANT :** This section assumes that you have gone through all the prerequisites in Section 1 and completed all the steps, based on your environment. Do not uncompress the binaries and patches. + +To assist in building the images, you can use the [`buildContainerImage.sh`](https://github.com/oracle/docker-images/blob/master/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles/buildContainerImage.sh) script. See the following for instructions and usage. + +### Oracle RAC Container Image for Docker + +If you are planing to deploy Oracle RAC container image on Podman, skip to the section [Oracle RAC Container Image for Podman](#oracle-rac-container-image-for-podman). + +```bash +cd /docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles +./buildContainerImage.sh -v -o '--build-arg BASE_OL_IMAGE=oraclelinux:7 --build-arg SLIMMING=true|false' + +# for example ./buildContainerImage.sh -v 21.3.0 -o '--build-arg BASE_OL_IMAGE=oraclelinux:7 --build-arg SLIMMING=false' +``` + +### Oracle RAC Container Image for Podman + +If you are planing to deploy Oracle RAC container image on Docker, skip to the section [Oracle RAC Container Image for Docker](#oracle-rac-container-image-for-docker). + +```bash +cd /docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles +./buildContainerImage.sh -v -o '--build-arg BASE_OL_IMAGE=oraclelinux:8 --build-arg SLIMMING=true|false' + +# for example ./buildContainerImage.sh -v 21.3.0 -o '--build-arg BASE_OL_IMAGE=oraclelinux:8 --build-arg SLIMMING=false' +``` + +- After the `21.3.0` Oracle RAC container image is built, start building a patched image with the download 21.7 RU and one-offs. To build the patch image, refer [Example of how to create a patched database image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch). + + +**Notes** + +- The resulting images will contain the Oracle Grid Infrastructure binaries and Oracle RAC Database binaries. +- If you are behind a proxy wall, then you must set the `https_proxy` environment variable based on your environment before building the image. + +## Section 3: Network and Password Management + +1. 
Before you start the installation, you must plan your private and public network. You can create a network bridge on every container host so containers running within that host can communicate with each other. + - For example, create `rac_pub1_nw` for the public network (`172.16.1.0/24`) and `rac_priv1_nw` (`192.168.17.0/24`) for a private network. You can use any network subnet for testing. + - In this document we reference the public network on `172.16.1.0/24` and the private network on `192.168.17.0/24`. + + ```bash + docker network create --driver=bridge --subnet=172.16.1.0/24 rac_pub1_nw + docker network create --driver=bridge --subnet=192.168.17.0/24 rac_priv1_nw + ``` + + - To run Oracle RAC using Oracle Container Runtime for Docker on multiple hosts, you will need to create a [Docker macvlan network](https://docs.docker.com/network/macvlan/) using the following commands: + + ```bash + docker network create -d macvlan --subnet=172.16.1.0/24 --gateway=172.16.1.1 -o parent=eth0 rac_pub1_nw + docker network create -d macvlan --subnet=192.168.17.0/24 --gateway=192.168.17.1 -o parent=eth1 rac_priv1_nw + ``` + +2. Specify the secret volume for resetting the grid, oracle, and database user password during node creation or node addition. The volume can be a shared volume among all the containers. For example: + + ```bash + mkdir /opt/.secrets/ + openssl rand -out /opt/.secrets/pwd.key -hex 64 + ``` + + - Edit the `/opt/.secrets/common_os_pwdfile` and seed the password for the grid, oracle and database users. For this deployment scenario, it will be a common password for the grid, oracle, and database users. Run the command: + + ```bash + openssl enc -aes-256-cbc -salt -in /opt/.secrets/common_os_pwdfile -out /opt/.secrets/common_os_pwdfile.enc -pass file:/opt/.secrets/pwd.key + rm -f /opt/.secrets/common_os_pwdfile + ``` + +3. Create `rac_host_file` on both Podman and Docker hosts: + + ```bash + mkdir /opt/containers/ + touch /opt/containers/rac_host_file + ``` + +**Notes** + +- To run Oracle RAC using Podman on multiple hosts, refer [Podman macvlan network](https://docs.podman.io/en/latest/markdown/podman-network-create.1.html). +To run Oracle RAC using Oracle Container Runtime for Docker on multiple hosts, refer [Docker macvlan network](https://docs.docker.com/network/macvlan/). +- If the Docker or Podman bridge network is not available outside your host, you can use the Oracle Connection Manager [CMAN image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleConnectionManager) to access the Oracle RAC Database from outside the host. +- If you want to specify a different password for each of the user accounts, then create three different files, encrypt them under `/opt/.secrets`, and pass the file name to the container using the environment variable. Environment variables can be ORACLE_PWD_FILE for the oracle user, GRID_PWD_FILE for the grid user, and DB_PWD_FILE for the database password. +- If you want to use a common password for the oracle, grid, and database users, then you can assign a password file name to COMMON_OS_PWD_FILE environment variable. + +## Section 4: Oracle RAC on Docker + +If you are deploying Oracle RAC On Podman, skip to the [Section 5: Oracle RAC on Podman](#section-5-oracle-rac-on-podman). + +**Note** Oracle RAC is supported for production use on Docker starting with Oracle Database 21c (21.3). On earlier releases, Oracle RAC on Docker is supported for development and and test environments. 
To deploy Oracle RAC on Docker, use the pre-built images available on the Oracle Container Registry. Execute the following steps in the given order to deploy Oracle RAC on Docker:
+
+To create an Oracle RAC environment on Docker, complete each of these steps in order.
+
+### Section 4.1 : Prerequisites for Running Oracle RAC on Docker
+
+To run Oracle RAC on Docker, you must install and configure [Oracle Container Runtime for Docker](https://docs.oracle.com/cd/E52668_01/E87205/html/index.html) on Oracle Linux 7. You must have sufficient space on the Docker file system (`/var/lib/docker`), configured with the Docker OverlayFS storage driver option `overlay2`.
+
+**IMPORTANT:** Completing prerequisite steps is a requirement for successful configuration.
+
+Complete each prerequisite step in order, customized for your environment.
+
+1. Verify that you have enough memory and CPU resources available for all containers. For this `README.md`, we used the following configuration:
+
+   - 2 Docker hosts
+   - CPU Cores: 1 Socket with 4 cores, with 2 threads for each core Intel® Xeon® Platinum 8167M CPU at 2.00 GHz
+   - RAM: 60 GB
+   - Swap memory: 32 GB
+   - Oracle Linux 7.9 or later with the Unbreakable Enterprise Kernel 6: 5.4.17-2102.200.13.el7uek.x86_64.
+
+2. Oracle RAC must run certain processes in real-time mode. To run processes inside a container in real-time mode, you must make changes to the Docker configuration files. For details, see the [`dockerd` documentation](https://docs.docker.com/engine/reference/commandline/dockerd/#examples). Edit the Docker daemon settings based on the Docker version:
+
+   - Check the Docker version. In the following output, the Oracle `docker-engine` version is 19.03.
+
+     ```bash
+     rpm -qa | grep docker
+     docker-cli-19.03.11.ol-9.el7.x86_64
+     docker-engine-19.03.11.ol-9.el7.x86_64
+     ```
+
+   - If the Oracle `docker-engine` version is greater than or equal to 19.03: Edit `/usr/lib/systemd/system/docker.service` and add additional parameters in the `[Service]` section for the `dockerd` daemon:
+
+     ```bash
+     ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --cpu-rt-runtime=950000
+     ```
+
+   - If the Oracle `docker-engine` version is less than 19.03: Edit `/etc/sysconfig/docker` and add the following:
+
+     ```bash
+     OPTIONS='--selinux-enabled --cpu-rt-runtime=950000'
+     ```
+
+3. After you have modified the `dockerd` daemon, reload the daemon with the changes you have made:
+
+   ```bash
+   systemctl daemon-reload
+   systemctl stop docker
+   systemctl start docker
+   ```
+
+### Section 4.2: Setup Oracle RAC Container on Docker
+
+This section provides a step-by-step procedure to deploy Oracle RAC in containers with block devices or with a storage container. For details about the environment variables, refer to [Section 7: Environment Variables for the First Node](#section-7-environment-variables-for-the-first-node).
+
+Refer to [Section 3: Network and Password Management](#section-3--network-and-password-management) and set up the network on the container host based on your Oracle RAC environment. If you have already completed the setup, proceed further.
+
+#### Deploying Oracle RAC on Container with Block Devices on Docker
+
+If you are using an NFS volume, skip to the section [Deploying Oracle RAC on Container With Oracle RAC Storage Container](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container).
+
+Make sure the ASM devices do not have any existing file system. 
To clear any other file system from the devices, use the following command: + + ```bash + dd if=/dev/zero of=/dev/xvde bs=8k count=10000 + ``` + +Repeat for each shared block device. In the preceding example, `/dev/xvde` is a shared Xen virtual block device. + +Now create the Oracle RAC container using the image. You can use the following example to create a container: + + ```bash +docker create -t -i \ + --hostname racnoded1 \ + --volume /boot:/boot:ro \ + --volume /dev/shm \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --device=/dev/oracleoci/oraclevdd:/dev/asm_disk1 \ + --device=/dev/oracleoci/oraclevde:/dev/asm_disk2 \ + --privileged=false \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + -e DNS_SERVERS="172.16.1.25" \ + -e NODE_VIP=172.16.1.130 \ + -e VIP_HOSTNAME=racnoded1-vip \ + -e PRIV_IP=192.168.17.100 \ + -e PRIV_HOSTNAME=racnoded1-priv \ + -e PUBLIC_IP=172.16.1.100 \ + -e PUBLIC_HOSTNAME=racnoded1 \ + -e SCAN_NAME=racnodedc1-scan \ + -e OP_TYPE=INSTALL \ + -e DOMAIN=example.com \ + -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \ + -e ASM_DISCOVERY_DIR=/dev \ + -e CMAN_HOSTNAME=racnodedc1-cman \ + -e CMAN_IP=172.16.1.164 \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + -e RESET_FAILED_SYSTEMD="true" \ + --restart=always --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ + --cpu-rt-runtime=95000 --ulimit rtprio=99 \ + --name racnoded1 \ + oracle/database-rac:21.3.0 +``` + +**Note:** Change environment variables such as `NODE_IP`, `PRIV_IP`, `PUBLIC_IP`, `ASM_DEVICE_LIST`, `PWD_FILE`, and `PWD_KEY` based on your environment. Also, ensure you use the correct device names on each host. + +#### Deploying Oracle RAC on Container With Oracle RAC Storage Container + +If you are using block devices, skip to the section [Deploying Oracle RAC on Container with Block Devices on Docker](#deploying-oracle-rac-on-container-with-block-devices-on-docker) + +Now create the Oracle RAC container using the image. 
You can use the following example to create a container: + + ```bash + docker create -t -i \ + --hostname racnoded1 \ + --volume /boot:/boot:ro \ + --volume /dev/shm \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --privileged=false \ + --volume racstorage:/oradata \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + -e DNS_SERVERS="172.16.1.25" \ + -e NODE_VIP=172.16.1.130 \ + -e VIP_HOSTNAME=racnoded1-vip \ + -e PRIV_IP=192.168.17.100 \ + -e PRIV_HOSTNAME=racnoded1-priv \ + -e PUBLIC_IP=172.16.1.100 \ + -e PUBLIC_HOSTNAME=racnoded1 \ + -e SCAN_NAME=racnodedc1-scan \ + -e OP_TYPE=INSTALL \ + -e DOMAIN=example.com \ + -e ASM_DISCOVERY_DIR=/oradata \ + -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ + -e CMAN_HOSTNAME=racnodedc1-cman \ + -e CMAN_IP=172.16.1.164 \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + -e RESET_FAILED_SYSTEMD="true" \ + --restart=always \ + --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ + --cpu-rt-runtime=95000 \ + --ulimit rtprio=99 \ + --name racnoded1 \ + oracle/database-rac:21.3.0 + ``` + +**Notes:** + +- Change environment variables such as `NODE_IP`, `PRIV_IP`, `PUBLIC_IP`, `ASM_DEVICE_LIST`, `PWD_FILE`, and `PWD_KEY` based on your environment. Also, ensure you use the correct device names on each host. +- You must have created the `racstorage` volume before the creation of the Oracle RAC Container. For details, please refer [OracleRACStorageServer](../OracleRACStorageServer/README.md). +- For details about the available environment variables, refer the [Section 7](#section-7-environment-variables-for-the-first-node). + +#### Assign networks to Oracle RAC containers + +You need to assign the Docker networks created in section 1 to containers. Execute the following commands: + + ```bash + +docker network disconnect bridge racnoded1 +docker network connect rac_pub1_nw --ip 172.16.1.100 racnoded1 +docker network connect rac_priv1_nw --ip 192.168.17.100 racnoded1 + ``` + +#### Start the first container + +To start the first container, run the following command: + + ```bash + docker start racnoded1 + ``` + +It can take at least 40 minutes or longer to create the first node of the cluster. To check the logs, use the following command from another terminal session: + + ```bash + docker logs -f racnoded1 + ``` + +You should see the database creation success message at the end: + + ```bash + #################################### + ORACLE RAC DATABASE IS READY TO USE! + #################################### + ``` + +#### Connect to the Oracle RAC container + +To connect to the container execute the following command: + +```bash +docker exec -i -t racnoded1 /bin/bash +``` + +If the install fails for any reason, log in to the container using the preceding command and check `/tmp/orod.log`. + +- You can also review the Grid Infrastructure logs located at `$GRID_BASE/diag/crs` and check for failure logs. +- If the failure occurred during the database creation then check the database logs. + +### Section 4.3: Adding an Oracle RAC Node using a Docker Container + +Before proceeding to the next step, ensure Oracle Grid Infrastructure is running and the Oracle RAC Database is open as per instructions in [Section 4.2: Setup Oracle RAC on Docker](#section-42-setup-oracle-rac-container-on-docker). 
Otherwise, the node addition process will fail.
+
+Refer to [Section 3: Network and Password Management](#section-3--network-and-password-management) and set up the network on the container host based on your Oracle RAC environment. If you have already completed the setup, proceed further.
+
+For details about the environment variables, refer to [Section 8](#section-8-environment-variables-for-the-second-and-subsequent-nodes).
+
+Reset the password on the existing Oracle RAC node for SSH setup between an existing node in the cluster and the new node. The password must be the same on all the nodes for the `grid` and `oracle` users. Execute the following commands on an existing node of the cluster:
+
+```bash
+docker exec -i -t -u root racnoded1 /bin/bash
+sh /opt/scripts/startup/resetOSPassword.sh --help
+sh /opt/scripts/startup/resetOSPassword.sh --op_type reset_grid_oracle --pwd_file common_os_pwdfile.enc --secret_volume /run/secrets --pwd_key_file pwd.key
+```
+
+**Note:** If you do not have a common secret volume among Oracle RAC containers, populate the password file with the same password that you have used on the new node, encrypt the file, and execute `resetOSPassword.sh` on the existing node of the cluster.
+
+#### Deploying Oracle RAC Additional Node on Container with Block Devices on Docker
+
+If you are using an NFS volume, skip to the section [Deploying Oracle RAC on Container with Oracle RAC Storage Container on Docker](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container).
+
+To create additional nodes, use the following command:
+
+```bash
+docker create -t -i \
+  --hostname racnoded2 \
+  --volume /boot:/boot:ro \
+  --volume /dev/shm \
+  --tmpfs /dev/shm:rw,exec,size=4G \
+  --volume /opt/containers/rac_host_file:/etc/hosts \
+  --volume /opt/.secrets:/run/secrets:ro \
+  --dns=172.16.1.25 \
+  --dns-search=example.com \
+  --device=/dev/oracleoci/oraclevdd:/dev/asm_disk1 \
+  --device=/dev/oracleoci/oraclevde:/dev/asm_disk2 \
+  --privileged=false \
+  --cap-add=SYS_NICE \
+  --cap-add=SYS_RESOURCE \
+  --cap-add=NET_ADMIN \
+  -e DNS_SERVERS="172.16.1.25" \
+  -e EXISTING_CLS_NODES=racnoded1 \
+  -e NODE_VIP=172.16.1.131 \
+  -e VIP_HOSTNAME=racnoded2-vip \
+  -e PRIV_IP=192.168.17.101 \
+  -e PRIV_HOSTNAME=racnoded2-priv \
+  -e PUBLIC_IP=172.16.1.101 \
+  -e PUBLIC_HOSTNAME=racnoded2 \
+  -e DOMAIN=example.com \
+  -e SCAN_NAME=racnodedc1-scan \
+  -e ASM_DISCOVERY_DIR=/dev \
+  -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \
+  -e ORACLE_SID=ORCLCDB \
+  -e OP_TYPE=ADDNODE \
+  -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \
+  -e PWD_KEY=pwd.key \
+  -e RESET_FAILED_SYSTEMD="true" \
+  --restart=always --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
+  --cpu-rt-runtime=95000 --ulimit rtprio=99 \
+  --name racnoded2 \
+  oracle/database-rac:21.3.0
+```
+
+For details of all environment variables and parameters, refer to [Section 7](#section-7-environment-variables-for-the-first-node).
+
+#### Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Docker
+
+If you are using physical block devices for shared storage, skip to [Deploying Oracle RAC on Container with Block Devices on Docker](#deploying-oracle-rac-on-container-with-block-devices-on-docker).
+
+Use the existing `racstorage:/oradata` volume when creating the additional container using the image. 
+ +For example: + +```bash +docker create -t -i \ + --hostname racnoded2 \ + --volume /boot:/boot:ro \ + --volume /dev/shm \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --volume racstorage:/oradata \ + --privileged=false \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + -e DNS_SERVERS="172.16.1.25" \ + -e EXISTING_CLS_NODES=racnoded1 \ + -e NODE_VIP=172.16.1.131 \ + -e VIP_HOSTNAME=racnoded2-vip \ + -e PRIV_IP=192.168.17.101 \ + -e PRIV_HOSTNAME=racnoded2-priv \ + -e PUBLIC_IP=172.16.1.101 \ + -e PUBLIC_HOSTNAME=racnoded2 \ + -e DOMAIN=example.com \ + -e SCAN_NAME=racnodedc1-scan \ + -e ASM_DISCOVERY_DIR=/oradata \ + -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ + -e ORACLE_SID=ORCLCDB \ + -e OP_TYPE=ADDNODE \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + -e RESET_FAILED_SYSTEMD="true" \ + --restart=always --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ + --cpu-rt-runtime=95000 --ulimit rtprio=99 \ + --name racnoded2 \ + oracle/database-rac:21.3.0 +``` + +**Notes:** + +- You must have created **racstorage** volume before the creation of the Oracle RAC container. +- You can change env variables such as IPs and ORACLE_PWD based on your env. For details about the env variables, refer the section 8. + +#### Assign Network to additional Oracle RAC container + +Connect the private and public networks you created earlier to the container: + +```bash +docker network disconnect bridge racnoded2 +docker network connect rac_pub1_nw --ip 172.16.1.101 racnoded2 +docker network connect rac_priv1_nw --ip 192.168.17.101 racnoded2 +``` + +#### Start Oracle RAC racnode2 container + +Start the container + +```bash +docker start racnoded2 +``` + +To check the database logs, tail the logs using the following command: + +```bash +docker logs -f racnoded2 +``` + +You should see the database creation success message at the end. + +```bash +################################################################# +Oracle Database ORCLCDB is up and running on racnoded2 +################################################################# +Running User Script for oracle user +Setting Remote Listener +#################################### +ORACLE RAC DATABASE IS READY TO USE! +#################################### +``` + +#### Connect to the Oracle RAC racnode2 container + +To connect to the container execute the following command: + +```bash +docker exec -i -t racnoded2 /bin/bash +``` + +If the node addition fails, log in to the container using the preceding command and review `/tmp/orod.log`. You can also review the Grid Infrastructure logs i.e. `$GRID_BASE/diag/crs` and check for failure logs. If the node creation has failed during the database creation process, then check DB logs. + +## Section 4.4: Setup Oracle RAC Container on Docker with Docker Compose + +Oracle RAC database can also be deployed with Docker Compose. An example of how to install Oracle RAC Database on Single Host via Bridge Network is explained in this [README.md](./samples/racdockercompose/README.md) + +Same section covers various below scenarios as well with docker compose- +1. Deploying Oracle RAC on Container with Block Devices on Docker with Docker Compose +2. Deploying Oracle RAC on Container With Oracle RAC Storage Container with Docker Compose +3. 
Deploying Oracle RAC Additional Node on Container with Block Devices on Docker with Docker Compose
+4. Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Docker with Docker Compose
+
+***Note:*** Docker and Docker Compose are not supported on OL8. You need OL7.9 with UEK R5 or R6.
+
+## Section 5: Oracle RAC on Podman
+
+If you are deploying Oracle RAC on Docker, skip to [Section 4: Oracle RAC on Docker](#section-4-oracle-rac-on-docker).
+
+**Note:** Oracle RAC is supported for production use on Podman starting with Oracle Database 19c (19.16) and Oracle Database 21c (21.7). You can deploy Oracle RAC on Podman using the pre-built images available on Oracle Container Registry.
+
+To create an Oracle RAC environment on Podman, complete each of the following steps in order.
+
+### Section 5.1 : Prerequisites for Running Oracle RAC on Podman
+
+You must install and configure [Podman release 4.0.2](https://docs.oracle.com/en/operating-systems/oracle-linux/podman/podman-InstallingPodmanandRelatedUtilities.html#podman-install) or later on Oracle Linux 8.5 or later to run Oracle RAC on Podman.
+
+**Notes**:
+
+- If you are running Oracle Linux 8 with UEK R7, remove `--cpu-rt-runtime=95000 \` from the container creation commands in the following sections:
+  - [Section 5.2: Setup RAC Containers on Podman](#section-52-setup-rac-containers-on-podman).
+  - [Section 5.3: Adding a Oracle RAC Node using a container on Podman](#section-53-adding-a-oracle-rac-node-using-a-container-on-podman).
+
+- You can check the details on [Oracle Linux and Unbreakable Enterprise Kernel (UEK) Releases](https://blogs.oracle.com/scoter/post/oracle-linux-and-unbreakable-enterprise-kernel-uek-releases).
+
+- You do not need to execute step 2 in this section to create and enable `podman-rac-cgroup.service` when you are running Oracle Linux 8 with Unbreakable Enterprise Kernel R7.
+
+**IMPORTANT:** Completing the prerequisite steps is a requirement for successful configuration.
+
+Complete each prerequisite step in order, customized for your environment.
+
+1. Verify that you have enough memory and CPU resources available for all containers. In this `README.md` for Podman, we used the following configuration:
+
+   - 2 Podman hosts
+   - CPU Cores: 1 socket with 4 cores and 2 threads per core (Intel® Xeon® Platinum 8167M CPU at 2.00 GHz)
+   - RAM: 60 GB
+   - Swap memory: 32 GB
+   - Oracle Linux 8.5 (Linux-x86-64) with the Unbreakable Enterprise Kernel 6: `5.4.17-2136.300.7.el8uek.x86_64`.
+
+2. Oracle RAC must run certain processes in real-time mode. To run processes inside a container in real-time mode, populate the real-time CPU budgeting on machine restarts.
Create a one-shot systemd service as follows:
+
+   - Create a file `/etc/systemd/system/podman-rac-cgroup.service`
+   - Append the following lines:
+
+   ```INI
+   [Unit]
+   Description=Populate Cgroups with real time chunk on machine restart
+   After=multi-user.target
+   [Service]
+   Type=oneshot
+   ExecStart=/bin/bash -c "/bin/echo 950000 > /sys/fs/cgroup/cpu,cpuacct/machine.slice/cpu.rt_runtime_us && /bin/systemctl restart podman-restart.service"
+   StandardOutput=journal
+   CPUAccounting=yes
+   Slice=machine.slice
+   [Install]
+   WantedBy=multi-user.target
+   ```
+
+   - After creating the file `/etc/systemd/system/podman-rac-cgroup.service` with the lines appended in the preceding step, reload systemd and enable and start the services using the following commands:
+
+   ```bash
+   systemctl daemon-reload
+   systemctl enable podman-rac-cgroup.service
+   systemctl enable podman-restart.service
+   systemctl start podman-rac-cgroup.service
+   ```
+
+3. If SELinux is enabled on the Podman host, then you must create an SELinux policy for Oracle RAC on Podman.
+
+You can check the SELinux status on your host machine by running the `sestatus` command.
+
+For details about how to create an SELinux policy for Oracle RAC on Podman, see "How to Configure Podman for SELinux Mode" in the publication [Oracle Real Application Clusters Installation Guide for Podman Oracle Linux x86-64](https://docs.oracle.com/en/database/oracle/oracle-database/21/racpd/target-configuration-oracle-rac-podman.html#GUID-59138DF8-3781-4033-A38F-E0466884D008).
+
+### Section 5.2: Setup RAC Containers on Podman
+
+This section provides a step-by-step procedure to deploy Oracle RAC containers using either block devices or the Oracle RAC storage container. For details of the environment variables, refer to [Section 7: Environment Variables for the First Node](#section-7-environment-variables-for-the-first-node).
+
+Refer to [Section 3: Network and Password Management](#section-3--network-and-password-management) and set up the network on the container host based on your Oracle RAC environment. If you have already completed the network setup, proceed to the next step.
+
+#### Deploying Oracle RAC Containers with Block Devices on Podman
+
+If you are using an NFS volume, skip to the section [Deploying Oracle RAC on Container With Oracle RAC Storage Container on Podman](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container-on-podman).
+
+Make sure the ASM devices do not contain an existing file system. To clear any existing file system from a device, use the following command:
+
+  ```bash
+  dd if=/dev/zero of=/dev/xvde bs=8k count=10000
+  ```
+
+Repeat for each shared block device. In the preceding example, `/dev/xvde` is a shared Xen virtual block device.
+
+Now create the Oracle RAC container using the image. For the details of environment variables, refer to Section 7.
You can use the following example to create a container: + + ```bash + podman create -t -i \ + --hostname racnodep1 \ + --volume /boot:/boot:ro \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --device=/dev/oracleoci/oraclevdd:/dev/asm_disk1 \ + --device=/dev/oracleoci/oraclevde:/dev/asm_disk2 \ + --privileged=false \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + --cap-add=AUDIT_WRITE \ + --cap-add=AUDIT_CONTROL \ + --memory 16G \ + --memory-swap 32G \ + --sysctl kernel.shmall=2097152 \ + --sysctl "kernel.sem=250 32000 100 128" \ + --sysctl kernel.shmmax=8589934592 \ + --sysctl kernel.shmmni=4096 \ + -e DNS_SERVERS="172.16.1.25" \ + -e NODE_VIP=172.16.1.200 \ + -e VIP_HOSTNAME=racnodep1-vip \ + -e PRIV_IP=192.168.17.170 \ + -e PRIV_HOSTNAME=racnodep1-priv \ + -e PUBLIC_IP=172.16.1.170 \ + -e PUBLIC_HOSTNAME=racnodep1 \ + -e SCAN_NAME=racnodepc1-scan \ + -e OP_TYPE=INSTALL \ + -e DOMAIN=example.com \ + -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \ + -e ASM_DISCOVERY_DIR=/dev \ + -e CMAN_HOSTNAME=racnodepc1-cman \ + -e CMAN_IP=172.16.1.166 \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + -e ORACLE_SID=ORCLCDB \ + -e RESET_FAILED_SYSTEMD="true" \ + -e DEFAULT_GATEWAY="172.16.1.1" \ + -e TMPDIR=/var/tmp \ + --restart=always \ + --systemd=always \ + --cpu-rt-runtime=95000 \ + --ulimit rtprio=99 \ + --name racnodep1 \ + localhost/oracle/database-rac:21.3.0-21.13.0 + ``` + +**Note:** Change environment variables such as `NODE_IP`, `PRIV_IP`, `PUBLIC_IP`, `ASM_DEVICE_LIST`, `PWD_FILE`, and `PWD_KEY` based on your environment. Also, ensure you use the correct device names on each host. + +#### Deploying Oracle RAC on Container With Oracle RAC Storage Container on Podman + +If you are using block devices, skip to the section [Deploying RAC Containers with Block Devices on Podman](#deploying-oracle-rac-containers-with-block-devices-on-podman). +Now create the Oracle RAC container using the image. 
You can use the following example to create a container: + + ```bash + podman create -t -i \ + --hostname racnodep1 \ + --volume /boot:/boot:ro \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --privileged=false \ + --volume racstorage:/oradata \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + --cap-add=AUDIT_WRITE \ + --cap-add=AUDIT_CONTROL \ + --memory 16G \ + --memory-swap 32G \ + --sysctl kernel.shmall=2097152 \ + --sysctl "kernel.sem=250 32000 100 128" \ + --sysctl kernel.shmmax=8589934592 \ + --sysctl kernel.shmmni=4096 \ + -e DNS_SERVERS="172.16.1.25" \ + -e NODE_VIP=172.16.1.200 \ + -e VIP_HOSTNAME=racnodep1-vip \ + -e PRIV_IP=192.168.17.170 \ + -e PRIV_HOSTNAME=racnodep1-priv \ + -e PUBLIC_IP=172.16.1.170 \ + -e PUBLIC_HOSTNAME=racnodep1 \ + -e SCAN_NAME=racnodepc1-scan \ + -e OP_TYPE=INSTALL \ + -e DOMAIN=example.com \ + -e ASM_DISCOVERY_DIR=/oradata \ + -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ + -e CMAN_HOSTNAME=racnodepc1-cman \ + -e CMAN_IP=172.16.1.166 \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + -e ORACLE_SID=ORCLCDB \ + -e RESET_FAILED_SYSTEMD="true" \ + -e DEFAULT_GATEWAY="172.16.1.1" \ + -e TMPDIR=/var/tmp \ + --restart=always \ + --systemd=always \ + --cpu-rt-runtime=95000 \ + --ulimit rtprio=99 \ + --name racnodep1 \ + localhost/oracle/database-rac:21.3.0-21.13.0 + ``` + +**Notes:** + +- Change environment variables such as `NODE_IP`, `PRIV_IP`, `PUBLIC_IP`, `ASM_DEVICE_LIST`, `PWD_FILE`, and `PWD_KEY` based on your environment. Also, ensure you use the correct device names on each host. +- You must have created the `racstorage` volume before the creation of the Oracle RAC Container. For details about the available environment variables, refer the [Section 7](#section-7-environment-variables-for-the-first-node). + +#### Assign networks to Oracle RAC containers Created Using Podman + +You need to assign the Podman networks created in section 1 to containers. Execute the following commands: + + ```bash + podman network disconnect podman racnodep1 + podman network connect rac_pub1_nw --ip 172.16.1.170 racnodep1 + podman network connect rac_priv1_nw --ip 192.168.17.170 racnodep1 + ``` + +#### Start the first container Created Using Podman + +To start the first container, run the following command: + + ```bash + podman start racnodep1 + ``` + +It can take at least 40 minutes or longer to create the first node of the cluster. To check the database logs, tail the logs using the following command: + +```bash +podman exec racnodep1 /bin/bash -c "tail -f /tmp/orod.log" +``` + +You should see the database creation success message at the end. + +```bash +01-31-2024 12:31:20 UTC : : ################################################################# +01-31-2024 12:31:20 UTC : : Oracle Database ORCLCDB is up and running on racnodep1 +01-31-2024 12:31:20 UTC : : ################################################################# +01-31-2024 12:31:20 UTC : : Running User Script +01-31-2024 12:31:20 UTC : : Setting Remote Listener +01-31-2024 12:31:27 UTC : : 172.16.1.166 +01-31-2024 12:31:27 UTC : : Executing script to set the remote listener +01-31-2024 12:31:28 UTC : : #################################### +01-31-2024 12:31:28 UTC : : ORACLE RAC DATABASE IS READY TO USE! 
+01-31-2024 12:31:28 UTC : : ####################################
+```
+
+#### Connect to the Oracle RAC container Created Using Podman
+
+To connect to the container, execute the following command:
+
+```bash
+podman exec -i -t racnodep1 /bin/bash
+```
+
+If the install fails for any reason, log in to the container using the preceding command and check `/tmp/orod.log`. You can also review the Grid Infrastructure logs located at `$GRID_BASE/diag/crs` and check for failure logs. If the failure occurred during database creation, then check the database logs.
+
+### Section 5.3: Adding a Oracle RAC Node using a container on Podman
+
+Before proceeding to the next step, ensure Oracle Grid Infrastructure is running and the Oracle RAC Database is open as per instructions in [Section 5.2: Setup RAC Containers on Podman](#section-52-setup-rac-containers-on-podman). Otherwise, the node addition process will fail.
+
+Refer to [Section 3: Network and Password Management](#section-3--network-and-password-management) and set up the network on the container host based on your Oracle RAC environment. If you have already completed the network setup, proceed to the next step.
+
+For details of the environment variables, refer to [Section 8](#section-8-environment-variables-for-the-second-and-subsequent-nodes).
+
+Reset the password on the existing Oracle RAC node for SSH setup between an existing node in the cluster and the new node. The password must be the same on all nodes for the `grid` and `oracle` users. Execute the following commands on an existing node of the cluster:
+
+```bash
+podman exec -i -t -u root racnode1 /bin/bash
+sh /opt/scripts/startup/resetOSPassword.sh --help
+sh /opt/scripts/startup/resetOSPassword.sh --op_type reset_grid_oracle --pwd_file common_os_pwdfile.enc --secret_volume /run/secrets --pwd_key_file pwd.key
+```
+
+**Note:** If you do not have a common secret volume among Oracle RAC containers, populate the password file with the same password that you have used on the new node, encrypt the file, and execute `resetOSPassword.sh` on the existing node of the cluster.
+
+#### Deploying Oracle RAC Additional Node on Container with Block Devices on Podman
+
+If you are using an NFS volume, skip to the section [Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Podman](#deploying-oracle-rac-additional-node-on-container-with-oracle-rac-storage-container-on-podman). Before creating the container, you can confirm that the shared block devices are visible on this host, as shown in the check below.
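+
+This is only a sketch, assuming the `/dev/oracleoci/oraclevdd` and `/dev/oracleoci/oraclevde` device paths that are mapped into the container in the following example:
+
+```bash
+# Confirm that the shared block devices are present on the Podman host
+ls -l /dev/oracleoci/oraclevdd /dev/oracleoci/oraclevde
+lsblk /dev/oracleoci/oraclevdd /dev/oracleoci/oraclevde
+```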
+ +To create additional nodes, use the following command: + +```bash +podman create -t -i \ + --hostname racnodep2 \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /boot:/boot:ro \ + --dns-search=example.com \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --device=/dev/oracleoci/oraclevdd:/dev/asm_disk1 \ + --device=/dev/oracleoci/oraclevde:/dev/asm_disk2 \ + --privileged=false \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + --cap-add=AUDIT_CONTROL \ + --cap-add=AUDIT_WRITE \ + --memory 16G \ + --memory-swap 32G \ + --sysctl kernel.shmall=2097152 \ + --sysctl "kernel.sem=250 32000 100 128" \ + --sysctl kernel.shmmax=8589934592 \ + --sysctl kernel.shmmni=4096 \ + -e DNS_SERVERS="172.16.1.25" \ + -e EXISTING_CLS_NODES=racnodep1 \ + -e NODE_VIP=172.16.1.201 \ + -e VIP_HOSTNAME=racnodep2-vip \ + -e PRIV_IP=192.168.17.171 \ + -e PRIV_HOSTNAME=racnodep2-priv \ + -e PUBLIC_IP=172.16.1.171 \ + -e PUBLIC_HOSTNAME=racnodep2 \ + -e DOMAIN=example.com \ + -e SCAN_NAME=racnodepc1-scan \ + -e ASM_DISCOVERY_DIR=/dev \ + -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \ + -e ORACLE_SID=ORCLCDB \ + -e OP_TYPE=ADDNODE \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + -e RESET_FAILED_SYSTEMD="true" \ + -e DEFAULT_GATEWAY="172.16.1.1" \ + -e TMPDIR=/var/tmp \ + --systemd=always \ + --cpu-rt-runtime=95000 \ + --ulimit rtprio=99 \ + --restart=always \ + --name racnodep2 \ + localhost/oracle/database-rac:21.3.0-21.13.0 +``` + +For details of all environment variables and parameters, refer to [Section 8](#section-8-environment-variables-for-the-second-and-subsequent-nodes). + +#### Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Podman + +If you are using physical block devices for shared storage, skip to [Deploying Oracle RAC Additional Node on Container with Block Devices on Podman](#deploying-oracle-rac-additional-node-on-container-with-block-devices-on-podman). + +Use the existing `racstorage:/oradata` volume when creating the additional container using the image. 
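+
+You can first confirm that the volume exists on this Podman host. This is a minimal check, assuming the NFS volume was created with the name `racstorage` as described in the [OracleRACStorageServer](../OracleRACStorageServer/README.md) README:
+
+```bash
+# Confirm that the racstorage volume exists and review its mount options
+podman volume inspect racstorage
+```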
+ +For example: + +```bash +podman create -t -i \ + --hostname racnodep2 \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /boot:/boot:ro \ + --dns-search=example.com \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --privileged=false \ + --volume racstorage:/oradata \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + --cap-add=AUDIT_WRITE \ + --cap-add=AUDIT_CONTROL \ + --memory 16G \ + --memory-swap 32G \ + --sysctl kernel.shmall=2097152 \ + --sysctl "kernel.sem=250 32000 100 128" \ + --sysctl kernel.shmmax=8589934592 \ + --sysctl kernel.shmmni=4096 \ + -e DNS_SERVERS="172.16.1.25" \ + -e EXISTING_CLS_NODES=racnodep1 \ + -e NODE_VIP=172.16.1.201 \ + -e VIP_HOSTNAME=racnodep2-vip \ + -e PRIV_IP=192.168.17.171 \ + -e PRIV_HOSTNAME=racnodep2-priv \ + -e PUBLIC_IP=172.16.1.171 \ + -e PUBLIC_HOSTNAME=racnodep2 \ + -e DOMAIN=example.com \ + -e SCAN_NAME=racnodepc1-scan \ + -e ASM_DISCOVERY_DIR=/oradata \ + -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ + -e ORACLE_SID=ORCLCDB \ + -e OP_TYPE=ADDNODE \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + -e RESET_FAILED_SYSTEMD="true" \ + -e DEFAULT_GATEWAY="172.16.1.1" \ + -e TMPDIR=/var/tmp \ + --systemd=always \ + --cpu-rt-runtime=95000 \ + --ulimit rtprio=99 \ + --restart=always \ + --name racnodep2 \ + localhost/oracle/database-rac:21.3.0-21.13.0 +``` + +**Notes:** + +- You must have created **racstorage** volume before the creation of the Oracle RAC container. +- You can change env variables such as IPs and ORACLE_PWD based on your env. For details about the env variables, refer the [Section 8](#section-8-environment-variables-for-the-second-and-subsequent-nodes). + +#### Assign Network to additional Oracle RAC container Created Using Podman + +Connect the private and public networks you created earlier to the container: + +```bash +podman network disconnect podman racnodep2 +podman network connect rac_pub1_nw --ip 172.16.1.171 racnodep2 +podman network connect rac_priv1_nw --ip 192.168.17.171 racnodep2 +``` + +#### Start Oracle RAC container + +Start the container + +```bash +podman start racnodep2 +``` + +To check the database logs, tail the logs using the following command: + +```bash +podman exec racnodep2 /bin/bash -c "tail -f /tmp/orod.log" +``` + +You should see the database creation success message at the end. + +```bash +02-01-2024 09:36:14 UTC : : ################################################################# +02-01-2024 09:36:14 UTC : : Oracle Database ORCLCDB is up and running on racnodep2 +02-01-2024 09:36:14 UTC : : ################################################################# +02-01-2024 09:36:14 UTC : : Running User Script +02-01-2024 09:36:14 UTC : : Setting Remote Listener +02-01-2024 09:36:14 UTC : : #################################### +02-01-2024 09:36:14 UTC : : ORACLE RAC DATABASE IS READY TO USE! +02-01-2024 09:36:14 UTC : : #################################### +``` +## Section 5.4: Setup Oracle RAC Container on Podman with Podman Compose + +Oracle RAC database can also be deployed with podman Compose. An example of how to install Oracle RAC Database on Single Host via Bridge Network is explained in this [README.md](./samples/racpodmancompose/README.md) + +Same section covers various below scenarios as well with podman compose- +1. 
Deploying Oracle RAC on Container with Block Devices on Podman with Podman Compose
+2. Deploying Oracle RAC on Container with NFS Devices on Podman with Podman Compose
+3. Deploying Oracle RAC Additional Node on Container with Block Devices on Podman with Podman Compose
+4. Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Podman with Podman Compose
+
+***Note:*** Podman and Podman Compose are not supported on OL7. You need at least OL8.8 with UEK R7.
+
+## Section 6: Connecting to an Oracle RAC Database
+
+**IMPORTANT:** This section assumes that you have successfully created an Oracle RAC cluster using the preceding sections.
+
+If you are using a connection manager and have exposed port 1521 on the host, then connect from an external client using the following connection string, where `<container_host>` is the container host, and `<oracle_sid>` is the database system identifier:
+
+```bash
+system/<password>@//<container_host>:1521/<oracle_sid>
+```
+
+If you are using the bridge created using the MACVLAN driver, and you have configured DNS appropriately, then you can connect using the public Single Client Access Name (SCAN) listener directly from any external client. To connect with the SCAN, use the following connection string, where `<scan_name>` is the SCAN name for the database, and `<oracle_sid>` is the database system identifier:
+
+```bash
+system/<password>@//<scan_name>:1521/<oracle_sid>
+```
+
+## Section 7: Environment Variables for the First Node
+
+This section provides information about the environment variables that can be used when creating the first node of a cluster.
+
+```bash
+OP_TYPE=###Specify the Operation TYPE. It can accept 2 values INSTALL OR ADDNODE####
+NODE_VIP=####Specify the Node VIP###
+VIP_HOSTNAME=###Specify the VIP hostname###
+PRIV_IP=###Specify the Private IP###
+PRIV_HOSTNAME=###Specify the Private Hostname###
+PUBLIC_IP=###Specify the public IP###
+PUBLIC_HOSTNAME=###Specify the public hostname###
+SCAN_NAME=###Specify the scan name###
+ASM_DEVICE_LIST=###Specify the ASM disk list.###
+SCAN_IP=###Specify this if you do not have DNS server###
+DOMAIN=###Default value set to example.com###
+PASSWORD=###OS password will be generated by openssl###
+CLUSTER_NAME=###Default value set to racnode-c####
+ORACLE_SID=###Default value set to ORCLCDB###
+ORACLE_PDB=###Default value set to ORCLPDB###
+ORACLE_PWD=###Default value set to generated by openssl random password###
+ORACLE_CHARACTERSET=###Default value set to AL32UTF8###
+DEFAULT_GATEWAY=###Default gateway. You need this env variable if containers will be running on multiple hosts.####
+CMAN_HOSTNAME=###Connection Manager Host Name###
+CMAN_IP=###Connection manager Host IP###
+ASM_DISCOVERY_DIR=####ASM disk location inside the container. By default it is /dev######
+COMMON_OS_PWD_FILE=###Pass the file name to setup grid and oracle user password. If you specify ORACLE_PWD_FILE, GRID_PWD_FILE, and DB_PWD_FILE then you do not need to specify this env variable###
+ORACLE_PWD_FILE=###Pass the file name to set the password for oracle user.###
+GRID_PWD_FILE=###Pass the file name to set the password for grid user.###
+DB_PWD_FILE=###Pass the file name to set the password for DB user i.e. sys.###
+REMOVE_OS_PWD_FILES=###Set this env variable to true to remove pwd key file and password file after resetting password.###
+CONTAINER_DB_FLAG=###Default value is set to true to create container database.
Set this to false if you do not want to create container database.### +``` + +## Section 8: Environment Variables for the Second and Subsequent Nodes + +This section provides the details about the environment variables that can be used for all additional nodes added to an existing cluster. + +```bash +OP_TYPE=###Specify the Operation TYPE. It can accept 2 values INSTALL OR ADDNODE### +EXISTING_CLS_NODES=###Specify the Existing Node of the cluster which you want to join. If you have 2 nodes in the cluster and you are trying to add the third node then specify existing 2 nodes of the clusters and separate them by comma.#### +NODE_VIP=###Specify the Node VIP### +VIP_HOSTNAME=###Specify the VIP hostname### +PRIV_IP=###Specify the Private IP### +PRIV_HOSTNAME=###Specify the Private Hostname### +PUBLIC_IP=###Specify the public IP### +PUBLIC_HOSTNAME=###Specify the public hostname### +SCAN_NAME=###Specify the scan name### +SCAN_IP=###Specify this if you do not have DNS server### +ASM_DEVICE_LIST=###Specify the ASM Disk lists. +DOMAIN=###Default value set to example.com### +ORACLE_SID=###Default value set to ORCLCDB### +DEFAULT_GATEWAY=###Default gateway. You need this env variable if containers will be running on multiple hosts.#### +CMAN_HOSTNAME=###Connection Manager Host Name### +CMAN_IP=###Connection manager Host IP### +ASM_DISCOVERY_DIR=####ASM disk location inside the container. By default it is /dev###### +COMMON_OS_PWD_FILE=###You need to pass the file name to setup grid and oracle user password. If you specify ORACLE_PWD_FILE, GRID_PWD_FILE, and DB_PWD_FILE then you do not need to specify this env variable### +ORACLE_PWD_FILE=###You need to pass the file name to set the password for oracle user.### +GRID_PWD_FILE=###You need to pass the file name to set the password for grid user.### +DB_PWD_FILE=###You need to pass the file name to set the password for DB user i.e. sys.### +REMOVE_OS_PWD_FILES=###You need to set this to true to remove pwd key file and password file after resetting password.### +``` + +## Section 9: Building a Patched Oracle RAC Container Image + +If you want to build a patched image based on a base 21.3.0 container image, then refer to the GitHub page [Example of how to create a patched database image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch). + +## Section 10 : Sample Container Files for Older Releases + +### Docker + +This project offers sample container files for Oracle Grid Infrastructure and Oracle Real Application Clusters for dev and test: + +- Oracle Database 19c Oracle Grid Infrastructure (19.3) for Linux x86-64 +- Oracle Database 19c (19.3) for Linux x86-64 + +- Oracle Database 18c Oracle Grid Infrastructure (18.3) for Linux x86-64 + +- Oracle Database 18c (18.3) for Linux x86-64 + +- Oracle Database 12c Release 2 Oracle Grid Infrastructure (12.2.0.1.0) for Linux x86-64 + +- Oracle Database 12c Release 2 (12.2.0.1.0) Enterprise Edition for Linux x86-64 + + **Notes:** + +- Note that the Oracle RAC on Docker Container releases are supported only for test and development environments, but not for production environments. + +- If you are planning to build and deploy Oracle RAC 18.3.0, you need to download Oracle 18.3.0 Grid Infrastructure and Oracle Database 18.3.0 Database. 
+
+  - You also need to download Patch# p28322130_183000OCWRU_Linux-x86-64.zip from [Oracle Technology Network](https://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/docker-4418413.html).
+
+  - Stage it under the dockerfiles/18.3.0 folder.
+
+- If you are planning to build and deploy Oracle RAC 12.2.0.1, you need to download Oracle 12.2.0.1 Grid Infrastructure and Oracle Database 12.2.0.1.
+
+  - You also need to download Patch# p27383741_122010_Linux-x86-64.zip from [Oracle Technology Network](https://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/docker-4418413.html).
+
+  - Stage it under the dockerfiles/12.2.0.1 folder.
+
+### Podman
+
+This project offers sample container files for Oracle Grid Infrastructure and Oracle Real Application Clusters for dev and test:
+
+- Oracle Database 19c Oracle Grid Infrastructure (19.3) for Linux x86-64
+- Oracle Database 19c (19.3) for Linux x86-64
+
+**Notes:**
+
+- Because Oracle RAC on Podman is supported on 19c from 19.16 or later, you must download the grid release update (RU) from [support.oracle.com](https://support.oracle.com/portal/).
+
+- For Oracle RAC on Podman v19.22, download the following one-off patches from [support.oracle.com](https://support.oracle.com/portal/):
+  - `35943157`
+  - `35940989`
+
+- Before starting the next step, you must edit `docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles/19.3.0/Dockerfile`, change `oraclelinux:7-slim` to `oraclelinux:8`, and save the file.
+
+- You must add `CV_ASSUME_DISTID=OEL8` inside the `Dockerfile` as an env variable.
+
+- Once the `19.3.0` Oracle RAC on Podman image is built, build the patched image with the downloaded 19.16 RU and one-off patches. To build the patched image, refer to [Example of how to create a patched database image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch).
+
+## Section 11 : Support
+
+### Docker Support
+
+At the time of this release, Oracle RAC on Docker is supported only on Oracle Linux 7. To see current details, refer to the [Real Application Clusters Installation Guide for Docker Containers Oracle Linux x86-64](https://docs.oracle.com/en/database/oracle/oracle-database/21/racdk/oracle-rac-on-docker.html).
+
+### Podman Support
+
+At the time of this release, Oracle RAC on Podman is supported on Oracle Linux 8.5 or later. To see current Linux support certifications, refer to the [Oracle RAC on Podman Documentation](https://docs.oracle.com/en/database/oracle/oracle-database/21/install-and-upgrade.html).
+
+## Section 12 : License
+
+To download and run Oracle Grid and Database, regardless of whether inside or outside a container, you must download the binaries from the Oracle website and accept the license indicated on that page.
+
+All scripts and files hosted in this repository which are required to build the container images are, unless otherwise noted, released under the UPL 1.0 license.
+
+## Section 13 : Copyright
+
+Copyright (c) 2014-2024 Oracle and/or its affiliates.
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/Checksum b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/Checksum new file mode 100644 index 0000000000..821d5a3ad9 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/Checksum @@ -0,0 +1,2 @@ +b7c4c66f801f92d14faa0d791ccda721 LINUX.X64_193000_grid_home.zip +1858bd0d281c60f4ddabd87b1c214a4f LINUX.X64_193000_db_home.zip diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/Containerfile b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/Containerfile new file mode 100644 index 0000000000..728bce6249 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/Containerfile @@ -0,0 +1,269 @@ +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2022 Oracle and/or its affiliates. +# +# ORACLE CONTAINERFILES PROJECT +# -------------------------- +# This is the Containefile for Oracle Database 19c Release 3 Real Application Clusters +# +# REQUIRED FILES TO BUILD THIS IMAGE +# ---------------------------------- +# (1) LINUX.X64_193000_grid_home.zip +# (2 LINUX.X64_193000_db_home.zip +# Download Oracle Grid 19c Release 3 Enterprise Edition for Linux x64 +# Download Oracle Database 19c Release 3 Enterprise Edition for Linux x64 +# from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html +# +# HOW TO BUILD THIS IMAGE +# ----------------------- +# Run: +# $ docker build -t oracle/database:19.3.0-rac . + + +ARG BASE_OL_IMAGE=oraclelinux:8 +ARG SLIMMING=false +# Pull base image +# --------------- +# hadolint ignore=DL3006,DL3025 +FROM $BASE_OL_IMAGE AS base +ARG SLIMMING=false +ARG VERSION +# Labels +# ------ +LABEL "provider"="Oracle" \ + "issues"="https://github.com/oracle/docker-images/issues" \ + "volume.setup.location1"="/opt/scripts" \ + "volume.startup.location1"="/opt/scripts/startup" \ + "port.listener"="1521" \ + "port.oemexpress"="5500" + +# Argument to control removal of components not needed after db software installation +ARG INSTALL_FILE_1="LINUX.X64_193000_grid_home.zip" +ARG INSTALL_FILE_2="LINUX.X64_193000_db_home.zip" +ARG DB_EDITION="EE" +ARG USER="root" +ARG WORKDIR="/rac-work-dir" +ARG IGNORE_PREREQ=false + +# Environment variables required for this build (do NOT change) +# ------------------------------------------------------------- +# Linux Env Variable +# hadolint ignore=DL3044 +ENV SETUP_LINUX_FILE="setupLinuxEnv.sh" \ + INSTALL_DIR=/opt/scripts \ +# Grid Env variables + GRID_INSTALL_RSP="gridsetup_19c.rsp" \ + GRID_SW_INSTALL_RSP="grid_sw_install_19c.rsp" \ + GRID_SETUP_FILE="setupGrid.sh" \ + INITSH="initsh" \ + WORKDIR=$WORKDIR \ + FIXUP_PREQ_FILE="fixupPreq.sh" \ + INSTALL_GRID_BINARIES_FILE="installGridBinaries.sh" \ + INSTALL_GRID_PATCH="applyGridPatch.sh" \ + INVENTORY=/u01/app/oraInventory \ + INSTALL_FILE_1=$INSTALL_FILE_1 \ + INSTALL_FILE_2=$INSTALL_FILE_2 \ + DB_EDITION=$DB_EDITION \ + CONFIGGRID="configGrid.sh" \ + ADDNODE="AddNode.sh" \ + DELNODE="DelNode.sh" \ + ADDNODE_RSP="grid_addnode.rsp" \ + SETUPSSH="setupSSH.expect" \ + DOCKERORACLEINIT="dockeroracleinit" \ + GRID_USER_HOME="/home/grid" \ + SETUPGRIDENV="setupGridEnv.sh" \ + ASM_DISCOVERY_DIR="/dev" \ + RESET_OS_PASSWORD="resetOSPassword.sh" \ + MULTI_NODE_INSTALL="MultiNodeInstall.py" \ +# RAC DB Env Variables + DB_INSTALL_RSP="db_sw_install_19c.rsp" \ + DBCA_RSP="dbca_19c.rsp" \ + DB_SETUP_FILE="setupDB.sh" \ + PWD_FILE="setPassword.sh" \ + 
RUN_FILE="runOracle.sh" \ + STOP_FILE="stopOracle.sh" \ + ENABLE_RAC_FILE="enableRAC.sh" \ + CHECK_DB_FILE="checkDBStatus.sh" \ + USER_SCRIPTS_FILE="runUserScripts.sh" \ + REMOTE_LISTENER_FILE="remoteListener.sh" \ + INSTALL_DB_BINARIES_FILE="installDBBinaries.sh" \ + GRID_HOME_CLEANUP="GridHomeCleanup.sh" \ + ORACLE_HOME_CLEANUP="OracleHomeCleanup.sh" \ + DB_USER="oracle" \ + GRID_USER="grid" \ + SLIMMING=$SLIMMING \ + container="true" \ + FUNCTIONS="functions.sh" \ + COMMON_SCRIPTS="/common_scripts" \ + CHECK_SPACE_FILE="checkSpace.sh" \ + RESET_FAILED_UNITS="resetFailedUnits.sh" \ + SET_CRONTAB="setCrontab.sh" \ + CRONTAB_ENTRY="crontabEntry" \ + EXPECT="/usr/bin/expect" \ + BIN="/usr/sbin" \ + IGNORE_PREREQ=$IGNORE_PREREQ + +############################################# +# ------------------------------------------- +# Start new stage for Non-Slim Image +# ------------------------------------------- +############################################# + +FROM base AS rac-image-slim-false +ARG SLIMMING +ARG VERSION +# Environment variables required for this build (do NOT change) +# ------------------------------------------------------------- +# Linux Env Variable +ENV GRID_BASE=/u01/app/grid \ + GRID_HOME=/u01/app/19c/grid \ + DB_BASE=/u01/app/oracle \ + DB_HOME=/u01/app/oracle/product/19c/dbhome_1 +# Use second ENV so that variable get substituted +# hadolint ignore=DL3044 +ENV INSTALL_SCRIPTS=$INSTALL_DIR/install \ + PATH=/bin:/usr/bin:/sbin:/usr/sbin \ + SCRIPT_DIR=$INSTALL_DIR/startup \ + RAC_SCRIPTS_DIR="scripts" \ + GRID_PATH=$GRID_HOME/bin:$GRID_HOME/OPatch/:$GRID_HOME/perl/bin:/usr/sbin:/bin:/sbin \ + DB_PATH=$DB_HOME/bin:$DB_HOME/OPatch/:$DB_HOME/perl/bin:/usr/sbin:/bin:/sbin \ + GRID_LD_LIBRARY_PATH=$GRID_HOME/lib:/usr/lib:/lib \ + DB_LD_LIBRARY_PATH=$DB_HOME/lib:/usr/lib:/lib + +# Copy binaries +# ------------- +# COPY Binaries +COPY $VERSION/$SETUP_LINUX_FILE $VERSION/$GRID_SETUP_FILE $VERSION/$DB_SETUP_FILE $VERSION/$CHECK_SPACE_FILE $VERSION/$FIXUP_PREQ_FILE $INSTALL_SCRIPTS/ + +# Setup Scripts +COPY $VERSION/$RUN_FILE $VERSION/$ADDNODE $VERSION/$ADDNODE_RSP $VERSION/$SETUPSSH $VERSION/$FUNCTIONS $VERSION/$CONFIGGRID $VERSION/$GRID_INSTALL_RSP $VERSION/$DBCA_RSP $VERSION/$PWD_FILE $VERSION/$CHECK_DB_FILE $VERSION/$USER_SCRIPTS_FILE $VERSION/$STOP_FILE $VERSION/$CHECK_DB_FILE $VERSION/$REMOTE_LISTENER_FILE $VERSION/$SETUPGRIDENV $VERSION/$DELNODE $VERSION/$INITSH $VERSION/$RESET_OS_PASSWORD $VERSION/$MULTI_NODE_INSTALL $SCRIPT_DIR/ + +COPY $RAC_SCRIPTS_DIR $SCRIPT_DIR/scripts +# hadolint ignore=SC2086 +RUN chmod 755 $INSTALL_SCRIPTS/*.sh && \ + sync && \ + $INSTALL_DIR/install/$CHECK_SPACE_FILE && \ + $INSTALL_DIR/install/$SETUP_LINUX_FILE && \ + $INSTALL_DIR/install/$GRID_SETUP_FILE && \ + $INSTALL_DIR/install/$DB_SETUP_FILE && \ + sync + +############################################# +# ------------------------------------------- +# Start new stage for slim image +# ------------------------------------------- +############################################# +FROM base AS rac-image-slim-true +ARG SLIMMING +ARG VERSION +ENV CV_ASSUME_DISTID=OEL7.8 + +# Environment variables required for this build (do NOT change) +# ------------------------------------------------------------- +# Linux Env Variable +ENV INSTALL_SCRIPTS=$INSTALL_DIR/install \ + PATH=/bin:/usr/bin:/sbin:/usr/sbin \ + SCRIPT_DIR=$INSTALL_DIR/startup \ + RAC_SCRIPTS_DIR="scripts" + +# Copy binaries +# ------------- +# COPY Binaries +COPY $VERSION/$SETUP_LINUX_FILE $VERSION/$GRID_SETUP_FILE $VERSION/$DB_SETUP_FILE 
$VERSION/$CHECK_SPACE_FILE $VERSION/$FIXUP_PREQ_FILE $INSTALL_SCRIPTS/ + +# Setup Scripts +COPY $VERSION/$RUN_FILE $VERSION/$SETUPSSH $VERSION/$USER_SCRIPTS_FILE $VERSION/$STOP_FILE $VERSION/$CHECK_DB_FILE $VERSION/$REMOTE_LISTENER_FILE $VERSION/$INITSH $VERSION/$RESET_OS_PASSWORD $SCRIPT_DIR/ + +COPY $RAC_SCRIPTS_DIR $SCRIPT_DIR/scripts +# hadolint ignore=SC2086 +RUN chmod 755 $INSTALL_SCRIPTS/*.sh && \ + sync && \ + $INSTALL_DIR/install/$CHECK_SPACE_FILE && \ + $INSTALL_DIR/install/$SETUP_LINUX_FILE && \ + $INSTALL_DIR/install/$GRID_SETUP_FILE && \ + $INSTALL_DIR/install/$DB_SETUP_FILE && \ + sync + + +############################################# +# ------------------------------------------- +# Start new stage for installing the grid and DB +# ------------------------------------------- +############################################# +# hadolint ignore=DL3006 +FROM rac-image-slim-${SLIMMING} AS builder +ARG SLIMMING +# hadolint ignore=DL3006 +ARG VERSION +COPY $VERSION/$INSTALL_GRID_BINARIES_FILE $VERSION/$GRID_SW_INSTALL_RSP $VERSION/$DB_SETUP_FILE $VERSION/$DB_INSTALL_RSP $VERSION/$INSTALL_DB_BINARIES_FILE $VERSION/$ENABLE_RAC_FILE $VERSION/$GRID_HOME_CLEANUP $VERSION/$ORACLE_HOME_CLEANUP $VERSION/$INSTALL_FILE_1* $VERSION/$INSTALL_FILE_2* $INSTALL_SCRIPTS/ +# hadolint ignore=SC2086 +RUN chmod 755 $INSTALL_SCRIPTS/*.sh +## Install software if SLIMMING is false +# hadolint ignore=SC2086 +RUN if [ "${SLIMMING}x" != 'truex' ]; then \ + sed -e '/hard *memlock/s/^/#/g' -i /etc/security/limits.d/oracle-database-preinstall-19c.conf && \ + sed -e '/ *nofile /s/^/#/g' -i /etc/security/limits.d/oracle-database-preinstall-19c.conf && \ + su $GRID_USER -c "$INSTALL_DIR/install/$INSTALL_GRID_BINARIES_FILE EE $PATCH_NUMBER" && \ + $INVENTORY/orainstRoot.sh && \ + $GRID_HOME/root.sh && \ + su $DB_USER -c "$INSTALL_DIR/install/$INSTALL_DB_BINARIES_FILE EE" && \ + su $DB_USER -c "$INSTALL_DIR/install/$ENABLE_RAC_FILE" && \ + $INVENTORY/orainstRoot.sh && \ + $DB_HOME/root.sh && \ + su $GRID_USER -c "$INSTALL_SCRIPTS/$GRID_HOME_CLEANUP" && \ + su $DB_USER -c "$INSTALL_SCRIPTS/$ORACLE_HOME_CLEANUP" && \ + :; \ + fi +# hadolint ignore=SC3014 +RUN if [ "${SLIMMING}x" == 'truex' ]; then \ + mkdir /u01 && \ + :; \ + fi +# hadolint ignore=SC2086 +RUN rm -f $INSTALL_DIR/install/* && \ + sync + +############################################# +# ------------------------------------------- +# Start new layer for grid & database runtime +# ------------------------------------------- +############################################# +# hadolint ignore=DL3006 +FROM rac-image-slim-${SLIMMING} AS final +# hadolint ignore=DL3006 +COPY --from=builder /u01 /u01 +# hadolint ignore=SC2086 +RUN if [ "${SLIMMING}x" != 'truex' ]; then \ + $INVENTORY/orainstRoot.sh && \ + $GRID_HOME/root.sh && \ + $DB_HOME/root.sh && \ + chmod 666 $SCRIPT_DIR/*.rsp && \ + :; \ + fi && \ + $INSTALL_DIR/install/$FIXUP_PREQ_FILE && \ + sync && \ + chmod 755 $SCRIPT_DIR/*.sh && \ + chmod 755 $SCRIPT_DIR/scripts/*.py && \ + chmod 755 $SCRIPT_DIR/scripts/cmdExec && \ + chmod 755 $SCRIPT_DIR/scripts/*.expect && \ + echo "nohup $SCRIPT_DIR/runOracle.sh &" >> /etc/rc.local && \ + rm -f /etc/rc.d/init.d/oracle-database-preinstall-19c-firstboot && \ + chmod +x /etc/rc.d/rc.local && \ + cp $SCRIPT_DIR/$INITSH /usr/bin/$INITSH && \ + setcap 'cap_net_admin,cap_net_raw+ep' /usr/bin/ping && \ + chmod 755 /usr/bin/$INITSH && \ + rm -f /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf && \ + rm -f /etc/sysctl.d/99-sysctl.conf && \ + rm -f 
$INSTALL_DIR/install/* && \ + sync + +USER ${USER} +VOLUME ["/common_scripts"] +WORKDIR $WORKDIR + +HEALTHCHECK --interval=2m --start-period=30m \ + CMD "$SCRIPT_DIR/scripts/main.py --checkracinst=true" >/dev/null || exit 1 + +# Define default command to start Oracle Grid and RAC Database setup. +# hadolint ignore=DL3025 +ENTRYPOINT /usr/bin/$INITSH diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/Dockerfile_orig b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/Dockerfile_orig new file mode 100644 index 0000000000..abb87f678a --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/Dockerfile_orig @@ -0,0 +1,132 @@ +# LICENSE UPL 1.0 +# +# Copyright (c) 1982-2018 Oracle and/or its affiliates. All rights reserved. +# +# ORACLE DOCKERFILES PROJECT +# -------------------------- +# This is the Dockerfile for Oracle Database 19c Release 3 Real Application Clusters +# +# REQUIRED FILES TO BUILD THIS IMAGE +# ---------------------------------- +# (1) LINUX.X64_180000_db_home.zip +# (2) LINUX.X64_180000_grid_home.zip +# Download Oracle Grid 19c Release 3 Enterprise Edition for Linux x64 +# Download Oracle Database 19c Release 3 Enterprise Edition for Linux x64 +# from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html +# +# HOW TO BUILD THIS IMAGE +# ----------------------- +# Run: +# $ docker build -t oracle/database:19.3.0-rac . +# +# Pull base image +# --------------- +FROM oraclelinux:7-slim + +# Maintainer +# ---------- +MAINTAINER Paramdeep Saini + +# Environment variables required for this build (do NOT change) +# ------------------------------------------------------------- +# Linux Env Variable +ENV SETUP_LINUX_FILE="setupLinuxEnv.sh" \ + INSTALL_DIR=/opt/scripts \ +# Grid Env variables + GRID_BASE=/u01/app/grid \ + GRID_HOME=/u01/app/19.3.0/grid \ + INSTALL_FILE_1="grid_home.zip" \ + GRID_INSTALL_RSP="gridsetup_19c.rsp" \ + GRID_SW_INSTALL_RSP="grid_sw_install_19c.rsp" \ + GRID_SETUP_FILE="setupGrid.sh" \ + FIXUP_PREQ_FILE="fixupPreq.sh" \ + INSTALL_GRID_BINARIES_FILE="installGridBinaries.sh" \ + INSTALL_GRID_PATCH="applyGridPatch.sh" \ + INVENTORY=/u01/app/oraInventory \ + CONFIGGRID="configGrid.sh" \ + ADDNODE="AddNode.sh" \ + DELNODE="DelNode.sh" \ + ADDNODE_RSP="grid_addnode.rsp" \ + SETUPSSH="setupSSH.expect" \ + DOCKERORACLEINIT="dockeroracleinit" \ + GRID_USER_HOME="/home/grid" \ + SETUPGRIDENV="setupGridEnv.sh" \ + ASM_DISCOVERY_DIR="/dev" \ + RESET_OS_PASSWORD="resetOSPassword.sh" \ + MULTI_NODE_INSTALL="MultiNodeInstall.py" \ +# RAC DB Env Variables + DB_BASE=/u01/app/oracle \ + DB_HOME=/u01/app/oracle/product/19.3.0/dbhome_1 \ + INSTALL_FILE_2="db_home.zip" \ + DB_INSTALL_RSP="db_sw_install_19c.rsp" \ + DBCA_RSP="dbca_19c.rsp" \ + DB_SETUP_FILE="setupDB.sh" \ + PWD_FILE="setPassword.sh" \ + RUN_FILE="runOracle.sh" \ + STOP_FILE="stopOracle.sh" \ + ENABLE_RAC_FILE="enableRAC.sh" \ + CHECK_DB_FILE="checkDBStatus.sh" \ + USER_SCRIPTS_FILE="runUserScripts.sh" \ + REMOTE_LISTENER_FILE="remoteListener.sh" \ + INSTALL_DB_BINARIES_FILE="installDBBinaries.sh" \ +# COMMON ENV Variable + FUNCTIONS="functions.sh" \ + COMMON_SCRIPTS="/common_scripts" \ + CHECK_SPACE_FILE="checkSpace.sh" \ + EXPECT="/usr/bin/expect" \ + BIN="/usr/sbin" \ + container="true" +# Use second ENV so that variable get substituted +ENV INSTALL_SCRIPTS=$INSTALL_DIR/install \ + PATH=/bin:/usr/bin:/sbin:/usr/sbin:$PATH \ + SCRIPT_DIR=$INSTALL_DIR/startup \ + 
GRID_PATH=$GRID_HOME/bin:$GRID_HOME/OPatch/:/usr/sbin:$PATH \ + DB_PATH=$DB_HOME/bin:$DB_HOME/OPatch/:/usr/sbin:$PATH \ + GRID_LD_LIBRARY_PATH=$GRID_HOME/lib:/usr/lib:/lib \ + DB_LD_LIBRARY_PATH=$DB_HOME/lib:/usr/lib:/lib + +# Copy binaries +# ------------- +# COPY Binaries +COPY $GRID_SW_INSTALL_RSP $INSTALL_GRID_PATCH $SETUP_LINUX_FILE $GRID_SETUP_FILE $INSTALL_GRID_BINARIES_FILE $FIXUP_PREQ_FILE $DB_SETUP_FILE $CHECK_SPACE_FILE $DB_INSTALL_RSP $INSTALL_DB_BINARIES_FILE $ENABLE_RAC_FILE $INSTALL_FILE_1 $INSTALL_FILE_2 $INSTALL_SCRIPTS/ + +# Setup Scripts +COPY $RUN_FILE $ADDNODE $ADDNODE_RSP $SETUPSSH $FUNCTIONS $CONFIGGRID $GRID_INSTALL_RSP $DBCA_RSP $PWD_FILE $CHECK_DB_FILE $USER_SCRIPTS_FILE $STOP_FILE $CHECK_DB_FILE $REMOTE_LISTENER_FILE $SETUPGRIDENV $DELNODE $RESET_OS_PASSWORD $MULTI_NODE_INSTALL $SCRIPT_DIR/ + +RUN chmod 755 $INSTALL_SCRIPTS/*.sh && \ + sync && \ + $INSTALL_DIR/install/$CHECK_SPACE_FILE && \ + $INSTALL_DIR/install/$SETUP_LINUX_FILE && \ + $INSTALL_DIR/install/$GRID_SETUP_FILE && \ + $INSTALL_DIR/install/$DB_SETUP_FILE && \ + sed -e '/hard *memlock/s/^/#/g' -i /etc/security/limits.d/oracle-database-preinstall-18c.conf && \ + su grid -c "$INSTALL_DIR/install/$INSTALL_GRID_BINARIES_FILE EE $PATCH_NUMBER" && \ + $INVENTORY/orainstRoot.sh && \ + $GRID_HOME/root.sh && \ + su oracle -c "$INSTALL_DIR/install/$INSTALL_DB_BINARIES_FILE EE" && \ + su oracle -c "$INSTALL_DIR/install/$ENABLE_RAC_FILE" && \ + $INVENTORY/orainstRoot.sh && \ + $DB_HOME/root.sh && \ + $INSTALL_DIR/install/$FIXUP_PREQ_FILE && \ + rm -rf $INSTALL_DIR/install && \ + rm -rf $INSTALL_DIR/install && \ + sync && \ + chmod 755 $SCRIPT_DIR/*.sh && \ + chmod 755 $SCRIPT_DIR/*.expect && \ + chmod 666 $SCRIPT_DIR/*.rsp && \ + chown root:oinstall $GRID_HOME/bin/$DOCKERORACLEINIT && \ + chmod 4755 $GRID_HOME/bin/$DOCKERORACLEINIT && \ + echo "nohup $SCRIPT_DIR/runOracle.sh &" >> /etc/rc.local && \ + rm -f /etc/rc.d/init.d/oracle-database-preinstall-18c-firstboot && \ + ln -s $GRID_HOME/bin/$DOCKERORACLEINIT /usr/sbin/oracleinit && \ + chmod +x /etc/rc.d/rc.local && \ + sed -i 's/#X11UseLocalhost.*/X11UseLocalhost no/' /etc/ssh/sshd_config && \ + sync + +USER grid +WORKDIR /home/grid +VOLUME ["/common_scripts"] + +# Define default command to start Oracle Grid and RAC Database setup. + +CMD ["/usr/sbin/oracleinit"] diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/GridHomeCleanup.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/GridHomeCleanup.sh new file mode 100755 index 0000000000..434c40db42 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/GridHomeCleanup.sh @@ -0,0 +1,59 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2019,2021 Oracle and/or its affiliates. +# +# Since: January, 2019 +# Author: paramdeep.saini@oracle.com +# Description: Cleanup the $GRID_HOME and ORACLE_BASE after Grid confguration in the image +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. 
+# + +# Image Cleanup Script +# shellcheck disable=SC1090 +source /home/"${GRID_USER}"/.bashrc +# shellcheck disable=SC2034 +ORACLE_HOME=${GRID_HOME} + +rm -rf /u01/app/grid/* +rm -rf "$GRID_HOME"/log +rm -rf "$GRID_HOME"/logs +rm -rf "$GRID_HOME"/crs/init +rm -rf "$GRID_HOME"/crs/install/rhpdata +rm -rf "$GRID_HOME"/crs/log +rm -rf "$GRID_HOME"/racg/dump +rm -rf "$GRID_HOME"/srvm/log +rm -rf "$GRID_HOME"/cv/log +rm -rf "$GRID_HOME"/cdata +rm -rf "$GRID_HOME"/bin/core* +rm -rf "$GRID_HOME"/bin/diagsnap.pl +rm -rf "$GRID_HOME"/cfgtoollogs/* +rm -rf "$GRID_HOME"/network/admin/listener.ora +rm -rf "$GRID_HOME"/crf +rm -rf "$GRID_HOME"/ologgerd/init +rm -rf "$GRID_HOME"/osysmond/init +rm -rf "$GRID_HOME"/ohasd/init +rm -rf "$GRID_HOME"/ctss/init +rm -rf "$GRID_HOME"/dbs/.*.dat +rm -rf "$GRID_HOME"/oc4j/j2ee/home/log +rm -rf "$GRID_HOME"/inventory/Scripts/ext/bin/log +rm -rf "$GRID_HOME"/inventory/backup/* +rm -rf "$GRID_HOME"/mdns/init +rm -rf "$GRID_HOME"/gnsd/init +rm -rf "$GRID_HOME"/evm/init +rm -rf "$GRID_HOME"/gipc/init +rm -rf "$GRID_HOME"/gpnp/gpnp_bcp.* +rm -rf "$GRID_HOME"/gpnp/init +rm -rf "$GRID_HOME"/auth +rm -rf "$GRID_HOME"/tfa +rm -rf "$GRID_HOME"/suptools/tfa/release/diag +rm -rf "$GRID_HOME"/rdbms/audit/* +rm -rf "$GRID_HOME"/rdbms/log/* +rm -rf "$GRID_HOME"/network/log/* +rm -rf "$GRID_HOME"/inventory/Scripts/comps.xml.* +rm -rf "$GRID_HOME"/inventory/Scripts/oraclehomeproperties.xml.* +rm -rf "$GRID_HOME"/inventory/Scripts/oraInst.loc.* +rm -rf "$GRID_HOME"/inventory/Scripts/inventory.xml.* +rm -rf "$GRID_HOME"/log_file_client.log +rm -rf "$INVENTORY"/logs/* diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/MultiNodeInstall.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/MultiNodeInstall.py new file mode 100644 index 0000000000..45144061a4 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/MultiNodeInstall.py @@ -0,0 +1,324 @@ +#!/usr/bin/python +#!/usr/bin/env python + +########################################################################################################### + +# LICENSE UPL 1.0 +# Copyright (c) 2019,2021, Oracle and/or its affiliates. +# Since: January, 2019 +# NAME +# buildImage.py - +# +# DESCRIPTION +# +# +# NOTES + + +# Global Variables +Period = '.' 
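+# Summary: this Python 2 script is invoked with --setuptype (installrac|addnode|delnode),
+# --nodeparams and --comparams JSON arguments; it adds the cluster node entries to
+# /etc/hosts, exports the derived variables to /etc/rac_env_vars, and then runs the
+# matching startup script under /opt/scripts/startup via sudo.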
+ + +# Import standard python libraries +import subprocess +import sys +import time +import datetime +import os +import commands +import getopt +import shlex +import json +import logging +import socket + + +etchostfile="/etc/hosts" +racenvfile="/etc/rac_env_vars" +domain="none" + +def Usage(): + pass + +def Update_Envfile(common_params): + global racenvfile + global domain + filedata1 = None + f1 = open(racenvfile, 'r') + filedata1 = f1.read() + f1.close + + for keys in common_params.keys(): + if keys == 'domain': + domain = common_params[keys] + + env_var_str = "export " + keys.upper() + "=" + common_params[keys] + Redirect_To_File("Env vars for RAC Env set to " + env_var_str, "INFO") + filedata1 = filedata1 + "\n" + env_var_str + + Write_To_File(filedata1,racenvfile) + return "Env file updated sucesfully" + + +def Update_Hostfile(node_list): + counter=0 + global etchostfile + global domain + filedata = None + filedata1 = None + f = open(etchostfile, 'r') + filedata = f.read() + f.close + + global racenvfile + filedata1 = None + f1 = open(racenvfile, 'r') + filedata1 = f1.read() + f1.close + host_name=socket.gethostname() + + if domain == 'none': + fqdn_hostname=socket.getfqdn() + domain=fqdn_hostname.split(".")[1] + if not host_name: + Redirect_To_File("Unable to get the container host name! Exiting..", "INFO") + else: + Redirect_To_File("Container Hostname and Domain name : " + host_name + " " + domain, "INFO") + +# Replace and add the target string + for dict_list in node_list: + print dict_list + if "public_hostname" in dict_list.keys(): + pubhost = dict_list['public_hostname'] + if host_name == pubhost: + Redirect_To_File("PUBLIC Hostname set to" + pubhost, "INFO") + PUBLIC_HOSTNAME=pubhost + if counter == 0: + CRS_NODES = pubhost + CRS_CONFIG_NODES = pubhost + counter = counter + 1 + else: + CRS_NODES = CRS_NODES + "," + pubhost + CRS_CONFIG_NODES = CRS_CONFIG_NODES + "," + pubhost + counter = counter + 1 + else: + return "Error: Did not find the key public_hostname" + if "public_ip" in dict_list.keys(): + pubip = dict_list['public_ip'] + if host_name == pubhost: + Redirect_To_File("PUBLIC IP set to" + pubip, "INFO") + PUBLIC_IP=pubip + else: + return "Error: Did not find the key public_ip" + if "private_ip" in dict_list.keys(): + privip = dict_list['private_ip'] + if host_name == pubhost: + Redirect_To_File("Private IP set to" + privip, "INFO") + PRIV_IP=privip + else: + return "Error: Did not find the key private_ip" + if "private_hostname" in dict_list.keys(): + privhost = dict_list['private_hostname'] + if host_name == pubhost: + Redirect_To_File("Private HOSTNAME set to" + privhost, "INFO") + PRIV_HOSTNAME=privhost + else: + return "Error: Did not find the key private_hostname" + if "vip_hostname" in dict_list.keys(): + viphost = dict_list['vip_hostname'] + CRS_CONFIG_NODES = CRS_CONFIG_NODES + ":" + viphost + ":" + "HUB" + if host_name == pubhost: + Redirect_To_File("VIP HOSTNAME set to" + viphost, "INFO") + VIP_HOSTNAME=viphost + else: + return "Error: Did not find the key vip_hostname" + if "vip_ip" in dict_list.keys(): + vipip = dict_list['vip_ip'] + if host_name == pubhost: + Redirect_To_File("NODE VIP set to" + vipip, "INFO") + NODE_VIP=vipip + else: + return "Error: Did not find the key vip_ip" + + delete_entry = [pubhost, privhost, viphost, pubip, privip, vipip] + for hostentry in delete_entry: + print "Processing " + hostentry + cmd=cmd= '""' + "sed " + "'" + "/" + hostentry + "/d" + "'" + " <<<" + '"' + filedata + '"' + '""' + 
output,retcode=Execute_Single_Command(cmd,'None','') + filedata=output + print "New Contents of Host file " + filedata + + # Removing Empty Lines + cmd=cmd= '""' + "sed " + "'" + "/^$/d" + "'" + " <<<" + '"' + filedata + '"' + '""' + output,retcode=Execute_Single_Command(cmd,'None','') + filedata=output + print "New Contents of Host file " + filedata + + delete_entry [:] + + if pubhost not in filedata: + if pubip not in filedata: + hoststring='%s %s %s' %(pubip, pubhost + "." + domain, pubhost) + Redirect_To_File(hoststring, "INFO") + filedata = filedata + '\n' + hoststring + + if privhost not in filedata: + if privip not in filedata: + hoststring='%s %s %s' %(privip, privhost + "." + domain, privhost) + Redirect_To_File(hoststring, "INFO") + filedata = filedata + '\n' + hoststring + + if viphost not in filedata: + if vipip not in filedata: + hoststring='%s %s %s' %(vipip, viphost + "." + domain, viphost) + Redirect_To_File(hoststring, "INFO") + filedata = filedata + '\n' + hoststring + print filedata + + Write_To_File(filedata,etchostfile) + if CRS_NODES: + Redirect_To_File("Cluster Nodes set to " + CRS_NODES, "INFO") + filedata1 = filedata1 + '\n' + 'export CRS_NODES=' + CRS_NODES + if CRS_CONFIG_NODES: + Redirect_To_File("CRS CONFIG Variable set to " + CRS_CONFIG_NODES, "INFO") + filedata1 = filedata1 + '\n' + 'export CRS_CONFIG_NODES=' + CRS_CONFIG_NODES + if NODE_VIP: + filedata1 = filedata1 + '\n' + 'export NODE_VIP=' + NODE_VIP + if PRIV_IP: + filedata1 = filedata1 + '\n' + 'export PRIV_IP=' + PRIV_IP + if PUBLIC_HOSTNAME: + filedata1 = filedata1 + '\n' + 'export PUBLIC_HOSTNAME=' + PUBLIC_HOSTNAME + if PUBLIC_IP: + filedata1 = filedata1 + '\n' + 'export PUBLIC_IP=' + PUBLIC_IP + if VIP_HOSTNAME: + filedata1 = filedata1 + '\n' + 'export VIP_HOSTNAME=' + VIP_HOSTNAME + if PRIV_HOSTNAME: + filedata1 = filedata1 + '\n' + 'export PRIV_HOSTNAME=' + PRIV_HOSTNAME + + Write_To_File(filedata1,racenvfile) + return "Host and Env file updated sucesfully" + + +def Write_To_File(text,filename): + f = open(filename,'w') + f.write(text) + f.close() + +def Setup_Operation(op_type): + if op_type == 'installrac': + cmd="sudo /opt/scripts/startup/runOracle.sh" + + if op_type == 'addnode': + cmd="sudo /opt/scripts/startup/runOracle.sh" + + if op_type == 'delnode': + cmd="sudo /opt/scripts/startup/DelNode.sh" + + output,retcode=Execute_Single_Command(cmd,'None','') + if retcode != 0: + return "Error occuurred in setting up env" + else: + return "setup operation completed sucessfully!" + + +def Execute_Single_Command(cmd,env,dir): + try: + if not dir: + dir=os.getcwd() + print shlex.split(cmd) + out = subprocess.Popen(cmd, shell=True, cwd=dir, stdout=subprocess.PIPE) + output, retcode = out.communicate()[0],out.returncode + return output,retcode + except: + Redirect_To_File("Error Occurred in Execute_Single_Command block! 
+        sys.exit(2)
+
+# Log a message to the container's stdout (/proc/1/fd/1) through the logging module
+def Redirect_To_File(text,level):
+    original = sys.stdout
+    sys.stdout = open('/proc/1/fd/1', 'w')
+    root = logging.getLogger()
+    if not root.handlers:
+        root.setLevel(logging.INFO)
+        ch = logging.StreamHandler(sys.stdout)
+        ch.setLevel(logging.INFO)
+        formatter = logging.Formatter('%(asctime)s :%(message)s', "%Y-%m-%d %T %Z")
+        ch.setFormatter(formatter)
+        root.addHandler(ch)
+    message = os.path.basename(__file__) + " : " + text
+    root.info(' %s ' % message )
+    sys.stdout = original
+
+
+#BEGIN : To check whether valid arguments are passed for the container creation or not
+def main(argv):
+    # Initialize the expected options so missing arguments are reported cleanly
+    setuptype=''
+    nodeparams=''
+    comparams=''
+    Redirect_To_File("Passed Parameters " + str(sys.argv[1:]), "INFO")
+    try:
+        opts, args = getopt.getopt(sys.argv[1:], '', ['setuptype=','nodeparams=','comparams=','help'])
+
+    except getopt.GetoptError:
+        Usage()
+        sys.exit(2)
+    #Redirect_To_File("Option Arguments are : " + opts , "INFO")
+    for opt, arg in opts:
+        if opt in ('--help'):
+            Usage()
+            sys.exit(2)
+        elif opt in ('--nodeparams'):
+            nodeparams = arg
+        elif opt in ('--comparams'):
+            comparams = arg
+        elif opt in ('--setuptype'):
+            setuptype = arg
+        else:
+            Usage()
+            sys.exit(2)
+
+    if setuptype == 'installrac':
+        Redirect_To_File("setup type parameter is set to installrac", "INFO")
+    elif setuptype == 'addnode':
+        Redirect_To_File("setup type parameter is set to addnode", "INFO")
+    elif setuptype == 'delnode':
+        Redirect_To_File("setup type parameter is set to delnode", "INFO")
+    else:
+        Usage()
+        sys.exit(2)
+    if not nodeparams:
+        Redirect_To_File("Node Parameters for the Cluster not specified", "ERROR")
+        sys.exit(2)
+    if not comparams:
+        Redirect_To_File("Common Parameter for the Cluster not specified", "ERROR")
+        sys.exit(2)
+
+
+    Redirect_To_File("NodeParams set to " + nodeparams , "INFO" )
+    Redirect_To_File("Comparams set to " + comparams , "INFO" )
+
+
+    comparams = comparams.replace('\\"','"')
+    Redirect_To_File("Comparams set to " + comparams , "INFO" )
+    envfile_status=Update_Envfile(json.loads(comparams))
+    if 'Error' in envfile_status:
+        Redirect_To_File(envfile_status, "ERROR")
+        return sys.exit(2)
+
+    nodeparams = nodeparams.replace('\\"','"')
+    Redirect_To_File("NodeParams set to " + nodeparams , "INFO" )
+    hostfile_status=Update_Hostfile(json.loads(nodeparams))
+    if 'Error' in hostfile_status:
+        Redirect_To_File(hostfile_status, "ERROR")
+        return sys.exit(2)
+
+    Redirect_To_File("Executing operation " + setuptype, "INFO")
+    setup_op=Setup_Operation(setuptype)
+    if 'Error' in setup_op:
+        Redirect_To_File(setup_op, "ERROR")
+        return sys.exit(2)
+
+    sys.exit(0)
+
+if __name__ == '__main__':
+    main(sys.argv)
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/OracleHomeCleanup.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/OracleHomeCleanup.sh
new file mode 100755
index 0000000000..f486495639
--- /dev/null
+++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/OracleHomeCleanup.sh
@@ -0,0 +1,35 @@
+#!/bin/bash
+# LICENSE UPL 1.0
+#
+# Copyright (c) 2019,2021 Oracle and/or its affiliates.
+#
+# Since: January, 2019
+# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com
+# Description: Clean up the $ORACLE_HOME and ORACLE_BASE after Grid configuration in the image
+#
+# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
+# + +# Image Cleanup Script +# shellcheck disable=SC1090 +source /home/"${DB_USER}"/.bashrc +ORACLE_HOME=${DB_HOME} + +rm -rf "$ORACLE_HOME"/bin/extjob +rm -rf "$ORACLE_HOME"/PAF +rm -rf "$ORACLE_HOME"/install/oratab +rm -rf "$ORACLE_HOME"/install/make.log +rm -rf "$ORACLE_HOME"/network/admin/listener.ora +rm -rf "$ORACLE_HOME"/network/admin/tnsnames.ora +rm -rf "$ORACLE_HOME"/bin/nmo +rm -rf "$ORACLE_HOME"/bin/nmb +rm -rf "$ORACLE_HOME"/bin/nmhs +rm -rf "$ORACLE_HOME"/log/.* +rm -rf "$ORACLE_HOME"/oc4j/j2ee/oc4j_applications/applications/em/em/images/chartCache/* +rm -rf "$ORACLE_HOME"/rdbms/audit/* +rm -rf "$ORACLE_HOME"/cfgtoollogs/* +rm -rf "$ORACLE_HOME"/inventory/Scripts/comps.xml.* +rm -rf "$ORACLE_HOME"/inventory/Scripts/oraclehomeproperties.xml.* +rm -rf "$ORACLE_HOME"/inventory/Scripts/oraInst.loc.* +rm -rf "$ORACLE_HOME"/inventory/Scripts/inventory.xml.* +rm -rf "$INVENTORY"/logs/* diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/applyGridPatch.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/applyGridPatch.sh new file mode 100755 index 0000000000..af451a6e68 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/applyGridPatch.sh @@ -0,0 +1,43 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Apply Patch for Oracle Grid and Databas. +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +PATCH=$1 + +# Check whether edition has been passed on +if [ "$PATCH" == "" ]; then + echo "ERROR: No Patch has been passed on!" + echo "Please specify the correct PATCH!" + exit 1; +fi; + +# Check whether GRID_BASE is set +if [ "$GRID_BASE" == "" ]; then + echo "ERROR: GRID_BASE has not been set!" + echo "You have to have the GRID_BASE environment variable set to a valid value!" + exit 1; +fi; + +# Check whether GRID_HOME is set +if [ "$GRID_HOME" == "" ]; then + echo "ERROR: GRID_HOME has not been set!" + echo "You have to have the GRID_HOME environment variable set to a valid value!" + exit 1; +fi; + +# Install Oracle binaries +# shellcheck disable=SC2115 +unzip -q "$INSTALL_SCRIPTS"/"$PATCH" -d "$GRID_USER_HOME" && \ +rm -f "$INSTALL_SCRIPTS"/"$GRID_PATCH" && \ +cd "$GRID_USER_HOME"/"$PATCH_NUMBER"/"$PATCH_NUMBER" && \ +"$GRID_HOME"/OPatch/opatch napply -silent -local -oh "$GRID_HOME" -id "$PATCH_NUMBER" && \ +cd "$GRID_USER_HOME" && \ +rm -rf "$GRID_USER_HOME"/"$PATCH_NUMBER" diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/checkSpace.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/checkSpace.sh new file mode 100755 index 0000000000..0480158b95 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/checkSpace.sh @@ -0,0 +1,23 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Checks the available space of the system. +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +REQUIRED_SPACE_GB=35 +AVAILABLE_SPACE_GB=`df -PB 1G / | tail -n 1 | awk '{print $4}'` + +if [ $AVAILABLE_SPACE_GB -lt $REQUIRED_SPACE_GB ]; then + script_name=`basename "$0"` + echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" 
+ echo "$script_name: ERROR - There is not enough space available in the docker container." + echo "$script_name: The container needs at least $REQUIRED_SPACE_GB GB , but only $AVAILABLE_SPACE_GB available." + echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" + exit 1; +fi; diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/db_inst.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/db_inst.rsp new file mode 100644 index 0000000000..90ff555e5d --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/db_inst.rsp @@ -0,0 +1,125 @@ +#################################################################### +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved.## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +## ## +#################################################################### + + +#------------------------------------------------------------------------------- +# Do not change the following system generated value. +#------------------------------------------------------------------------------- +oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v18.0.0 + +#------------------------------------------------------------------------------- +# Specify the installation option. +# It can be one of the following: +# - INSTALL_DB_SWONLY +# - INSTALL_DB_AND_CONFIG +#------------------------------------------------------------------------------- +oracle.install.option=INSTALL_DB_SWONLY + +#------------------------------------------------------------------------------- +# Specify the Unix group to be set for the inventory directory. +#------------------------------------------------------------------------------- +UNIX_GROUP_NAME=oinstall + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION=/u01/app/oraInventory +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Home. +#------------------------------------------------------------------------------- +ORACLE_HOME=/u01/app/oracle/product/18.3.0/dbhome_1 + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE=/u01/app/oracle + +#------------------------------------------------------------------------------- +# Specify the installation edition of the component. +# +# The value should contain only one of these choices. 
+# - EE : Enterprise Edition +# - SE2 : Standard Edition 2 + + +#------------------------------------------------------------------------------- + +oracle.install.db.InstallEdition=EE +############################################################################### +# # +# PRIVILEGED OPERATING SYSTEM GROUPS # +# ------------------------------------------ # +# Provide values for the OS groups to which SYSDBA and SYSOPER privileges # +# needs to be granted. If the install is being performed as a member of the # +# group "dba", then that will be used unless specified otherwise below. # +# # +# The value to be specified for OSDBA and OSOPER group is only for UNIX based # +# Operating System. # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.db.OSDBA_GROUP=dba + +#------------------------------------------------------------------------------ +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +#------------------------------------------------------------------------------ +oracle.install.db.OSOPER_GROUP=oper + +#------------------------------------------------------------------------------ +# The OSBACKUPDBA_GROUP is the OS group which is to be granted SYSBACKUP privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSBACKUPDBA_GROUP=backupdba + +#------------------------------------------------------------------------------ +# The OSDGDBA_GROUP is the OS group which is to be granted SYSDG privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSDGDBA_GROUP=dgdba + +#------------------------------------------------------------------------------ +# The OSKMDBA_GROUP is the OS group which is to be granted SYSKM privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSKMDBA_GROUP=kmdba + +#------------------------------------------------------------------------------ +# The OSRACDBA_GROUP is the OS group which is to be granted SYSRAC privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSRACDBA_GROUP=racdba +#------------------------------------------------------------------------------ +# Specify whether to enable the user to set the password for +# My Oracle Support credentials. The value can be either true or false. +# If left blank it will be assumed to be false. +# +# Example : SECURITY_UPDATES_VIA_MYORACLESUPPORT=true +#------------------------------------------------------------------------------ +SECURITY_UPDATES_VIA_MYORACLESUPPORT=false + +#------------------------------------------------------------------------------ +# Specify whether user doesn't want to configure Security Updates. +# The value for this variable should be true if you don't want to configure +# Security Updates, false otherwise. +# +# The value can be either true or false. If left blank it will be assumed +# to be true. 
+# +# Example : DECLINE_SECURITY_UPDATES=false +#------------------------------------------------------------------------------ +DECLINE_SECURITY_UPDATES=true diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/db_install_19cv1.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/db_install_19cv1.rsp new file mode 100644 index 0000000000..9aa4cd6136 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/db_install_19cv1.rsp @@ -0,0 +1,356 @@ +#################################################################### +## Copyright(c) Oracle Corporation 1998,2019. All rights reserved.## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +## ## +#################################################################### + + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v19.0.0 + +#------------------------------------------------------------------------------- +# Specify the installation option. +# It can be one of the following: +# - INSTALL_DB_SWONLY +# - INSTALL_DB_AND_CONFIG +#------------------------------------------------------------------------------- +oracle.install.option= + +#------------------------------------------------------------------------------- +# Specify the Unix group to be set for the inventory directory. +#------------------------------------------------------------------------------- +UNIX_GROUP_NAME= + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION= +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Home. +#------------------------------------------------------------------------------- +ORACLE_HOME= + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE= + +#------------------------------------------------------------------------------- +# Specify the installation edition of the component. +# +# The value should contain only one of these choices. +# - EE : Enterprise Edition +# - SE2 : Standard Edition 2 + + +#------------------------------------------------------------------------------- + +oracle.install.db.InstallEdition= +############################################################################### +# # +# PRIVILEGED OPERATING SYSTEM GROUPS # +# ------------------------------------------ # +# Provide values for the OS groups to which SYSDBA and SYSOPER privileges # +# needs to be granted. 
If the install is being performed as a member of the # +# group "dba", then that will be used unless specified otherwise below. # +# # +# The value to be specified for OSDBA and OSOPER group is only for UNIX based # +# Operating System. # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.db.OSDBA_GROUP= + +#------------------------------------------------------------------------------ +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +#------------------------------------------------------------------------------ +oracle.install.db.OSOPER_GROUP= + +#------------------------------------------------------------------------------ +# The OSBACKUPDBA_GROUP is the OS group which is to be granted SYSBACKUP privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSBACKUPDBA_GROUP= + +#------------------------------------------------------------------------------ +# The OSDGDBA_GROUP is the OS group which is to be granted SYSDG privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSDGDBA_GROUP= + +#------------------------------------------------------------------------------ +# The OSKMDBA_GROUP is the OS group which is to be granted SYSKM privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSKMDBA_GROUP= + +#------------------------------------------------------------------------------ +# The OSRACDBA_GROUP is the OS group which is to be granted SYSRAC privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSRACDBA_GROUP= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.db.rootconfig.executeRootScript= + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. +# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.db.rootconfig.configMethod= +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. +# +# Applicable only when SUDO configuration method was chosen. 
+#-------------------------------------------------------------------------------------- +oracle.install.db.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# Applicable only when SUDO configuration method was chosen. +# Note:For Single Instance database installations,the sudo user name must be the username of the user installing the database. +#-------------------------------------------------------------------------------------- +oracle.install.db.rootconfig.sudoUserName= + +############################################################################### +# # +# Grid Options # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# Value is required only if the specified install option is INSTALL_DB_SWONLY +# +# Specify the cluster node names selected during the installation. +# +# Example : oracle.install.db.CLUSTER_NODES=node1,node2 +#------------------------------------------------------------------------------ +oracle.install.db.CLUSTER_NODES= + +############################################################################### +# # +# Database Configuration Options # +# # +############################################################################### + +#------------------------------------------------------------------------------- +# Specify the type of database to create. +# It can be one of the following: +# - GENERAL_PURPOSE +# - DATA_WAREHOUSE +# GENERAL_PURPOSE: A starter database designed for general purpose use or transaction-heavy applications. +# DATA_WAREHOUSE : A starter database optimized for data warehousing applications. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.type= + +#------------------------------------------------------------------------------- +# Specify the Starter Database Global Database Name. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.globalDBName= + +#------------------------------------------------------------------------------- +# Specify the Starter Database SID. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.SID= + +#------------------------------------------------------------------------------- +# Specify whether the database should be configured as a Container database. +# The value can be either "true" or "false". If left blank it will be assumed +# to be "false". +#------------------------------------------------------------------------------- +oracle.install.db.ConfigureAsContainerDB= + +#------------------------------------------------------------------------------- +# Specify the Pluggable Database name for the pluggable database in Container Database. +#------------------------------------------------------------------------------- +oracle.install.db.config.PDBName= + +#------------------------------------------------------------------------------- +# Specify the Starter Database character set. 
+# +# One of the following +# AL32UTF8, WE8ISO8859P15, WE8MSWIN1252, EE8ISO8859P2, +# EE8MSWIN1250, NE8ISO8859P10, NEE8ISO8859P4, BLT8MSWIN1257, +# BLT8ISO8859P13, CL8ISO8859P5, CL8MSWIN1251, AR8ISO8859P6, +# AR8MSWIN1256, EL8ISO8859P7, EL8MSWIN1253, IW8ISO8859P8, +# IW8MSWIN1255, JA16EUC, JA16EUCTILDE, JA16SJIS, JA16SJISTILDE, +# KO16MSWIN949, ZHS16GBK, TH8TISASCII, ZHT32EUC, ZHT16MSWIN950, +# ZHT16HKSCS, WE8ISO8859P9, TR8MSWIN1254, VN8MSWIN1258 +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.characterSet= + +#------------------------------------------------------------------------------ +# This variable should be set to true if Automatic Memory Management +# in Database is desired. +# If Automatic Memory Management is not desired, and memory allocation +# is to be done manually, then set it to false. +#------------------------------------------------------------------------------ +oracle.install.db.config.starterdb.memoryOption= + +#------------------------------------------------------------------------------- +# Specify the total memory allocation for the database. Value(in MB) should be +# at least 256 MB, and should not exceed the total physical memory available +# on the system. +# Example: oracle.install.db.config.starterdb.memoryLimit=512 +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.memoryLimit= + +#------------------------------------------------------------------------------- +# This variable controls whether to load Example Schemas onto +# the starter database or not. +# The value can be either "true" or "false". If left blank it will be assumed +# to be "false". +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.installExampleSchemas= + +############################################################################### +# # +# Passwords can be supplied for the following four schemas in the # +# starter database: # +# SYS # +# SYSTEM # +# DBSNMP (used by Enterprise Manager) # +# # +# Same password can be used for all accounts (not recommended) # +# or different passwords for each account can be provided (recommended) # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# This variable holds the password that is to be used for all schemas in the +# starter database. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.ALL= + +#------------------------------------------------------------------------------- +# Specify the SYS password for the starter database. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.SYS= + +#------------------------------------------------------------------------------- +# Specify the SYSTEM password for the starter database. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.SYSTEM= + +#------------------------------------------------------------------------------- +# Specify the DBSNMP password for the starter database. 
+# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.DBSNMP= + +#------------------------------------------------------------------------------- +# Specify the PDBADMIN password required for creation of Pluggable Database in the Container Database. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.PDBADMIN= + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing the database. +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your database with Enterprise Manager Cloud Control along with Database Express. +# 2. DEFAULT -If you want to manage your database using the default Database Express option. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.managementOption= + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.omsPort= + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.emAdminPassword= + +############################################################################### +# # +# SPECIFY RECOVERY OPTIONS # +# ------------------------------------ # +# Recovery options for the database can be mentioned using the entries below # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# This variable is to be set to false if database recovery is not required. Else +# this can be set to true. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.enableRecovery= + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for the database. 
+# It can be one of the following: +# - FILE_SYSTEM_STORAGE +# - ASM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.storageType= + +#------------------------------------------------------------------------------- +# Specify the database file location which is a directory for datafiles, control +# files, redo logs. +# +# Applicable only when oracle.install.db.config.starterdb.storage=FILE_SYSTEM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.fileSystemStorage.dataLocation= + +#------------------------------------------------------------------------------- +# Specify the recovery location. +# +# Applicable only when oracle.install.db.config.starterdb.storage=FILE_SYSTEM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation= + +#------------------------------------------------------------------------------- +# Specify the existing ASM disk groups to be used for storage. +# +# Applicable only when oracle.install.db.config.starterdb.storageType=ASM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.asm.diskGroup= + +#------------------------------------------------------------------------------- +# Specify the password for ASMSNMP user of the ASM instance. +# +# Applicable only when oracle.install.db.config.starterdb.storage=ASM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.asm.ASMSNMPPassword= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/db_sw_install_19c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/db_sw_install_19c.rsp new file mode 100644 index 0000000000..25dc006b8e --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/db_sw_install_19c.rsp @@ -0,0 +1,45 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v19.0.0 +oracle.install.option=INSTALL_DB_SWONLY +UNIX_GROUP_NAME=oinstall +INVENTORY_LOCATION=/u01/app/oraInventory +ORACLE_HOME=/u01/app/oracle/product/19c/dbhome_1 +ORACLE_BASE=/u01/app/oracle +oracle.install.db.InstallEdition=EE +oracle.install.db.OSDBA_GROUP=dba +oracle.install.db.OSOPER_GROUP=oper +oracle.install.db.OSBACKUPDBA_GROUP=backupdba +oracle.install.db.OSDGDBA_GROUP=dgdba +oracle.install.db.OSKMDBA_GROUP=kmdba +oracle.install.db.OSRACDBA_GROUP=racdba +oracle.install.db.rootconfig.executeRootScript= +oracle.install.db.rootconfig.configMethod= +oracle.install.db.rootconfig.sudoPath= +oracle.install.db.rootconfig.sudoUserName= +oracle.install.db.CLUSTER_NODES= +oracle.install.db.config.starterdb.type= +oracle.install.db.config.starterdb.globalDBName= +oracle.install.db.config.starterdb.SID= +oracle.install.db.ConfigureAsContainerDB= +oracle.install.db.config.PDBName= +oracle.install.db.config.starterdb.characterSet= +oracle.install.db.config.starterdb.memoryOption= +oracle.install.db.config.starterdb.memoryLimit= +oracle.install.db.config.starterdb.installExampleSchemas= +oracle.install.db.config.starterdb.password.ALL= +oracle.install.db.config.starterdb.password.SYS= +oracle.install.db.config.starterdb.password.SYSTEM= +oracle.install.db.config.starterdb.password.DBSNMP= +oracle.install.db.config.starterdb.password.PDBADMIN= 
+oracle.install.db.config.starterdb.managementOption= +oracle.install.db.config.starterdb.omsHost= +oracle.install.db.config.starterdb.omsPort= +oracle.install.db.config.starterdb.emAdminUser= +oracle.install.db.config.starterdb.emAdminPassword= +oracle.install.db.config.starterdb.enableRecovery= +oracle.install.db.config.starterdb.storageType= +oracle.install.db.config.starterdb.fileSystemStorage.dataLocation= +oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation= +oracle.install.db.config.asm.diskGroup= +oracle.install.db.config.asm.ASMSNMPPassword= +SECURITY_UPDATES_VIA_MYORACLESUPPORT=false +DECLINE_SECURITY_UPDATES=true diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/dbca.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/dbca.rsp new file mode 100644 index 0000000000..92c74e1eb4 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/dbca.rsp @@ -0,0 +1,605 @@ +############################################################################## +## ## +## DBCA response file ## +## ------------------ ## +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +############################################################################## +#------------------------------------------------------------------------------- +# Do not change the following system generated value. +#------------------------------------------------------------------------------- +responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v18.0.0 + +#----------------------------------------------------------------------------- +# Name : gdbName +# Datatype : String +# Description : Global database name of the database +# Valid values : . 
- when database domain isn't NULL +# - when database domain is NULL +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +gdbName=###ORACLE_SID### + +#----------------------------------------------------------------------------- +# Name : sid +# Datatype : String +# Description : System identifier (SID) of the database +# Valid values : Check Oracle12c Administrator's Guide +# Default value : specified in GDBNAME +# Mandatory : No +#----------------------------------------------------------------------------- +sid=###ORACLE_SID### + +#----------------------------------------------------------------------------- +# Name : databaseConfigType +# Datatype : String +# Description : database conf type as Single Instance, Real Application Cluster or Real Application Cluster One Nodes database +# Valid values : SI\RAC\RACONENODE +# Default value : SI +# Mandatory : No +#----------------------------------------------------------------------------- +databaseConfigType=RAC + +#----------------------------------------------------------------------------- +# Name : RACOneNodeServiceName +# Datatype : String +# Description : Service is required by application to connect to RAC One +# Node Database +# Valid values : Service Name +# Default value : None +# Mandatory : No [required in case DATABASECONFTYPE is set to RACONENODE ] +#----------------------------------------------------------------------------- +RACOneNodeServiceName= + +#----------------------------------------------------------------------------- +# Name : policyManaged +# Datatype : Boolean +# Description : Set to true if Database is policy managed and +# set to false if Database is admin managed +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +policyManaged=false + + +#----------------------------------------------------------------------------- +# Name : createServerPool +# Datatype : Boolean +# Description : Set to true if new server pool need to be created for database +# if this option is specified then the newly created database +# will use this newly created serverpool. +# Multiple serverpoolname can not be specified for database +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +createServerPool=false + +#----------------------------------------------------------------------------- +# Name : serverPoolName +# Datatype : String +# Description : Only one serverpool name need to be specified +# if Create Server Pool option is specified. 
+# Comma-separated list of Serverpool names if db need to use +# multiple Server pool +# Valid values : ServerPool name + +# Default value : None +# Mandatory : No [required in case of RAC service centric database] +#----------------------------------------------------------------------------- +serverPoolName= + +#----------------------------------------------------------------------------- +# Name : cardinality +# Datatype : Number +# Description : Specify Cardinality for create server pool operation + +# Valid values : any positive Integer value +# Default value : Number of qualified nodes on cluster +# Mandatory : No [Required when a new serverpool need to be created] +#----------------------------------------------------------------------------- +cardinality= + +#----------------------------------------------------------------------------- +# Name : force +# Datatype : Boolean +# Description : Set to true if new server pool need to be created by force +# if this option is specified then the newly created serverpool +# will be assigned server even if no free servers are available. +# This may affect already running database. +# This flag can be specified for Admin managed as well as policy managed db. +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +force=false + +#----------------------------------------------------------------------------- +# Name : pqPoolName +# Datatype : String +# Description : Only one serverpool name needs to be specified +# if create server pool option is specified. +# Comma-separated list of serverpool names if use +# server pool. This is required to +# create Parallel Query (PQ) database. Applicable to Big Cluster +# Valid values : Parallel Query (PQ) pool name +# Default value : None +# Mandatory : No [required in case of RAC service centric database] +#----------------------------------------------------------------------------- +pqPoolName= + +#----------------------------------------------------------------------------- +# Name : pqCardinality +# Datatype : Number +# Description : Specify Cardinality for create server pool operation. 
+# Applicable to Big Cluster +# Valid values : any positive Integer value +# Default value : Number of qualified nodes on cluster +# Mandatory : No [Required when a new serverpool need to be created] +#----------------------------------------------------------------------------- +pqCardinality= + +#----------------------------------------------------------------------------- +# Name : createAsContainerDatabase +# Datatype : boolean +# Description : flag to create database as container database +# Valid values : Check Oracle12c Administrator's Guide +# Default value : false +# Mandatory : No +#----------------------------------------------------------------------------- +createAsContainerDatabase=###CONTAINER_DB_FLAG### + +#----------------------------------------------------------------------------- +# Name : numberOfPDBs +# Datatype : Number +# Description : Specify the number of pdb to be created +# Valid values : 0 to 252 +# Default value : 0 +# Mandatory : No +#----------------------------------------------------------------------------- +numberOfPDBs=1 + +#----------------------------------------------------------------------------- +# Name : pdbName +# Datatype : String +# Description : Specify the pdbname/pdbanme prefix if one or more pdb need to be created +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +pdbName=###ORACLE_PDB### + +#----------------------------------------------------------------------------- +# Name : useLocalUndoForPDBs +# Datatype : boolean +# Description : Flag to create local undo tablespace for all PDB's. +# Valid values : TRUE\FALSE +# Default value : TRUE +# Mandatory : No +#----------------------------------------------------------------------------- +useLocalUndoForPDBs=true + +#----------------------------------------------------------------------------- +# Name : pdbAdminPassword +# Datatype : String +# Description : PDB Administrator user password +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- + +pdbAdminPassword=###ORACLE_PWD### + +#----------------------------------------------------------------------------- +# Name : nodelist +# Datatype : String +# Description : Comma-separated list of cluster nodes +# Valid values : Cluster node names +# Default value : None +# Mandatory : No (Yes for RAC database-centric database ) +#----------------------------------------------------------------------------- +nodelist=###PUBLIC_HOSTNAME### + +#----------------------------------------------------------------------------- +# Name : templateName +# Datatype : String +# Description : Name of the template +# Valid values : Template file name +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +templateName=/u01/app/oracle/product/18.3.0/dbhome_1/assistants/dbca/templates/General_Purpose.dbc + +#----------------------------------------------------------------------------- +# Name : sysPassword +# Datatype : String +# Description : Password for SYS user +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +sysPassword=###ORACLE_PWD### + +#----------------------------------------------------------------------------- +# 
Name : systemPassword +# Datatype : String +# Description : Password for SYSTEM user +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +systemPassword=###ORACLE_PWD### + +#----------------------------------------------------------------------------- +# Name : serviceUserPassword +# Datatype : String +# Description : Password for Windows Service user +# Default value : None +# Mandatory : If Oracle home is installed with windows service user +#----------------------------------------------------------------------------- +serviceUserPassword= + +#----------------------------------------------------------------------------- +# Name : emConfiguration +# Datatype : String +# Description : Enterprise Manager Configuration Type +# Valid values : CENTRAL|DBEXPRESS|BOTH|NONE +# Default value : NONE +# Mandatory : No +#----------------------------------------------------------------------------- +emConfiguration=DBEXPRESS + +#----------------------------------------------------------------------------- +# Name : emExpressPort +# Datatype : Number +# Description : Enterprise Manager Configuration Type +# Valid values : Check Oracle12c Administrator's Guide +# Default value : NONE +# Mandatory : No, will be picked up from DBEXPRESS_HTTPS_PORT env variable +# or auto generates a free port between 5500 and 5599 +#----------------------------------------------------------------------------- +emExpressPort=5500 + +#----------------------------------------------------------------------------- +# Name : runCVUChecks +# Datatype : Boolean +# Description : Specify whether to run Cluster Verification Utility checks +# periodically in Cluster environment +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +runCVUChecks=true + +#----------------------------------------------------------------------------- +# Name : dbsnmpPassword +# Datatype : String +# Description : Password for DBSNMP user +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes, if emConfiguration is specified or +# the value of runCVUChecks is TRUE +#----------------------------------------------------------------------------- +dbsnmpPassword=###ORACLE_PWD### + +#----------------------------------------------------------------------------- +# Name : omsHost +# Datatype : String +# Description : EM management server host name +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +omsHost= + +#----------------------------------------------------------------------------- +# Name : omsPort +# Datatype : Number +# Description : EM management server port number +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +omsPort=0 + +#----------------------------------------------------------------------------- +# Name : emUser +# Datatype : String +# Description : EM Admin username to add or modify targets +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +emUser= + +#----------------------------------------------------------------------------- +# 
Name : emPassword +# Datatype : String +# Description : EM Admin user password +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +emPassword= + +#----------------------------------------------------------------------------- +# Name : dvConfiguration +# Datatype : Boolean +# Description : Specify "True" to configure and enable Oracle Database vault +# Valid values : True/False +# Default value : False +# Mandatory : No +#----------------------------------------------------------------------------- +dvConfiguration=false + +#----------------------------------------------------------------------------- +# Name : dvUserName +# Datatype : String +# Description : DataVault Owner +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes, if DataVault option is chosen +#----------------------------------------------------------------------------- +dvUserName= + +#----------------------------------------------------------------------------- +# Name : dvUserPassword +# Datatype : String +# Description : Password for DataVault Owner +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes, if DataVault option is chosen +#----------------------------------------------------------------------------- +dvUserPassword= + +#----------------------------------------------------------------------------- +# Name : dvAccountManagerName +# Datatype : String +# Description : DataVault Account Manager +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +dvAccountManagerName= + +#----------------------------------------------------------------------------- +# Name : dvAccountManagerPassword +# Datatype : String +# Description : Password for DataVault Account Manager +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +dvAccountManagerPassword= + +#----------------------------------------------------------------------------- +# Name : olsConfiguration +# Datatype : Boolean +# Description : Specify "True" to configure and enable Oracle Label Security +# Valid values : True/False +# Default value : False +# Mandatory : No +#----------------------------------------------------------------------------- +olsConfiguration=false + +#----------------------------------------------------------------------------- +# Name : datafileJarLocation +# Datatype : String +# Description : Location of the data file jar +# Valid values : Directory containing compressed datafile jar +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +datafileJarLocation={ORACLE_HOME}/assistants/dbca/templates/ + +#----------------------------------------------------------------------------- +# Name : datafileDestination +# Datatype : String +# Description : Location of the data file's +# Valid values : Directory for all the database files +# Default value : $ORACLE_BASE/oradata +# Mandatory : No +#----------------------------------------------------------------------------- +datafileDestination=+DATA/{DB_UNIQUE_NAME}/ + +#----------------------------------------------------------------------------- +# Name : recoveryAreaDestination +# 
Datatype : String +# Description : Location of the data file's +# Valid values : Recovery Area location +# Default value : $ORACLE_BASE/flash_recovery_area +# Mandatory : No +#----------------------------------------------------------------------------- +recoveryAreaDestination= + +#----------------------------------------------------------------------------- +# Name : storageType +# Datatype : String +# Description : Specifies the storage on which the database is to be created +# Valid values : FS (CFS for RAC), ASM +# Default value : FS +# Mandatory : No +#----------------------------------------------------------------------------- +storageType=ASM + +#----------------------------------------------------------------------------- +# Name : diskGroupName +# Datatype : String +# Description : Specifies the disk group name for the storage +# Default value : DATA +# Mandatory : No +#----------------------------------------------------------------------------- +diskGroupName=+DATA/{DB_UNIQUE_NAME}/ + +#----------------------------------------------------------------------------- +# Name : asmsnmpPassword +# Datatype : String +# Description : Password for ASM Monitoring +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +asmsnmpPassword= + +#----------------------------------------------------------------------------- +# Name : recoveryGroupName +# Datatype : String +# Description : Specifies the disk group name for the recovery area +# Default value : RECOVERY +# Mandatory : No +#----------------------------------------------------------------------------- +recoveryGroupName= + +#----------------------------------------------------------------------------- +# Name : characterSet +# Datatype : String +# Description : Character set of the database +# Valid values : Check Oracle12c National Language Support Guide +# Default value : "US7ASCII" +# Mandatory : NO +#----------------------------------------------------------------------------- +characterSet=AL32UTF8 + +#----------------------------------------------------------------------------- +# Name : nationalCharacterSet +# Datatype : String +# Description : National Character set of the database +# Valid values : "UTF8" or "AL16UTF16". For details, check Oracle12c National Language Support Guide +# Default value : "AL16UTF16" +# Mandatory : No +#----------------------------------------------------------------------------- +nationalCharacterSet=AL16UTF16 + +#----------------------------------------------------------------------------- +# Name : registerWithDirService +# Datatype : Boolean +# Description : Specifies whether to register with Directory Service. +# Valid values : TRUE \ FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +registerWithDirService=false + + +#----------------------------------------------------------------------------- +# Name : dirServiceUserName +# Datatype : String +# Description : Specifies the name of the directory service user +# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +dirServiceUserName= + +#----------------------------------------------------------------------------- +# Name : dirServicePassword +# Datatype : String +# Description : The password of the directory service user. +# You can also specify the password at the command prompt instead of here. 
+# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +dirServicePassword= + +#----------------------------------------------------------------------------- +# Name : walletPassword +# Datatype : String +# Description : The password for wallet to created or modified. +# You can also specify the password at the command prompt instead of here. +# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +walletPassword= + +#----------------------------------------------------------------------------- +# Name : listeners +# Datatype : String +# Description : Specifies list of listeners to register the database with. +# By default the database is configured for all the listeners specified in the +# $ORACLE_HOME/network/admin/listener.ora +# Valid values : The list should be comma separated like "listener1,listener2". +# Mandatory : NO +#----------------------------------------------------------------------------- +listeners=LISTENER + +#----------------------------------------------------------------------------- +# Name : variablesFile +# Datatype : String +# Description : Location of the file containing variable value pair +# Valid values : A valid file-system file. The variable value pair format in this file +# is =. Each pair should be in a new line. +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +variablesFile= + +#----------------------------------------------------------------------------- +# Name : variables +# Datatype : String +# Description : comma separated list of name=value pairs. Overrides variables defined in variablefile and templates +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +variables=DB_UNIQUE_NAME=###ORACLE_SID###,ORACLE_BASE=###DB_BASE###,PDB_NAME=###ORACLE_PDB###,DB_NAME=###ORACLE_SID###,ORACLE_HOME=###DB_HOME###,SID=###ORACLE_SID### + +#----------------------------------------------------------------------------- +# Name : initParams +# Datatype : String +# Description : comma separated list of name=value pairs. 
Overrides initialization parameters defined in templates +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +#initParams=family:dw_helper.instance_mode=read-only,processes=640,nls_language=AMERICAN,pga_aggregate_target=2008MB,sga_target=6022MB,dispatchers=(PROTOCOL=TCP) (SERVICE=orclXDB),db_block_size=8192BYTES,orcl1.undo_tablespace=UNDOTBS1,diagnostic_dest={ORACLE_BASE},cluster_database=true,orcl1.thread=1,audit_file_dest={ORACLE_BASE}/admin/{DB_UNIQUE_NAME}/adump,db_create_file_dest=+DATA/{DB_UNIQUE_NAME}/,nls_territory=AMERICA,local_listener=-oraagent-dummy-,compatible=12.2.0,db_name=orcl,audit_trail=db,orcl1.instance_number=1,remote_login_passwordfile=exclusive,open_cursors=300 +initParams=audit_trail=none,audit_sys_operations=false,remote_login_passwordfile=exclusive + +#----------------------------------------------------------------------------- +# Name : sampleSchema +# Datatype : Boolean +# Description : Specifies whether or not to add the Sample Schemas to your database +# Valid values : TRUE \ FALSE +# Default value : FASLE +# Mandatory : No +#----------------------------------------------------------------------------- +sampleSchema=false + +#----------------------------------------------------------------------------- +# Name : memoryPercentage +# Datatype : String +# Description : percentage of physical memory for Oracle +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +memoryPercentage=40 + +#----------------------------------------------------------------------------- +# Name : databaseType +# Datatype : String +# Description : used for memory distribution when memoryPercentage specified +# Valid values : MULTIPURPOSE|DATA_WAREHOUSING|OLTP +# Default value : MULTIPURPOSE +# Mandatory : NO +#----------------------------------------------------------------------------- +databaseType=MULTIPURPOSE + +#----------------------------------------------------------------------------- +# Name : automaticMemoryManagement +# Datatype : Boolean +# Description : flag to indicate Automatic Memory Management is used +# Valid values : TRUE/FALSE +# Default value : TRUE +# Mandatory : NO +#----------------------------------------------------------------------------- +automaticMemoryManagement=false + +#----------------------------------------------------------------------------- +# Name : totalMemory +# Datatype : String +# Description : total memory in MB to allocate to Oracle +# Valid values : +# Default value : +# Mandatory : NO +#----------------------------------------------------------------------------- +totalMemory=5000 diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/dbca1.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/dbca1.rsp new file mode 100644 index 0000000000..c3d07dedf0 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/dbca1.rsp @@ -0,0 +1,605 @@ +############################################################################## +## ## +## DBCA response file ## +## ------------------ ## +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. 
## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +############################################################################## +#------------------------------------------------------------------------------- +# Do not change the following system generated value. +#------------------------------------------------------------------------------- +responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v18.0.0 + +#----------------------------------------------------------------------------- +# Name : gdbName +# Datatype : String +# Description : Global database name of the database +# Valid values : . - when database domain isn't NULL +# - when database domain is NULL +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +gdbName=ORCLCDB + +#----------------------------------------------------------------------------- +# Name : sid +# Datatype : String +# Description : System identifier (SID) of the database +# Valid values : Check Oracle12c Administrator's Guide +# Default value : specified in GDBNAME +# Mandatory : No +#----------------------------------------------------------------------------- +sid=ORCLCDB + +#----------------------------------------------------------------------------- +# Name : databaseConfigType +# Datatype : String +# Description : database conf type as Single Instance, Real Application Cluster or Real Application Cluster One Nodes database +# Valid values : SI\RAC\RACONENODE +# Default value : SI +# Mandatory : No +#----------------------------------------------------------------------------- +databaseConfigType=RAC + +#----------------------------------------------------------------------------- +# Name : RACOneNodeServiceName +# Datatype : String +# Description : Service is required by application to connect to RAC One +# Node Database +# Valid values : Service Name +# Default value : None +# Mandatory : No [required in case DATABASECONFTYPE is set to RACONENODE ] +#----------------------------------------------------------------------------- +RACOneNodeServiceName= + +#----------------------------------------------------------------------------- +# Name : policyManaged +# Datatype : Boolean +# Description : Set to true if Database is policy managed and +# set to false if Database is admin managed +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +policyManaged=false + + +#----------------------------------------------------------------------------- +# Name : createServerPool +# Datatype : Boolean +# Description : Set to true if new server pool need to be created for database +# if this option is specified then the newly created database +# will use this newly created serverpool. +# Multiple serverpoolname can not be specified for database +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +createServerPool=false + +#----------------------------------------------------------------------------- +# Name : serverPoolName +# Datatype : String +# Description : Only one serverpool name need to be specified +# if Create Server Pool option is specified. 
+# Comma-separated list of Serverpool names if db need to use +# multiple Server pool +# Valid values : ServerPool name + +# Default value : None +# Mandatory : No [required in case of RAC service centric database] +#----------------------------------------------------------------------------- +serverPoolName= + +#----------------------------------------------------------------------------- +# Name : cardinality +# Datatype : Number +# Description : Specify Cardinality for create server pool operation + +# Valid values : any positive Integer value +# Default value : Number of qualified nodes on cluster +# Mandatory : No [Required when a new serverpool need to be created] +#----------------------------------------------------------------------------- +cardinality= + +#----------------------------------------------------------------------------- +# Name : force +# Datatype : Boolean +# Description : Set to true if new server pool need to be created by force +# if this option is specified then the newly created serverpool +# will be assigned server even if no free servers are available. +# This may affect already running database. +# This flag can be specified for Admin managed as well as policy managed db. +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +force=false + +#----------------------------------------------------------------------------- +# Name : pqPoolName +# Datatype : String +# Description : Only one serverpool name needs to be specified +# if create server pool option is specified. +# Comma-separated list of serverpool names if use +# server pool. This is required to +# create Parallel Query (PQ) database. Applicable to Big Cluster +# Valid values : Parallel Query (PQ) pool name +# Default value : None +# Mandatory : No [required in case of RAC service centric database] +#----------------------------------------------------------------------------- +pqPoolName= + +#----------------------------------------------------------------------------- +# Name : pqCardinality +# Datatype : Number +# Description : Specify Cardinality for create server pool operation. 
+# Applicable to Big Cluster +# Valid values : any positive Integer value +# Default value : Number of qualified nodes on cluster +# Mandatory : No [Required when a new serverpool need to be created] +#----------------------------------------------------------------------------- +pqCardinality= + +#----------------------------------------------------------------------------- +# Name : createAsContainerDatabase +# Datatype : boolean +# Description : flag to create database as container database +# Valid values : Check Oracle12c Administrator's Guide +# Default value : false +# Mandatory : No +#----------------------------------------------------------------------------- +createAsContainerDatabase=true + +#----------------------------------------------------------------------------- +# Name : numberOfPDBs +# Datatype : Number +# Description : Specify the number of pdb to be created +# Valid values : 0 to 252 +# Default value : 0 +# Mandatory : No +#----------------------------------------------------------------------------- +numberOfPDBs=1 + +#----------------------------------------------------------------------------- +# Name : pdbName +# Datatype : String +# Description : Specify the pdbname/pdbanme prefix if one or more pdb need to be created +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +pdbName=ORCLPDB + +#----------------------------------------------------------------------------- +# Name : useLocalUndoForPDBs +# Datatype : boolean +# Description : Flag to create local undo tablespace for all PDB's. +# Valid values : TRUE\FALSE +# Default value : TRUE +# Mandatory : No +#----------------------------------------------------------------------------- +useLocalUndoForPDBs=true + +#----------------------------------------------------------------------------- +# Name : pdbAdminPassword +# Datatype : String +# Description : PDB Administrator user password +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- + +pdbAdminPassword=Oracle_12c + +#----------------------------------------------------------------------------- +# Name : nodelist +# Datatype : String +# Description : Comma-separated list of cluster nodes +# Valid values : Cluster node names +# Default value : None +# Mandatory : No (Yes for RAC database-centric database ) +#----------------------------------------------------------------------------- +nodelist=racnode1 + +#----------------------------------------------------------------------------- +# Name : templateName +# Datatype : String +# Description : Name of the template +# Valid values : Template file name +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +templateName=/u01/app/oracle/product/18.3.0/dbhome_1/assistants/dbca/templates/General_Purpose.dbc + +#----------------------------------------------------------------------------- +# Name : sysPassword +# Datatype : String +# Description : Password for SYS user +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +sysPassword=Oracle_12c + +#----------------------------------------------------------------------------- +# Name : systemPassword +# Datatype : String +# 
Description : Password for SYSTEM user +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +systemPassword=Oracle_12c + +#----------------------------------------------------------------------------- +# Name : serviceUserPassword +# Datatype : String +# Description : Password for Windows Service user +# Default value : None +# Mandatory : If Oracle home is installed with windows service user +#----------------------------------------------------------------------------- +serviceUserPassword= + +#----------------------------------------------------------------------------- +# Name : emConfiguration +# Datatype : String +# Description : Enterprise Manager Configuration Type +# Valid values : CENTRAL|DBEXPRESS|BOTH|NONE +# Default value : NONE +# Mandatory : No +#----------------------------------------------------------------------------- +emConfiguration=DBEXPRESS + +#----------------------------------------------------------------------------- +# Name : emExpressPort +# Datatype : Number +# Description : Enterprise Manager Configuration Type +# Valid values : Check Oracle12c Administrator's Guide +# Default value : NONE +# Mandatory : No, will be picked up from DBEXPRESS_HTTPS_PORT env variable +# or auto generates a free port between 5500 and 5599 +#----------------------------------------------------------------------------- +emExpressPort=5500 + +#----------------------------------------------------------------------------- +# Name : runCVUChecks +# Datatype : Boolean +# Description : Specify whether to run Cluster Verification Utility checks +# periodically in Cluster environment +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +runCVUChecks=true + +#----------------------------------------------------------------------------- +# Name : dbsnmpPassword +# Datatype : String +# Description : Password for DBSNMP user +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes, if emConfiguration is specified or +# the value of runCVUChecks is TRUE +#----------------------------------------------------------------------------- +dbsnmpPassword=Oracle_12c + +#----------------------------------------------------------------------------- +# Name : omsHost +# Datatype : String +# Description : EM management server host name +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +omsHost= + +#----------------------------------------------------------------------------- +# Name : omsPort +# Datatype : Number +# Description : EM management server port number +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +omsPort=0 + +#----------------------------------------------------------------------------- +# Name : emUser +# Datatype : String +# Description : EM Admin username to add or modify targets +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +emUser= + +#----------------------------------------------------------------------------- +# Name : emPassword +# Datatype : String +# Description : 
EM Admin user password +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +emPassword= + +#----------------------------------------------------------------------------- +# Name : dvConfiguration +# Datatype : Boolean +# Description : Specify "True" to configure and enable Oracle Database vault +# Valid values : True/False +# Default value : False +# Mandatory : No +#----------------------------------------------------------------------------- +dvConfiguration=false + +#----------------------------------------------------------------------------- +# Name : dvUserName +# Datatype : String +# Description : DataVault Owner +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes, if DataVault option is chosen +#----------------------------------------------------------------------------- +dvUserName= + +#----------------------------------------------------------------------------- +# Name : dvUserPassword +# Datatype : String +# Description : Password for DataVault Owner +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes, if DataVault option is chosen +#----------------------------------------------------------------------------- +dvUserPassword= + +#----------------------------------------------------------------------------- +# Name : dvAccountManagerName +# Datatype : String +# Description : DataVault Account Manager +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +dvAccountManagerName= + +#----------------------------------------------------------------------------- +# Name : dvAccountManagerPassword +# Datatype : String +# Description : Password for DataVault Account Manager +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +dvAccountManagerPassword= + +#----------------------------------------------------------------------------- +# Name : olsConfiguration +# Datatype : Boolean +# Description : Specify "True" to configure and enable Oracle Label Security +# Valid values : True/False +# Default value : False +# Mandatory : No +#----------------------------------------------------------------------------- +olsConfiguration=false + +#----------------------------------------------------------------------------- +# Name : datafileJarLocation +# Datatype : String +# Description : Location of the data file jar +# Valid values : Directory containing compressed datafile jar +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +datafileJarLocation={ORACLE_HOME}/assistants/dbca/templates/ + +#----------------------------------------------------------------------------- +# Name : datafileDestination +# Datatype : String +# Description : Location of the data file's +# Valid values : Directory for all the database files +# Default value : $ORACLE_BASE/oradata +# Mandatory : No +#----------------------------------------------------------------------------- +datafileDestination=+DATA/{DB_UNIQUE_NAME}/ + +#----------------------------------------------------------------------------- +# Name : recoveryAreaDestination +# Datatype : String +# Description : Location of the data file's 
+# Valid values : Recovery Area location +# Default value : $ORACLE_BASE/flash_recovery_area +# Mandatory : No +#----------------------------------------------------------------------------- +recoveryAreaDestination= + +#----------------------------------------------------------------------------- +# Name : storageType +# Datatype : String +# Description : Specifies the storage on which the database is to be created +# Valid values : FS (CFS for RAC), ASM +# Default value : FS +# Mandatory : No +#----------------------------------------------------------------------------- +storageType=ASM + +#----------------------------------------------------------------------------- +# Name : diskGroupName +# Datatype : String +# Description : Specifies the disk group name for the storage +# Default value : DATA +# Mandatory : No +#----------------------------------------------------------------------------- +diskGroupName=+DATA/{DB_UNIQUE_NAME}/ + +#----------------------------------------------------------------------------- +# Name : asmsnmpPassword +# Datatype : String +# Description : Password for ASM Monitoring +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +asmsnmpPassword= + +#----------------------------------------------------------------------------- +# Name : recoveryGroupName +# Datatype : String +# Description : Specifies the disk group name for the recovery area +# Default value : RECOVERY +# Mandatory : No +#----------------------------------------------------------------------------- +recoveryGroupName= + +#----------------------------------------------------------------------------- +# Name : characterSet +# Datatype : String +# Description : Character set of the database +# Valid values : Check Oracle12c National Language Support Guide +# Default value : "US7ASCII" +# Mandatory : NO +#----------------------------------------------------------------------------- +characterSet=AL32UTF8 + +#----------------------------------------------------------------------------- +# Name : nationalCharacterSet +# Datatype : String +# Description : National Character set of the database +# Valid values : "UTF8" or "AL16UTF16". For details, check Oracle12c National Language Support Guide +# Default value : "AL16UTF16" +# Mandatory : No +#----------------------------------------------------------------------------- +nationalCharacterSet=AL16UTF16 + +#----------------------------------------------------------------------------- +# Name : registerWithDirService +# Datatype : Boolean +# Description : Specifies whether to register with Directory Service. +# Valid values : TRUE \ FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +registerWithDirService=false + + +#----------------------------------------------------------------------------- +# Name : dirServiceUserName +# Datatype : String +# Description : Specifies the name of the directory service user +# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +dirServiceUserName= + +#----------------------------------------------------------------------------- +# Name : dirServicePassword +# Datatype : String +# Description : The password of the directory service user. +# You can also specify the password at the command prompt instead of here. 
+# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +dirServicePassword= + +#----------------------------------------------------------------------------- +# Name : walletPassword +# Datatype : String +# Description : The password for wallet to created or modified. +# You can also specify the password at the command prompt instead of here. +# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +walletPassword= + +#----------------------------------------------------------------------------- +# Name : listeners +# Datatype : String +# Description : Specifies list of listeners to register the database with. +# By default the database is configured for all the listeners specified in the +# $ORACLE_HOME/network/admin/listener.ora +# Valid values : The list should be comma separated like "listener1,listener2". +# Mandatory : NO +#----------------------------------------------------------------------------- +listeners=LISTENER + +#----------------------------------------------------------------------------- +# Name : variablesFile +# Datatype : String +# Description : Location of the file containing variable value pair +# Valid values : A valid file-system file. The variable value pair format in this file +# is =. Each pair should be in a new line. +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +variablesFile= + +#----------------------------------------------------------------------------- +# Name : variables +# Datatype : String +# Description : comma separated list of name=value pairs. Overrides variables defined in variablefile and templates +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +variables=DB_UNIQUE_NAME=ORCLCDB,ORACLE_BASE=/u01/app/oracle,PDB_NAME=ORCLPDB,DB_NAME=ORCLCDB,ORACLE_HOME=/u01/app/oracle/product/18.3.0/dbhome_1,SID=ORCLCDB + +#----------------------------------------------------------------------------- +# Name : initParams +# Datatype : String +# Description : comma separated list of name=value pairs. 
Overrides initialization parameters defined in templates +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +#initParams=family:dw_helper.instance_mode=read-only,processes=640,nls_language=AMERICAN,pga_aggregate_target=2008MB,sga_target=6022MB,dispatchers=(PROTOCOL=TCP) (SERVICE=orclXDB),db_block_size=8192BYTES,orcl1.undo_tablespace=UNDOTBS1,diagnostic_dest={ORACLE_BASE},cluster_database=true,orcl1.thread=1,audit_file_dest={ORACLE_BASE}/admin/{DB_UNIQUE_NAME}/adump,db_create_file_dest=+DATA/{DB_UNIQUE_NAME}/,nls_territory=AMERICA,local_listener=-oraagent-dummy-,compatible=12.2.0,db_name=orcl,audit_trail=db,orcl1.instance_number=1,remote_login_passwordfile=exclusive,open_cursors=300 +initParams=audit_trail=none,audit_sys_operations=false,remote_login_passwordfile=exclusive + +#----------------------------------------------------------------------------- +# Name : sampleSchema +# Datatype : Boolean +# Description : Specifies whether or not to add the Sample Schemas to your database +# Valid values : TRUE \ FALSE +# Default value : FASLE +# Mandatory : No +#----------------------------------------------------------------------------- +sampleSchema=false + +#----------------------------------------------------------------------------- +# Name : memoryPercentage +# Datatype : String +# Description : percentage of physical memory for Oracle +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +memoryPercentage=40 + +#----------------------------------------------------------------------------- +# Name : databaseType +# Datatype : String +# Description : used for memory distribution when memoryPercentage specified +# Valid values : MULTIPURPOSE|DATA_WAREHOUSING|OLTP +# Default value : MULTIPURPOSE +# Mandatory : NO +#----------------------------------------------------------------------------- +databaseType=MULTIPURPOSE + +#----------------------------------------------------------------------------- +# Name : automaticMemoryManagement +# Datatype : Boolean +# Description : flag to indicate Automatic Memory Management is used +# Valid values : TRUE/FALSE +# Default value : TRUE +# Mandatory : NO +#----------------------------------------------------------------------------- +automaticMemoryManagement=false + +#----------------------------------------------------------------------------- +# Name : totalMemory +# Datatype : String +# Description : total memory in MB to allocate to Oracle +# Valid values : +# Default value : +# Mandatory : NO +#----------------------------------------------------------------------------- +totalMemory=5000 diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/dbca_19c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/dbca_19c.rsp new file mode 100644 index 0000000000..157111d993 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/dbca_19c.rsp @@ -0,0 +1,58 @@ +responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v19.0.0 +gdbName=###ORACLE_SID### +sid=###ORACLE_SID### +databaseConfigType=###DATABASE_CONFIG_TYPE### +RACOneNodeServiceName= +policyManaged=false +createServerPool=false +serverPoolName= +cardinality= +force=false +pqPoolName= +pqCardinality= +createAsContainerDatabase=###CONTAINER_DB_FLAG### +numberOfPDBs=###PDB_COUNT### +pdbName=###ORACLE_PDB### +useLocalUndoForPDBs=true +pdbAdminPassword=###ORACLE_PWD### 
+nodelist=###DB_NODES### +templateName={ORACLE_HOME}/assistants/dbca/templates/General_Purpose.dbc +sysPassword=###ORACLE_PWD### +systemPassword=###ORACLE_PWD### +oracleHomeUserPassword= +emConfiguration=DBEXPRESS +emExpressPort=5500 +runCVUChecks=true +dbsnmpPassword=###ORACLE_PWD### +omsHost= +omsPort= +emUser= +emPassword= +dvConfiguration=false +dvUserName= +dvUserPassword= +dvAccountManagerName= +dvAccountManagerPassword= +olsConfiguration=false +datafileJarLocation={ORACLE_HOME}/assistants/dbca/templates/ +datafileDestination=+DATA/{DB_UNIQUE_NAME}/ +recoveryAreaDestination= +storageType=ASM +diskGroupName=+DATA/{DB_UNIQUE_NAME}/ +asmsnmpPassword= +recoveryGroupName= +characterSet=AL32UTF8 +nationalCharacterSet=AL16UTF16 +registerWithDirService=false +dirServiceUserName= +dirServicePassword= +walletPassword= +listeners=LISTENER +variablesFile= +variables=DB_UNIQUE_NAME=###ORACLE_SID###,ORACLE_BASE=###DB_BASE###,PDB_NAME=###ORACLE_PDB###,DB_NAME=###ORACLE_SID###,ORACLE_HOME=###DB_HOME###,SID=###ORACLE_SID### +initParams=audit_trail=none,audit_sys_operations=false,remote_login_passwordfile=exclusive +sampleSchema=false +memoryPercentage=40 +databaseType=MULTIPURPOSE +automaticMemoryManagement=false +totalMemory=###TOTAL_MEMORY### diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/dbca_19cv1.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/dbca_19cv1.rsp new file mode 100644 index 0000000000..c584ed74fe --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/dbca_19cv1.rsp @@ -0,0 +1,604 @@ +############################################################################## +## ## +## DBCA response file ## +## ------------------ ## +## Copyright(c) Oracle Corporation 1998,2019. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +############################################################################## +#------------------------------------------------------------------------------- +# Do not change the following system generated value. +#------------------------------------------------------------------------------- +responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v19.0.0 + +#----------------------------------------------------------------------------- +# Name : gdbName +# Datatype : String +# Description : Global database name of the database +# Valid values : . 
- when database domain isn't NULL +# - when database domain is NULL +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +gdbName= + +#----------------------------------------------------------------------------- +# Name : sid +# Datatype : String +# Description : System identifier (SID) of the database +# Valid values : Check Oracle19c Administrator's Guide +# Default value : specified in GDBNAME +# Mandatory : No +#----------------------------------------------------------------------------- +sid= + +#----------------------------------------------------------------------------- +# Name : databaseConfigType +# Datatype : String +# Description : database conf type as Single Instance, Real Application Cluster or Real Application Cluster One Nodes database +# Valid values : SI\RAC\RACONENODE +# Default value : SI +# Mandatory : No +#----------------------------------------------------------------------------- +databaseConfigType= + +#----------------------------------------------------------------------------- +# Name : RACOneNodeServiceName +# Datatype : String +# Description : Service is required by application to connect to RAC One +# Node Database +# Valid values : Service Name +# Default value : None +# Mandatory : No [required in case DATABASECONFTYPE is set to RACONENODE ] +#----------------------------------------------------------------------------- +RACOneNodeServiceName= + +#----------------------------------------------------------------------------- +# Name : policyManaged +# Datatype : Boolean +# Description : Set to true if Database is policy managed and +# set to false if Database is admin managed +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +policyManaged= + + +#----------------------------------------------------------------------------- +# Name : createServerPool +# Datatype : Boolean +# Description : Set to true if new server pool need to be created for database +# if this option is specified then the newly created database +# will use this newly created serverpool. +# Multiple serverpoolname can not be specified for database +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +createServerPool= + +#----------------------------------------------------------------------------- +# Name : serverPoolName +# Datatype : String +# Description : Only one serverpool name need to be specified +# if Create Server Pool option is specified. 
+# Comma-separated list of Serverpool names if db need to use +# multiple Server pool +# Valid values : ServerPool name + +# Default value : None +# Mandatory : No [required in case of RAC service centric database] +#----------------------------------------------------------------------------- +serverPoolName= + +#----------------------------------------------------------------------------- +# Name : cardinality +# Datatype : Number +# Description : Specify Cardinality for create server pool operation + +# Valid values : any positive Integer value +# Default value : Number of qualified nodes on cluster +# Mandatory : No [Required when a new serverpool need to be created] +#----------------------------------------------------------------------------- +cardinality= + +#----------------------------------------------------------------------------- +# Name : force +# Datatype : Boolean +# Description : Set to true if new server pool need to be created by force +# if this option is specified then the newly created serverpool +# will be assigned server even if no free servers are available. +# This may affect already running database. +# This flag can be specified for Admin managed as well as policy managed db. +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +force= + +#----------------------------------------------------------------------------- +# Name : pqPoolName +# Datatype : String +# Description : Only one serverpool name needs to be specified +# if create server pool option is specified. +# Comma-separated list of serverpool names if use +# server pool. This is required to +# create Parallel Query (PQ) database. Applicable to Big Cluster +# Valid values : Parallel Query (PQ) pool name +# Default value : None +# Mandatory : No [required in case of RAC service centric database] +#----------------------------------------------------------------------------- +pqPoolName= + +#----------------------------------------------------------------------------- +# Name : pqCardinality +# Datatype : Number +# Description : Specify Cardinality for create server pool operation. 
+# Applicable to Big Cluster +# Valid values : any positive Integer value +# Default value : Number of qualified nodes on cluster +# Mandatory : No [Required when a new serverpool need to be created] +#----------------------------------------------------------------------------- +pqCardinality= + +#----------------------------------------------------------------------------- +# Name : createAsContainerDatabase +# Datatype : boolean +# Description : flag to create database as container database +# Valid values : Check Oracle19c Administrator's Guide +# Default value : false +# Mandatory : No +#----------------------------------------------------------------------------- +createAsContainerDatabase= + +#----------------------------------------------------------------------------- +# Name : numberOfPDBs +# Datatype : Number +# Description : Specify the number of pdb to be created +# Valid values : 0 to 4094 +# Default value : 0 +# Mandatory : No +#----------------------------------------------------------------------------- +numberOfPDBs= + +#----------------------------------------------------------------------------- +# Name : pdbName +# Datatype : String +# Description : Specify the pdbname/pdbanme prefix if one or more pdb need to be created +# Valid values : Check Oracle19c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +pdbName= + +#----------------------------------------------------------------------------- +# Name : useLocalUndoForPDBs +# Datatype : boolean +# Description : Flag to create local undo tablespace for all PDB's. +# Valid values : TRUE\FALSE +# Default value : TRUE +# Mandatory : No +#----------------------------------------------------------------------------- +useLocalUndoForPDBs= + +#----------------------------------------------------------------------------- +# Name : pdbAdminPassword +# Datatype : String +# Description : PDB Administrator user password +# Valid values : Check Oracle19c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- + +pdbAdminPassword= + +#----------------------------------------------------------------------------- +# Name : nodelist +# Datatype : String +# Description : Comma-separated list of cluster nodes +# Valid values : Cluster node names +# Default value : None +# Mandatory : No (Yes for RAC database-centric database ) +#----------------------------------------------------------------------------- +nodelist= + +#----------------------------------------------------------------------------- +# Name : templateName +# Datatype : String +# Description : Name of the template +# Valid values : Template file name +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +templateName= + +#----------------------------------------------------------------------------- +# Name : sysPassword +# Datatype : String +# Description : Password for SYS user +# Valid values : Check Oracle19c Administrator's Guide +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +sysPassword= + +#----------------------------------------------------------------------------- +# Name : systemPassword +# Datatype : String +# Description : Password for SYSTEM user +# Valid values : Check Oracle19c Administrator's Guide +# Default value : None +# Mandatory : 
Yes +#----------------------------------------------------------------------------- +systemPassword= + +#----------------------------------------------------------------------------- +# Name : oracleHomeUserPassword +# Datatype : String +# Description : Password for Windows Service user +# Default value : None +# Mandatory : If Oracle home is installed with windows service user +#----------------------------------------------------------------------------- +oracleHomeUserPassword= + +#----------------------------------------------------------------------------- +# Name : emConfiguration +# Datatype : String +# Description : Enterprise Manager Configuration Type +# Valid values : CENTRAL|DBEXPRESS|BOTH|NONE +# Default value : NONE +# Mandatory : No +#----------------------------------------------------------------------------- +emConfiguration= + +#----------------------------------------------------------------------------- +# Name : emExpressPort +# Datatype : Number +# Description : Enterprise Manager Configuration Type +# Valid values : Check Oracle19c Administrator's Guide +# Default value : NONE +# Mandatory : No, will be picked up from DBEXPRESS_HTTPS_PORT env variable +# or auto generates a free port between 5500 and 5599 +#----------------------------------------------------------------------------- +emExpressPort=5500 + +#----------------------------------------------------------------------------- +# Name : runCVUChecks +# Datatype : Boolean +# Description : Specify whether to run Cluster Verification Utility checks +# periodically in Cluster environment +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +runCVUChecks= + +#----------------------------------------------------------------------------- +# Name : dbsnmpPassword +# Datatype : String +# Description : Password for DBSNMP user +# Valid values : Check Oracle19c Administrator's Guide +# Default value : None +# Mandatory : Yes, if emConfiguration is specified or +# the value of runCVUChecks is TRUE +#----------------------------------------------------------------------------- +dbsnmpPassword= + +#----------------------------------------------------------------------------- +# Name : omsHost +# Datatype : String +# Description : EM management server host name +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +omsHost= + +#----------------------------------------------------------------------------- +# Name : omsPort +# Datatype : Number +# Description : EM management server port number +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +omsPort= + +#----------------------------------------------------------------------------- +# Name : emUser +# Datatype : String +# Description : EM Admin username to add or modify targets +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +emUser= + +#----------------------------------------------------------------------------- +# Name : emPassword +# Datatype : String +# Description : EM Admin user password +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration 
+#----------------------------------------------------------------------------- +emPassword= + +#----------------------------------------------------------------------------- +# Name : dvConfiguration +# Datatype : Boolean +# Description : Specify "True" to configure and enable Oracle Database vault +# Valid values : True/False +# Default value : False +# Mandatory : No +#----------------------------------------------------------------------------- +dvConfiguration= + +#----------------------------------------------------------------------------- +# Name : dvUserName +# Datatype : String +# Description : DataVault Owner +# Valid values : Check Oracle19c Administrator's Guide +# Default value : None +# Mandatory : Yes, if DataVault option is chosen +#----------------------------------------------------------------------------- +dvUserName= + +#----------------------------------------------------------------------------- +# Name : dvUserPassword +# Datatype : String +# Description : Password for DataVault Owner +# Valid values : Check Oracle19c Administrator's Guide +# Default value : None +# Mandatory : Yes, if DataVault option is chosen +#----------------------------------------------------------------------------- +dvUserPassword= + +#----------------------------------------------------------------------------- +# Name : dvAccountManagerName +# Datatype : String +# Description : DataVault Account Manager +# Valid values : Check Oracle19c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +dvAccountManagerName= + +#----------------------------------------------------------------------------- +# Name : dvAccountManagerPassword +# Datatype : String +# Description : Password for DataVault Account Manager +# Valid values : Check Oracle19c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +dvAccountManagerPassword= + +#----------------------------------------------------------------------------- +# Name : olsConfiguration +# Datatype : Boolean +# Description : Specify "True" to configure and enable Oracle Label Security +# Valid values : True/False +# Default value : False +# Mandatory : No +#----------------------------------------------------------------------------- +olsConfiguration= + +#----------------------------------------------------------------------------- +# Name : datafileJarLocation +# Datatype : String +# Description : Location of the data file jar +# Valid values : Directory containing compressed datafile jar +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +datafileJarLocation= + +#----------------------------------------------------------------------------- +# Name : datafileDestination +# Datatype : String +# Description : Location of the data file's +# Valid values : Directory for all the database files +# Default value : $ORACLE_BASE/oradata +# Mandatory : No +#----------------------------------------------------------------------------- +datafileDestination= + +#----------------------------------------------------------------------------- +# Name : recoveryAreaDestination +# Datatype : String +# Description : Location of the data file's +# Valid values : Recovery Area location +# Default value : $ORACLE_BASE/flash_recovery_area +# Mandatory : No 
+#----------------------------------------------------------------------------- +recoveryAreaDestination= + +#----------------------------------------------------------------------------- +# Name : storageType +# Datatype : String +# Description : Specifies the storage on which the database is to be created +# Valid values : FS (CFS for RAC), ASM +# Default value : FS +# Mandatory : No +#----------------------------------------------------------------------------- +storageType= + +#----------------------------------------------------------------------------- +# Name : diskGroupName +# Datatype : String +# Description : Specifies the disk group name for the storage +# Default value : DATA +# Mandatory : No +#----------------------------------------------------------------------------- +diskGroupName= + +#----------------------------------------------------------------------------- +# Name : asmsnmpPassword +# Datatype : String +# Description : Password for ASM Monitoring +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +asmsnmpPassword= + +#----------------------------------------------------------------------------- +# Name : recoveryGroupName +# Datatype : String +# Description : Specifies the disk group name for the recovery area +# Default value : RECOVERY +# Mandatory : No +#----------------------------------------------------------------------------- +recoveryGroupName= + +#----------------------------------------------------------------------------- +# Name : characterSet +# Datatype : String +# Description : Character set of the database +# Valid values : Check Oracle19c National Language Support Guide +# Default value : "US7ASCII" +# Mandatory : NO +#----------------------------------------------------------------------------- +characterSet= + +#----------------------------------------------------------------------------- +# Name : nationalCharacterSet +# Datatype : String +# Description : National Character set of the database +# Valid values : "UTF8" or "AL16UTF16". For details, check Oracle19c National Language Support Guide +# Default value : "AL16UTF16" +# Mandatory : No +#----------------------------------------------------------------------------- +nationalCharacterSet= + +#----------------------------------------------------------------------------- +# Name : registerWithDirService +# Datatype : Boolean +# Description : Specifies whether to register with Directory Service. +# Valid values : TRUE \ FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +registerWithDirService= + + +#----------------------------------------------------------------------------- +# Name : dirServiceUserName +# Datatype : String +# Description : Specifies the name of the directory service user +# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +dirServiceUserName= + +#----------------------------------------------------------------------------- +# Name : dirServicePassword +# Datatype : String +# Description : The password of the directory service user. +# You can also specify the password at the command prompt instead of here. 
+# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +dirServicePassword= + +#----------------------------------------------------------------------------- +# Name : walletPassword +# Datatype : String +# Description : The password for wallet to created or modified. +# You can also specify the password at the command prompt instead of here. +# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +walletPassword= + +#----------------------------------------------------------------------------- +# Name : listeners +# Datatype : String +# Description : Specifies list of listeners to register the database with. +# By default the database is configured for all the listeners specified in the +# $ORACLE_HOME/network/admin/listener.ora +# Valid values : The list should be comma separated like "listener1,listener2". +# Mandatory : NO +#----------------------------------------------------------------------------- +listeners= + +#----------------------------------------------------------------------------- +# Name : variablesFile +# Datatype : String +# Description : Location of the file containing variable value pair +# Valid values : A valid file-system file. The variable value pair format in this file +# is =. Each pair should be in a new line. +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +variablesFile= + +#----------------------------------------------------------------------------- +# Name : variables +# Datatype : String +# Description : comma separated list of name=value pairs. Overrides variables defined in variablefile and templates +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +variables= + +#----------------------------------------------------------------------------- +# Name : initParams +# Datatype : String +# Description : comma separated list of name=value pairs. 
Overrides initialization parameters defined in templates +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +initParams= + +#----------------------------------------------------------------------------- +# Name : sampleSchema +# Datatype : Boolean +# Description : Specifies whether or not to add the Sample Schemas to your database +# Valid values : TRUE \ FALSE +# Default value : FASLE +# Mandatory : No +#----------------------------------------------------------------------------- +sampleSchema= + +#----------------------------------------------------------------------------- +# Name : memoryPercentage +# Datatype : String +# Description : percentage of physical memory for Oracle +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +memoryPercentage= + +#----------------------------------------------------------------------------- +# Name : databaseType +# Datatype : String +# Description : used for memory distribution when memoryPercentage specified +# Valid values : MULTIPURPOSE|DATA_WAREHOUSING|OLTP +# Default value : MULTIPURPOSE +# Mandatory : NO +#----------------------------------------------------------------------------- +databaseType= + +#----------------------------------------------------------------------------- +# Name : automaticMemoryManagement +# Datatype : Boolean +# Description : flag to indicate Automatic Memory Management is used +# Valid values : TRUE/FALSE +# Default value : TRUE +# Mandatory : NO +#----------------------------------------------------------------------------- +automaticMemoryManagement= + +#----------------------------------------------------------------------------- +# Name : totalMemory +# Datatype : String +# Description : total memory in MB to allocate to Oracle +# Valid values : +# Default value : +# Mandatory : NO +#----------------------------------------------------------------------------- +totalMemory= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/enableRAC.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/enableRAC.sh new file mode 100755 index 0000000000..ea6147df01 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/enableRAC.sh @@ -0,0 +1,19 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Enable RAC feature in Oracle Software +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# shellcheck disable=SC1090 +source /home/"${DB_USER}"/.bashrc + +export ORACLE_HOME=${DB_HOME} +export PATH=${ORACLE_HOME}/bin:/bin:/sbin:/usr/bin +export LD_LIBRARY_PATH=${ORACLE_HOME}/lib:/lib:/usr/lib + +make -f "$DB_HOME"/rdbms/lib/ins_rdbms.mk rac_on +make -f "$DB_HOME"/rdbms/lib/ins_rdbms.mk ioracle diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/fixupPreq.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/fixupPreq.sh new file mode 100755 index 0000000000..978f0b49e6 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/fixupPreq.sh @@ -0,0 +1,30 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. 
+# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Setup the Linux kernel parameter inside the container. Note that some parameter need to be set on container host. +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. + +rpm -Uvh "$GRID_HOME/cv/rpm/cvuqdisk*" +echo "oracle soft nofile 1024" > /etc/security/limits.conf +echo "oracle hard nofile 65536" >> /etc/security/limits.conf +echo "oracle soft nproc 16384" >> /etc/security/limits.conf +echo "oracle hard nproc 16384" >> /etc/security/limits.conf +echo "oracle soft stack 10240" >> /etc/security/limits.conf +echo "oracle hard stack 32768" >> /etc/security/limits.conf +echo "oracle hard memlock 134217728" >> /etc/security/limits.conf +echo "oracle soft memlock 134217728" >> /etc/security/limits.conf +echo "grid soft nofile 1024" >> /etc/security/limits.conf +echo "grid hard nofile 65536" >> /etc/security/limits.conf +echo "grid soft nproc 16384" >> /etc/security/limits.conf +echo "grid hard nproc 16384" >> /etc/security/limits.conf +echo "grid soft stack 10240" >> /etc/security/limits.conf +echo "grid hard stack 32768" >> /etc/security/limits.conf +echo "grid hard memlock 134217728" >> /etc/security/limits.conf +echo "grid soft memlock 134217728" >> /etc/security/limits.conf +echo "ulimit -S -s 10240" >> /home/grid/.bashrc +echo "ulimit -S -s 10240" >> /home/oracle/.bashrc diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/functions.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/functions.sh new file mode 100755 index 0000000000..5d3f26bfaf --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/functions.sh @@ -0,0 +1,196 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Common Function File +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +export logfile=/tmp/orod.log +export logdir=/tmp +export STD_OUT_FILE="/proc/1/fd/1" +export STD_ERR_FILE="/proc/1/fd/2" +export TOP_PID=$$ + +###### Function Related to printing messages and exit the script if error occurred ################## +error_exit() { + # shellcheck disable=SC2155 +local NOW=$(date +"%m-%d-%Y %T %Z") + # Display error message and exit +# echo "${PROGNAME}: ${1:-"Unknown Error"}" 1>&2 + echo "${NOW} : ${PROGNAME}: ${1:-"Unknown Error"}" | tee -a $logfile > $STD_OUT_FILE + kill -s TERM $TOP_PID +} + +print_message () +{ + # shellcheck disable=SC2155 + local NOW=$(date +"%m-%d-%Y %T %Z") + # Display message and return + echo "${NOW} : ${PROGNAME} : ${1:-"Unknown Message"}" | tee -a $logfile > $STD_OUT_FILE + return $? +} + +##################################################################################################### + +####### Function related to IP Checks ############################################################### + +validating_env_vars () +{ +local stat=3 +local ip="${1}" +local alive="${2}" + +print_message "checking IP is in correct format such as xxx.xxx.xxx.xxx" + +if valid_ip "$ip"; then + print_message "IP $ip format check passed!" +else + error_exit "IP $ip is not in correct format..please check!" +fi + +# Checking if Host is alive + +if [ "${alive}" == "true" ]; then + +print_message "Checking if IP is pingable or not!" + +if host_alive "$ip"; then + print_message "IP $ip is pingable ...check passed!" 
+else + error_exit "IP $ip is not pingable..check failed!" +fi + +else + +print_message "Checking if IP is pingable or not!" + +if host_alive "$ip"; then + error_exit "IP $ip is already allocated...check failed!" +else + print_message "IP $ip is not pingable..check passed!" +fi + +fi +} + +check_interface () +{ +local ethcard=$1 +local output + +ip link show | grep "$ethcard" + +output=$? + + if [ $output -eq 0 ];then + return 0 + else + return 1 + fi +} + +valid_ip() +{ + local ip=$1 + local stat=1 + if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then + OIFS=$IFS + IFS='.' + # shellcheck disable=SC2206 + ip=($ip) + IFS=$OIFS + [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \ + && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]] + stat=$? + fi + return $stat +} + +host_alive() +{ + + local ip_or_hostname=$1 + local stat=1 +ping -c 1 -W 1 "$ip_or_hostname" >& /dev/null +# shellcheck disable=SC2181 +if [ $? -eq 0 ]; then + stat=0 + return $stat +else + stat=1 + return $stat +fi + +} + +resolveip(){ + + local host="$1" + if [ -z "$host" ] + then + return 1 + else + # shellcheck disable=SC2155,SC2178 + local ip=$( getent hosts "$host" | awk '{print $1}' ) + # shellcheck disable=SC2128 + if [ -z "$ip" ] + then + # shellcheck disable=SC2178 + ip=$( dig +short "$host" ) + # shellcheck disable=SC2128 + if [ -z "$ip" ] + then + print_message "unable to resolve '$host'" + return 1 + else + # shellcheck disable=SC2128 + print_message "$ip" + return 0 + fi + else + # shellcheck disable=SC2128 + print_message "$ip" + return 0 + fi + fi +} + +################################################################################################################## + +############################################Match an Array element####################### +isStringExist () +{ +local checkthestring="$1" +local stringtocheck="$2" +local stat=1 + +IFS=', ' read -r -a string_array <<< "$checkthestring" + +for ((i=0; i < ${#string_array[@]}; ++i)); do + if [ "${stringtocheck}" == "${string_array[i]}" ]; then + stat=0 + fi +done + return $stat +} + + +######################################################################################### + + +##################################################Password function########################## + +setpasswd () +{ + +local user=$1 +local pass=$2 +echo "$pass" | passwd "$user" --stdin +} + +############################################################################################## diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/grid.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/grid.rsp new file mode 100644 index 0000000000..4baedc896d --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/grid.rsp @@ -0,0 +1,672 @@ +############################################################################### +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. 
## +## ## +############################################################################### + +############################################################################### +## ## +## Instructions to fill this response file ## +## To register and configure 'Grid Infrastructure for Cluster' ## +## - Fill out sections A,B,C,D,E,F and G ## +## - Fill out section G if OCR and voting disk should be placed on ASM ## +## ## +## To register and configure 'Grid Infrastructure for Standalone server' ## +## - Fill out sections A,B and G ## +## ## +## To register software for 'Grid Infrastructure' ## +## - Fill out sections A,B and D ## +## - Provide the cluster nodes in section D when choosing CRS_SWONLY as ## +## installation option in section A ## +## ## +## To upgrade clusterware and/or Automatic storage management of earlier ## +## releases ## +## - Fill out sections A,B,C,D and H ## +## ## +## To add more nodes to the cluster ## +## - Fill out sections A and D ## +## - Provide the cluster nodes in section D when choosing CRS_ADDNODE as ## +## installation option in section A ## +## ## +############################################################################### + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v18.0.0 + +############################################################################### +# # +# SECTION A - BASIC # +# # +############################################################################### + + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION=###INVENTORY### + +#------------------------------------------------------------------------------- +# Specify the installation option. +# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY +# - CRS_CONFIG : To register home and configure Grid Infrastructure for cluster +# - HA_CONFIG : To register home and configure Grid Infrastructure for stand alone server +# - UPGRADE : To register home and upgrade clusterware software of earlier release +# - CRS_SWONLY : To register Grid Infrastructure Software home (can be configured for cluster +# or stand alone server later) +# - HA_SWONLY : To register Grid Infrastructure Software home (can be configured for stand +# alone server later. This is only supported on Windows.) +# - CRS_ADDNODE : To add more nodes to the cluster +# - CRS_DELETE_NODE : To delete nodes to the cluster +#------------------------------------------------------------------------------- +oracle.install.option=CRS_CONFIG + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE=###GRID_BASE### + +################################################################################ +# # +# SECTION B - GROUPS # +# # +# The following three groups need to be assigned for all GI installations. # +# OSDBA and OSOPER can be the same or different. OSASM must be different # +# than the other two. 
# +# The value to be specified for OSDBA, OSOPER and OSASM group is only for # +# Unix based Operating System. # +# These groups are not required for upgrades, as they will be determined # +# from the Oracle home to upgrade. # +# # +################################################################################ +#------------------------------------------------------------------------------- +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.asm.OSDBA=dba + +#------------------------------------------------------------------------------- +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE. +#------------------------------------------------------------------------------- +oracle.install.asm.OSOPER= + +#------------------------------------------------------------------------------- +# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This +# must be different than the previous two. +#------------------------------------------------------------------------------- +oracle.install.asm.OSASM=asmadmin + +################################################################################ +# # +# SECTION C - SCAN # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the type of SCAN configuration for the cluster +# Allowed values : LOCAL_SCAN and SHARED_SCAN +#------------------------------------------------------------------------------- +oracle.install.crs.config.scanType=###SCAN_TYPE### + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_SCAN is being configured for cluster +# Specify the path to the SCAN client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.SCANClientDataFile=###SHARED_SCAN_FILE### + +#------------------------------------------------------------------------------- +# Specify a name for SCAN +# Applicable if LOCAL_SCAN is being configured for the cluster +# If you choose to configure the cluster with GNS with Auto assigned Node VIPs(DHCP),then the scanName should be specified in the format of 'SCAN name.Cluster name.GNS sub-domain' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.scanName=###SCAN_NAME### + +#------------------------------------------------------------------------------- +# Specify a unused port number for SCAN service +#------------------------------------------------------------------------------- + +oracle.install.crs.config.gpnp.scanPort=###SCAN_PORT### + +################################################################################ +# # +# SECTION D - CLUSTER & GNS # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the required cluster configuration +# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP +#------------------------------------------------------------------------------- +oracle.install.crs.config.ClusterConfiguration=###CLUSTER_TYPE### + 
+#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure the cluster as Extended, else +# specify 'false' +# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.configureAsExtendedCluster=false + + +#------------------------------------------------------------------------------- +# Specify the Member Cluster Manifest file +# +# Applicable only for MEMBERDB and MEMBERAPP cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.memberClusterManifestFile=###MEMBERDB_FILE### + +#------------------------------------------------------------------------------- +# Specify a name for the Cluster you are creating. +# +# The maximum length allowed for clustername is 15 characters. The name can be +# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-) +# and underscore(_). +# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterName=###CLUSTER_NAME### + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration. +# Specify 'true' if you would like to configure Grid Naming Service(GNS), else +# specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.configureGNS=###CONFIGURE_GNS### + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS. +# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP +# , else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.autoConfigureClusterNodeVIP=###DHCP_CONF### + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure GNS. +# Specify the type of GNS configuration for cluster +# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS +# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration. 
+#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsOption=###GNS_OPTIONS### + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_GNS is being configured for cluster +# Specify the path to the GNS client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsClientDataFile= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to +# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS +# Specify the GNS subdomain and an unused virtual hostname for GNS service +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsSubDomain=###GNS_SUBDOMAIN### +oracle.install.crs.config.gpnp.gnsVIPAddress=###GNSVIP_HOSTNAME### + +#------------------------------------------------------------------------------- +# Specify the list of sites - only if configuring an Extended Cluster +#------------------------------------------------------------------------------- +oracle.install.crs.config.sites= + +#------------------------------------------------------------------------------- +# Specify the list of nodes that have to be configured to be part of the cluster. +# +# The list should a comma-separated list of tuples. Each tuple should be a +# colon-separated string that contains +# - 1 field if you have chosen CRS_SWONLY as installation option, or +# - 1 field if configuring an Application Cluster, or +# - 3 fields if configuring a Flex Cluster +# - 3 fields if adding more nodes to the configured cluster, or +# - 4 fields if configuring an Extended Cluster +# +# The fields should be ordered as follows: +# 1. The first field should be the public node name. +# 2. The second field should be the virtual host name +# (Should be specified as AUTO if you have chosen 'auto configure for VIP' +# i.e. autoConfigureClusterNodeVIP=true) +# 3. The third field indicates the role of node (HUB,LEAF). This has to +# be provided only if Flex Cluster is being configured. +# For Extended Cluster only HUB should be specified for all nodes +# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster. 
+# The 2nd and 3rd fields are not applicable if you have chosen CRS_SWONLY as installation option +# The 2nd and 3rd fields are not applicable if configuring an Application Cluster +# +# Examples +# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2 +# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF +# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2 +# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF +# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2 +# You can specify a range of nodes in the tuple using colon separated fields of format +# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node +# +#------------------------------------------------------------------------------- +#oracle.install.crs.config.clusterNodes=###HOSTNAME###:###HOSTNAME_VIP###:HUB +oracle.install.crs.config.clusterNodes=###CRS_CONFIG_NODES### + +#------------------------------------------------------------------------------- +# The value should be a comma separated strings where each string is as shown below +# InterfaceName:SubnetAddress:InterfaceType +# where InterfaceType can be either "1", "2", "3", "4", or "5" +# InterfaceType stand for the following values +# - 1 : PUBLIC +# - 2 : PRIVATE +# - 3 : DO NOT USE +# - 4 : ASM +# - 5 : ASM & PRIVATE +# +# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3 +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.networkInterfaceList=###NETWORK_STRING### + +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup to store GIMR data. +# Specify 'true' if you would like to separate GIMR data with clusterware data, +# else specify 'false' +# Value should be 'true' for DOMAIN cluster configurations +# Value can be true/false for STANDALONE cluster configurations. 
+#------------------------------------------------------------------------------ +oracle.install.asm.configureGIMRDataDG=###GIMR_DG_FLAG### + +################################################################################ +# # +# SECTION E - STORAGE # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting +# Disks files +# - FLEX_ASM_STORAGE +# - CLIENT_ASM_STORAGE +# +# Applicable only for MEMBERDB cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.storageOption=###STORAGE_OPTIONS_FOR_MEMBERDB### +################################################################################ +# # +# SECTION F - IPMI # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure Intelligent Power Management interface +# (IPMI), else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.useIPMI=false + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure IPMI +# i.e. oracle.install.crs.config.useIPMI=true +# Specify the username and password for using IPMI service +#------------------------------------------------------------------------------- +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +################################################################################ +# # +# SECTION G - ASM # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# ASM Storage Type +# Allowed values are : ASM and ASM_ON_NAS +# ASM_ON_NAS applicable only if +# oracle.install.crs.config.ClusterConfiguration=STANDALONE +#------------------------------------------------------------------------------- +oracle.install.asm.storageOption=ASM + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing OCR/VDSK +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store OCR/VDSK files +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.ocrLocation= +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup on NAS to store GIMR data +# Specify 'true' if you would like to separate GIMR data with clusterware data, else +# specify 'false' +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------ +oracle.install.asmOnNAS.configureGIMRDataDG=false + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing GIMR data +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store the GIMR database +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +# and oracle.install.asmOnNAS.configureGIMRDataDG=true 
+#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.gimrLocation= + +#------------------------------------------------------------------------------- +# Password for SYS user of Oracle ASM +#------------------------------------------------------------------------------- +oracle.install.asm.SYSASMPassword=###PASSWORD### + +#------------------------------------------------------------------------------- +# The ASM DiskGroup +# +# Example: oracle.install.asm.diskGroup.name=data +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.name=###DB_ASM_DISKGROUP### + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.diskGroup.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.redundancy=EXTERNAL + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.diskGroup.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.AUSize=4 + +#------------------------------------------------------------------------------- +# Failure Groups for the disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. +# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create a ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/asm-disk1,,/dev/asm-disk2,,/dev/asm-disk3, +oracle.install.asm.diskGroup.disksWithFailureGroupNames=###ASM_DISKGROUP_FG_DISKS### + +#------------------------------------------------------------------------------- +# List of disks to create a ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.disks=/dev/asm-disk1,/dev/asm-disk2,/dev/asm-disk3 +oracle.install.asm.diskGroup.disks=###ASM_DISKGROUP_DISKS### + 
+#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# The disk discovery string to be used to discover the disks used create a ASM DiskGroup +# +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/* +# For Windows based Operating System: +# oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK* +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm* +oracle.install.asm.diskGroup.diskDiscoveryString=###ASM_DISCOVERY_STRING### + +#------------------------------------------------------------------------------- +# Password for ASMSNMP account +# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances +#------------------------------------------------------------------------------- +oracle.install.asm.monitorPassword=###PASSWORD### + +#------------------------------------------------------------------------------- +# GIMR Storage data ASM DiskGroup +# Applicable only when +# oracle.install.asm.configureGIMRDataDG=true +# Example: oracle.install.asm.GIMRDG.name=MGMT +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.name=###GIMR_DG_NAME### + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.gimrDG.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.redundancy=###GIMR_DG_REDUNDANCY### + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.gimrDG.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.AUSize=1 + +#------------------------------------------------------------------------------- +# Failure Groups for the GIMR storage data ASM disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. 
+# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.FailureGroups=###GIMR_DG_FAILURE_GROUP### + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create GIMR data ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disksWithFailureGroupNames=###GIMR_DISKGROUP_FG_DISKS### + +#------------------------------------------------------------------------------- +# List of disks to create GIMR data ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disks=###GIMR_DISKGROUP_DISKS### + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# Configure AFD - ASM Filter Driver +# Applicable only for FLEX_ASM_STORAGE option +# Specify 'true' if you want to configure AFD, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.asm.configureAFD=false +#------------------------------------------------------------------------------- +# Configure RHPS - Rapid Home Provisioning Service +# Applicable only for DOMAIN cluster configuration +# Specify 'true' if you want to configure RHP service, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.configureRHPS=false + +################################################################################ +# # +# SECTION H - UPGRADE # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify whether to ignore down nodes during upgrade operation. 
+# Value should be 'true' to ignore down nodes otherwise specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.ignoreDownNodes=false +################################################################################ +# # +# MANAGEMENT OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing Oracle Grid Infrastructure +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +# 2. NONE -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +#------------------------------------------------------------------------------- +oracle.install.config.managementOption=NONE + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsPort=0 + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminPassword= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.executeRootScript=false + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. 
+# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.configMethod=ROOT +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoUserName= +#-------------------------------------------------------------------------------------- +# Specify the nodes batch map. +# +# This should be a comma separated list of node:batch pairs. +# During upgrade, you can sequence the automatic execution of root scripts +# by pooling the nodes into batches. +# A maximum of three batches can be specified. +# Installer will execute the root scripts on all the nodes in one batch before +# proceeding to next batch. +# Root script execution on the local node must be in Batch 1. +# Only one type of node role can be used for each batch. +# Root script execution should be done first in all HUB nodes and then, when +# existent, in all the LEAF nodes. +# +# Examples: +# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3 +# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2 +# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3 +# +# Applicable only for UPGRADE install option. +#-------------------------------------------------------------------------------------- +oracle.install.crs.config.batchinfo= +################################################################################ +# # +# APPLICATION CLUSTER OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the Virtual hostname to configure virtual access for your Application +# The value to be specified for Virtual hostname is optional. +#------------------------------------------------------------------------------- +oracle.install.crs.app.applicationAddress= +################################################################################# +# # +# DELETE NODE OPTIONS # +# # +################################################################################# + +#-------------------------------------------------------------------------------- +# Specify the node names to delete nodes from cluster. +# Delete node will be performed only for the remote nodes from the cluster. 
+#-------------------------------------------------------------------------------- +oracle.install.crs.deleteNode.nodes= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/grid1.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/grid1.rsp new file mode 100644 index 0000000000..ebfc119b01 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/grid1.rsp @@ -0,0 +1,671 @@ +############################################################################### +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +## ## +############################################################################### + +############################################################################### +## ## +## Instructions to fill this response file ## +## To register and configure 'Grid Infrastructure for Cluster' ## +## - Fill out sections A,B,C,D,E,F and G ## +## - Fill out section G if OCR and voting disk should be placed on ASM ## +## ## +## To register and configure 'Grid Infrastructure for Standalone server' ## +## - Fill out sections A,B and G ## +## ## +## To register software for 'Grid Infrastructure' ## +## - Fill out sections A,B and D ## +## - Provide the cluster nodes in section D when choosing CRS_SWONLY as ## +## installation option in section A ## +## ## +## To upgrade clusterware and/or Automatic storage management of earlier ## +## releases ## +## - Fill out sections A,B,C,D and H ## +## ## +## To add more nodes to the cluster ## +## - Fill out sections A and D ## +## - Provide the cluster nodes in section D when choosing CRS_ADDNODE as ## +## installation option in section A ## +## ## +############################################################################### + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v18.0.0 + +############################################################################### +# # +# SECTION A - BASIC # +# # +############################################################################### + + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION=/u01/app/oraInventory + +#------------------------------------------------------------------------------- +# Specify the installation option. 
+# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY +# - CRS_CONFIG : To register home and configure Grid Infrastructure for cluster +# - HA_CONFIG : To register home and configure Grid Infrastructure for stand alone server +# - UPGRADE : To register home and upgrade clusterware software of earlier release +# - CRS_SWONLY : To register Grid Infrastructure Software home (can be configured for cluster +# or stand alone server later) +# - HA_SWONLY : To register Grid Infrastructure Software home (can be configured for stand +# alone server later. This is only supported on Windows.) +# - CRS_ADDNODE : To add more nodes to the cluster +# - CRS_DELETE_NODE : To delete nodes to the cluster +#------------------------------------------------------------------------------- +oracle.install.option=CRS_CONFIG + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE=/u01/app/grid + +################################################################################ +# # +# SECTION B - GROUPS # +# # +# The following three groups need to be assigned for all GI installations. # +# OSDBA and OSOPER can be the same or different. OSASM must be different # +# than the other two. # +# The value to be specified for OSDBA, OSOPER and OSASM group is only for # +# Unix based Operating System. # +# These groups are not required for upgrades, as they will be determined # +# from the Oracle home to upgrade. # +# # +################################################################################ +#------------------------------------------------------------------------------- +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.asm.OSDBA=dba + +#------------------------------------------------------------------------------- +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE. +#------------------------------------------------------------------------------- +oracle.install.asm.OSOPER= + +#------------------------------------------------------------------------------- +# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This +# must be different than the previous two. 
+#------------------------------------------------------------------------------- +oracle.install.asm.OSASM=asmadmin + +################################################################################ +# # +# SECTION C - SCAN # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the type of SCAN configuration for the cluster +# Allowed values : LOCAL_SCAN and SHARED_SCAN +#------------------------------------------------------------------------------- +oracle.install.crs.config.scanType=LOCAL_SCAN + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_SCAN is being configured for cluster +# Specify the path to the SCAN client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.SCANClientDataFile= + +#------------------------------------------------------------------------------- +# Specify a name for SCAN +# Applicable if LOCAL_SCAN is being configured for the cluster +# If you choose to configure the cluster with GNS with Auto assigned Node VIPs(DHCP),then the scanName should be specified in the format of 'SCAN name.Cluster name.GNS sub-domain' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.scanName=racnode-scan + +#------------------------------------------------------------------------------- +# Specify a unused port number for SCAN service +#------------------------------------------------------------------------------- + +oracle.install.crs.config.gpnp.scanPort=1521 + +################################################################################ +# # +# SECTION D - CLUSTER & GNS # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the required cluster configuration +# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP +#------------------------------------------------------------------------------- +oracle.install.crs.config.ClusterConfiguration=STANDALONE + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure the cluster as Extended, else +# specify 'false' +# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.configureAsExtendedCluster=false + + +#------------------------------------------------------------------------------- +# Specify the Member Cluster Manifest file +# +# Applicable only for MEMBERDB and MEMBERAPP cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.memberClusterManifestFile= + +#------------------------------------------------------------------------------- +# Specify a name for the Cluster you are creating. +# +# The maximum length allowed for clustername is 15 characters. The name can be +# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-) +# and underscore(_). 
+# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterName=rac01cluster + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration. +# Specify 'true' if you would like to configure Grid Naming Service(GNS), else +# specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.configureGNS= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS. +# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP +# , else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.autoConfigureClusterNodeVIP=false + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure GNS. +# Specify the type of GNS configuration for cluster +# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS +# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration. +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsOption= + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_GNS is being configured for cluster +# Specify the path to the GNS client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsClientDataFile= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to +# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS +# Specify the GNS subdomain and an unused virtual hostname for GNS service +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= + +#------------------------------------------------------------------------------- +# Specify the list of sites - only if configuring an Extended Cluster +#------------------------------------------------------------------------------- +oracle.install.crs.config.sites= + +#------------------------------------------------------------------------------- +# Specify the list of nodes that have to be configured to be part of the cluster. +# +# The list should a comma-separated list of tuples. Each tuple should be a +# colon-separated string that contains +# - 1 field if you have chosen CRS_SWONLY as installation option, or +# - 1 field if configuring an Application Cluster, or +# - 3 fields if configuring a Flex Cluster +# - 3 fields if adding more nodes to the configured cluster, or +# - 4 fields if configuring an Extended Cluster +# +# The fields should be ordered as follows: +# 1. The first field should be the public node name. +# 2. The second field should be the virtual host name +# (Should be specified as AUTO if you have chosen 'auto configure for VIP' +# i.e. autoConfigureClusterNodeVIP=true) +# 3. The third field indicates the role of node (HUB,LEAF). This has to +# be provided only if Flex Cluster is being configured. 
+# For Extended Cluster only HUB should be specified for all nodes +# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster. +# The 2nd and 3rd fields are not applicable if you have chosen CRS_SWONLY as installation option +# The 2nd and 3rd fields are not applicable if configuring an Application Cluster +# +# Examples +# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2 +# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF +# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2 +# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF +# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2 +# You can specify a range of nodes in the tuple using colon separated fields of format +# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterNodes=racnode1:racnode1-vip:HUB,racnode2:racnode2-vip:HUB + +#------------------------------------------------------------------------------- +# The value should be a comma separated strings where each string is as shown below +# InterfaceName:SubnetAddress:InterfaceType +# where InterfaceType can be either "1", "2", "3", "4", or "5" +# InterfaceType stand for the following values +# - 1 : PUBLIC +# - 2 : PRIVATE +# - 3 : DO NOT USE +# - 4 : ASM +# - 5 : ASM & PRIVATE +# +# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3 +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.networkInterfaceList=eth0:192.168.17.0:5,eth1:172.16.1.0:1 + +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup to store GIMR data. +# Specify 'true' if you would like to separate GIMR data with clusterware data, +# else specify 'false' +# Value should be 'true' for DOMAIN cluster configurations +# Value can be true/false for STANDALONE cluster configurations. 
+#------------------------------------------------------------------------------ +oracle.install.asm.configureGIMRDataDG=false + +################################################################################ +# # +# SECTION E - STORAGE # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting +# Disks files +# - FLEX_ASM_STORAGE +# - CLIENT_ASM_STORAGE +# +# Applicable only for MEMBERDB cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.storageOption= +################################################################################ +# # +# SECTION F - IPMI # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure Intelligent Power Management interface +# (IPMI), else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.useIPMI=false + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure IPMI +# i.e. oracle.install.crs.config.useIPMI=true +# Specify the username and password for using IPMI service +#------------------------------------------------------------------------------- +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +################################################################################ +# # +# SECTION G - ASM # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# ASM Storage Type +# Allowed values are : ASM and ASM_ON_NAS +# ASM_ON_NAS applicable only if +# oracle.install.crs.config.ClusterConfiguration=STANDALONE +#------------------------------------------------------------------------------- +oracle.install.asm.storageOption=ASM + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing OCR/VDSK +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store OCR/VDSK files +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.ocrLocation= +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup on NAS to store GIMR data +# Specify 'true' if you would like to separate GIMR data with clusterware data, else +# specify 'false' +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------ +oracle.install.asmOnNAS.configureGIMRDataDG=false + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing GIMR data +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store the GIMR database +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +# and oracle.install.asmOnNAS.configureGIMRDataDG=true 
+#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.gimrLocation= + +#------------------------------------------------------------------------------- +# Password for SYS user of Oracle ASM +#------------------------------------------------------------------------------- +oracle.install.asm.SYSASMPassword=Oracle_12c + +#------------------------------------------------------------------------------- +# The ASM DiskGroup +# +# Example: oracle.install.asm.diskGroup.name=data +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.name=DATA + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.diskGroup.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.redundancy=EXTERNAL + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.diskGroup.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.AUSize=4 + +#------------------------------------------------------------------------------- +# Failure Groups for the disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. +# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create a ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/asm-disk1,,/dev/asm-disk2,,/dev/asm-disk3, +oracle.install.asm.diskGroup.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create a ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.disks=/dev/asm-disk1,/dev/asm-disk2,/dev/asm-disk3 +oracle.install.asm.diskGroup.disks=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img + 
+#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# The disk discovery string to be used to discover the disks used create a ASM DiskGroup +# +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/* +# For Windows based Operating System: +# oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK* +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm* +oracle.install.asm.diskGroup.diskDiscoveryString=/oradata/asm_* + +#------------------------------------------------------------------------------- +# Password for ASMSNMP account +# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances +#------------------------------------------------------------------------------- +oracle.install.asm.monitorPassword=Oracle_12c + +#------------------------------------------------------------------------------- +# GIMR Storage data ASM DiskGroup +# Applicable only when +# oracle.install.asm.configureGIMRDataDG=true +# Example: oracle.install.asm.GIMRDG.name=MGMT +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.name= + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.gimrDG.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.redundancy= + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.gimrDG.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.AUSize=1 + +#------------------------------------------------------------------------------- +# Failure Groups for the GIMR storage data ASM disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. 
+# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create GIMR data ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create GIMR data ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disks= + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# Configure AFD - ASM Filter Driver +# Applicable only for FLEX_ASM_STORAGE option +# Specify 'true' if you want to configure AFD, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.asm.configureAFD=false +#------------------------------------------------------------------------------- +# Configure RHPS - Rapid Home Provisioning Service +# Applicable only for DOMAIN cluster configuration +# Specify 'true' if you want to configure RHP service, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.configureRHPS=false + +################################################################################ +# # +# SECTION H - UPGRADE # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify whether to ignore down nodes during upgrade operation. 
+# Value should be 'true' to ignore down nodes otherwise specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.ignoreDownNodes=false +################################################################################ +# # +# MANAGEMENT OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing Oracle Grid Infrastructure +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +# 2. NONE -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +#------------------------------------------------------------------------------- +oracle.install.config.managementOption=NONE + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsPort=0 + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminPassword= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.executeRootScript=false + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. 
+# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.configMethod=ROOT +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoUserName= +#-------------------------------------------------------------------------------------- +# Specify the nodes batch map. +# +# This should be a comma separated list of node:batch pairs. +# During upgrade, you can sequence the automatic execution of root scripts +# by pooling the nodes into batches. +# A maximum of three batches can be specified. +# Installer will execute the root scripts on all the nodes in one batch before +# proceeding to next batch. +# Root script execution on the local node must be in Batch 1. +# Only one type of node role can be used for each batch. +# Root script execution should be done first in all HUB nodes and then, when +# existent, in all the LEAF nodes. +# +# Examples: +# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3 +# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2 +# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3 +# +# Applicable only for UPGRADE install option. +#-------------------------------------------------------------------------------------- +oracle.install.crs.config.batchinfo= +################################################################################ +# # +# APPLICATION CLUSTER OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the Virtual hostname to configure virtual access for your Application +# The value to be specified for Virtual hostname is optional. +#------------------------------------------------------------------------------- +oracle.install.crs.app.applicationAddress= +################################################################################# +# # +# DELETE NODE OPTIONS # +# # +################################################################################# + +#-------------------------------------------------------------------------------- +# Specify the node names to delete nodes from cluster. +# Delete node will be performed only for the remote nodes from the cluster. 
+#-------------------------------------------------------------------------------- +oracle.install.crs.deleteNode.nodes= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/grid_addnode.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/grid_addnode.rsp new file mode 100644 index 0000000000..d1cbf4fab6 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/grid_addnode.rsp @@ -0,0 +1,672 @@ +############################################################################### +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +## ## +############################################################################### + +############################################################################### +## ## +## Instructions to fill this response file ## +## To register and configure 'Grid Infrastructure for Cluster' ## +## - Fill out sections A,B,C,D,E,F and G ## +## - Fill out section G if OCR and voting disk should be placed on ASM ## +## ## +## To register and configure 'Grid Infrastructure for Standalone server' ## +## - Fill out sections A,B and G ## +## ## +## To register software for 'Grid Infrastructure' ## +## - Fill out sections A,B and D ## +## - Provide the cluster nodes in section D when choosing CRS_SWONLY as ## +## installation option in section A ## +## ## +## To upgrade clusterware and/or Automatic storage management of earlier ## +## releases ## +## - Fill out sections A,B,C,D and H ## +## ## +## To add more nodes to the cluster ## +## - Fill out sections A and D ## +## - Provide the cluster nodes in section D when choosing CRS_ADDNODE as ## +## installation option in section A ## +## ## +############################################################################### + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0 + +############################################################################### +# # +# SECTION A - BASIC # +# # +############################################################################### + + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION=###INVENTORY### + +#------------------------------------------------------------------------------- +# Specify the installation option. 
+# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY
+# - CRS_CONFIG : To register home and configure Grid Infrastructure for cluster
+# - HA_CONFIG : To register home and configure Grid Infrastructure for stand alone server
+# - UPGRADE : To register home and upgrade clusterware software of earlier release
+# - CRS_SWONLY : To register Grid Infrastructure Software home (can be configured for cluster
+# or stand alone server later)
+# - HA_SWONLY : To register Grid Infrastructure Software home (can be configured for stand
+# alone server later. This is only supported on Windows.)
+# - CRS_ADDNODE : To add more nodes to the cluster
+# - CRS_DELETE_NODE : To delete nodes from the cluster
+#-------------------------------------------------------------------------------
+oracle.install.option=CRS_ADDNODE
+
+#-------------------------------------------------------------------------------
+# Specify the complete path of the Oracle Base.
+#-------------------------------------------------------------------------------
+ORACLE_BASE=###GRID_BASE###
+
+################################################################################
+# #
+# SECTION B - GROUPS #
+# #
+# The following three groups need to be assigned for all GI installations. #
+# OSDBA and OSOPER can be the same or different. OSASM must be different #
+# than the other two. #
+# The value to be specified for OSDBA, OSOPER and OSASM group is only for #
+# Unix based Operating System. #
+# These groups are not required for upgrades, as they will be determined #
+# from the Oracle home to upgrade. #
+# #
+################################################################################
+#-------------------------------------------------------------------------------
+# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges.
+#-------------------------------------------------------------------------------
+oracle.install.asm.OSDBA=asmdba
+
+#-------------------------------------------------------------------------------
+# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges.
+# The value to be specified for OSOPER group is optional.
+# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE.
+#-------------------------------------------------------------------------------
+oracle.install.asm.OSOPER=asmoper
+
+#-------------------------------------------------------------------------------
+# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This
+# must be different than the previous two.
+#-------------------------------------------------------------------------------
+oracle.install.asm.OSASM=asmadmin
+
+################################################################################
+# #
+# SECTION C - SCAN #
+# #
+################################################################################
+#-------------------------------------------------------------------------------
+# Specify the type of SCAN configuration for the cluster
+# Allowed values : LOCAL_SCAN and SHARED_SCAN
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.scanType=
+
+#-------------------------------------------------------------------------------
+# Applicable only if SHARED_SCAN is being configured for cluster
+# Specify the path to the SCAN client data file
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.SCANClientDataFile=
+
+#-------------------------------------------------------------------------------
+# Specify a name for SCAN
+# Applicable if LOCAL_SCAN is being configured for the cluster
+# If you choose to configure the cluster with GNS with Auto assigned Node VIPs(DHCP), then the scanName should be specified in the format of 'SCAN name.Cluster name.GNS sub-domain'
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.scanName=
+
+#-------------------------------------------------------------------------------
+# Specify an unused port number for SCAN service
+#-------------------------------------------------------------------------------
+
+oracle.install.crs.config.gpnp.scanPort=
+
+################################################################################
+# #
+# SECTION D - CLUSTER & GNS #
+# #
+################################################################################
+#-------------------------------------------------------------------------------
+# Specify the required cluster configuration
+# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.ClusterConfiguration=
+
+#-------------------------------------------------------------------------------
+# Specify 'true' if you would like to configure the cluster as Extended, else
+# specify 'false'
+#
+# Applicable only for STANDALONE and DOMAIN cluster configuration
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.configureAsExtendedCluster=false
+
+
+#-------------------------------------------------------------------------------
+# Specify the Member Cluster Manifest file
+#
+# Applicable only for MEMBERDB and MEMBERAPP cluster configuration
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.memberClusterManifestFile=
+
+#-------------------------------------------------------------------------------
+# Specify a name for the Cluster you are creating.
+#
+# The maximum length allowed for clustername is 15 characters. The name can be
+# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-)
+# and underscore(_).
+#
+# Applicable only for STANDALONE and DOMAIN cluster configuration
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.clusterName=
+
+#-------------------------------------------------------------------------------
+# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration.
+# Specify 'true' if you would like to configure Grid Naming Service(GNS), else
+# specify 'false'
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.configureGNS=false
+
+#-------------------------------------------------------------------------------
+# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS.
+# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP
+# , else specify 'false'
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.autoConfigureClusterNodeVIP=false
+
+#-------------------------------------------------------------------------------
+# Applicable only if you choose to configure GNS.
+# Specify the type of GNS configuration for cluster
+# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS
+# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration.
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
+
+#-------------------------------------------------------------------------------
+# Applicable only if SHARED_GNS is being configured for cluster
+# Specify the path to the GNS client data file
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.gnsClientDataFile=
+
+#-------------------------------------------------------------------------------
+# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to
+# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
+# Specify the GNS subdomain and an unused virtual hostname for GNS service
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.gnsSubDomain=
+oracle.install.crs.config.gpnp.gnsVIPAddress=
+
+#-------------------------------------------------------------------------------
+# Specify the list of sites - only if configuring an Extended Cluster
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.sites=
+
+#-------------------------------------------------------------------------------
+# Specify the list of nodes that have to be configured to be part of the cluster.
+#
+# The list should be a comma-separated list of tuples. Each tuple should be a
+# colon-separated string that contains
+# - 1 field if you have chosen CRS_SWONLY as installation option, or
+# - 1 field if configuring an Application Cluster, or
+# - 3 fields if configuring a Flex Cluster
+# - 3 fields if adding more nodes to the configured cluster, or
+# - 4 fields if configuring an Extended Cluster
+#
+# The fields should be ordered as follows:
+# 1. The first field should be the public node name.
+# 2. The second field should be the virtual host name
+# (Should be specified as AUTO if you have chosen 'auto configure for VIP'
+# i.e. autoConfigureClusterNodeVIP=true)
+# 3. The third field indicates the role of node (HUB,LEAF). This has to
+# be provided only if Flex Cluster is being configured.
+# For Extended Cluster only HUB should be specified for all nodes
+# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster.
+# The 2nd and 3rd fields are not applicable if you have chosen CRS_SWONLY as installation option
+# The 2nd and 3rd fields are not applicable if configuring an Application Cluster
+#
+# Examples
+# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2
+# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
+# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2
+# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
+# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2
+# You can specify a range of nodes in the tuple using colon separated fields of format
+# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node
+#
+#-------------------------------------------------------------------------------
+#oracle.install.crs.config.clusterNodes=###PUBLIC_HOSTNAME###:###HOSTNAME_VIP###:HUB
+oracle.install.crs.config.clusterNodes=###CRS_CONFIG_NODES###
+
+#-------------------------------------------------------------------------------
+# The value should be a comma-separated list of strings where each string is as shown below
+# InterfaceName:SubnetAddress:InterfaceType
+# where InterfaceType can be either "1", "2", "3", "4", or "5"
+# InterfaceType stands for the following values
+# - 1 : PUBLIC
+# - 2 : PRIVATE
+# - 3 : DO NOT USE
+# - 4 : ASM
+# - 5 : ASM & PRIVATE
+#
+# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3
+#
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.networkInterfaceList=
+
+#------------------------------------------------------------------------------
+# Create a separate ASM DiskGroup to store GIMR data.
+# Specify 'true' if you would like to separate GIMR data from clusterware data,
+# else specify 'false'
+# Value should be 'true' for DOMAIN cluster configurations
+# Value can be true/false for STANDALONE cluster configurations.
+#------------------------------------------------------------------------------ +oracle.install.asm.configureGIMRDataDG=false + +################################################################################ +# # +# SECTION E - STORAGE # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting +# Disks files +# - FLEX_ASM_STORAGE +# - CLIENT_ASM_STORAGE +# +# Applicable only for MEMBERDB cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.storageOption= +################################################################################ +# # +# SECTION F - IPMI # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure Intelligent Power Management interface +# (IPMI), else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.useIPMI=false + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure IPMI +# i.e. oracle.install.crs.config.useIPMI=true +# Specify the username and password for using IPMI service +#------------------------------------------------------------------------------- +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +################################################################################ +# # +# SECTION G - ASM # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# ASM Storage Type +# Allowed values are : ASM and ASM_ON_NAS +# ASM_ON_NAS applicable only if +# oracle.install.crs.config.ClusterConfiguration=STANDALONE +#------------------------------------------------------------------------------- +oracle.install.asm.storageOption=ASM + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing OCR/VDSK +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store OCR/VDSK files +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.ocrLocation= +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup on NAS to store GIMR data +# Specify 'true' if you would like to separate GIMR data with clusterware data, else +# specify 'false' +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------ +oracle.install.asmOnNAS.configureGIMRDataDG=false + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing GIMR data +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store the GIMR database +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +# and oracle.install.asmOnNAS.configureGIMRDataDG=true 
+#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.gimrLocation= + +#------------------------------------------------------------------------------- +# Password for SYS user of Oracle ASM +#------------------------------------------------------------------------------- +oracle.install.asm.SYSASMPassword= + +#------------------------------------------------------------------------------- +# The ASM DiskGroup +# +# Example: oracle.install.asm.diskGroup.name=data +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.name=DATA + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.diskGroup.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.redundancy= + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.diskGroup.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.AUSize=1 + +#------------------------------------------------------------------------------- +# Failure Groups for the disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. +# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create a ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/asm-disk1,,/dev/asm-disk2,,/dev/asm-disk3, +oracle.install.asm.diskGroup.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create a ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.disks=/dev/asm-disk1,/dev/asm-disk2,/dev/asm-disk3 +oracle.install.asm.diskGroup.disks= + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. 
+# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# The disk discovery string to be used to discover the disks used create a ASM DiskGroup +# +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/* +# For Windows based Operating System: +# oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK* +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm* +oracle.install.asm.diskGroup.diskDiscoveryString= + +#------------------------------------------------------------------------------- +# Password for ASMSNMP account +# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances +#------------------------------------------------------------------------------- +oracle.install.asm.monitorPassword= + +#------------------------------------------------------------------------------- +# GIMR Storage data ASM DiskGroup +# Applicable only when +# oracle.install.asm.configureGIMRDataDG=true +# Example: oracle.install.asm.GIMRDG.name=MGMT +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.name= + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.gimrDG.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.redundancy= + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.gimrDG.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.AUSize=1 + +#------------------------------------------------------------------------------- +# Failure Groups for the GIMR storage data ASM disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. 
+# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create GIMR data ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create GIMR data ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disks= + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# Configure AFD - ASM Filter Driver +# Applicable only for FLEX_ASM_STORAGE option +# Specify 'true' if you want to configure AFD, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.asm.configureAFD=false +#------------------------------------------------------------------------------- +# Configure RHPS - Rapid Home Provisioning Service +# Applicable only for DOMAIN cluster configuration +# Specify 'true' if you want to configure RHP service, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.configureRHPS=false + +################################################################################ +# # +# SECTION H - UPGRADE # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify whether to ignore down nodes during upgrade operation. 
+# Value should be 'true' to ignore down nodes otherwise specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.ignoreDownNodes=false +################################################################################ +# # +# MANAGEMENT OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing Oracle Grid Infrastructure +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +# 2. NONE -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +#------------------------------------------------------------------------------- +oracle.install.config.managementOption=NONE + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsPort=0 + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminPassword= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.executeRootScript=false + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. 
+# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.configMethod=ROOT +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoUserName= +#-------------------------------------------------------------------------------------- +# Specify the nodes batch map. +# +# This should be a comma separated list of node:batch pairs. +# During upgrade, you can sequence the automatic execution of root scripts +# by pooling the nodes into batches. +# A maximum of three batches can be specified. +# Installer will execute the root scripts on all the nodes in one batch before +# proceeding to next batch. +# Root script execution on the local node must be in Batch 1. +# Only one type of node role can be used for each batch. +# Root script execution should be done first in all HUB nodes and then, when +# existent, in all the LEAF nodes. +# +# Examples: +# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3 +# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2 +# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3 +# +# Applicable only for UPGRADE install option. +#-------------------------------------------------------------------------------------- +oracle.install.crs.config.batchinfo= +################################################################################ +# # +# APPLICATION CLUSTER OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the Virtual hostname to configure virtual access for your Application +# The value to be specified for Virtual hostname is optional. +#------------------------------------------------------------------------------- +oracle.install.crs.app.applicationAddress= +################################################################################# +# # +# DELETE NODE OPTIONS # +# # +################################################################################# + +#-------------------------------------------------------------------------------- +# Specify the node names to delete nodes from cluster. +# Delete node will be performed only for the remote nodes from the cluster. 
+#-------------------------------------------------------------------------------- +oracle.install.crs.deleteNode.nodes= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/grid_sw_install_19c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/grid_sw_install_19c.rsp new file mode 100644 index 0000000000..88e205b33e --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/grid_sw_install_19c.rsp @@ -0,0 +1,668 @@ +############################################################################### +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +## ## +############################################################################### + +############################################################################### +## ## +## Instructions to fill this response file ## +## To register and configure 'Grid Infrastructure for Cluster' ## +## - Fill out sections A,B,C,D,E,F and G ## +## - Fill out section G if OCR and voting disk should be placed on ASM ## +## ## +## To register and configure 'Grid Infrastructure for Standalone server' ## +## - Fill out sections A,B and G ## +## ## +## To register software for 'Grid Infrastructure' ## +## - Fill out sections A,B and D ## +## - Provide the cluster nodes in section D when choosing CRS_SWONLY as ## +## installation option in section A ## +## ## +## To upgrade clusterware and/or Automatic storage management of earlier ## +## releases ## +## - Fill out sections A,B,C,D and H ## +## ## +## To add more nodes to the cluster ## +## - Fill out sections A and D ## +## - Provide the cluster nodes in section D when choosing CRS_ADDNODE as ## +## installation option in section A ## +## ## +############################################################################### + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0 + +############################################################################### +# # +# SECTION A - BASIC # +# # +############################################################################### + + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION=###INVENTORY### + +#------------------------------------------------------------------------------- +# Specify the installation option. 
+# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY
+# - CRS_CONFIG : To register home and configure Grid Infrastructure for cluster
+# - HA_CONFIG : To register home and configure Grid Infrastructure for stand alone server
+# - UPGRADE : To register home and upgrade clusterware software of earlier release
+# - CRS_SWONLY : To register Grid Infrastructure Software home (can be configured for cluster
+# or stand alone server later)
+# - HA_SWONLY : To register Grid Infrastructure Software home (can be configured for stand
+# alone server later. This is only supported on Windows.)
+# - CRS_ADDNODE : To add more nodes to the cluster
+# - CRS_DELETE_NODE : To delete nodes from the cluster
+#-------------------------------------------------------------------------------
+oracle.install.option=###INSTALL_TYPE###
+
+#-------------------------------------------------------------------------------
+# Specify the complete path of the Oracle Base.
+#-------------------------------------------------------------------------------
+ORACLE_BASE=###GRID_BASE###
+
+################################################################################
+# #
+# SECTION B - GROUPS #
+# #
+# The following three groups need to be assigned for all GI installations. #
+# OSDBA and OSOPER can be the same or different. OSASM must be different #
+# than the other two. #
+# The value to be specified for OSDBA, OSOPER and OSASM group is only for #
+# Unix based Operating System. #
+# These groups are not required for upgrades, as they will be determined #
+# from the Oracle home to upgrade. #
+# #
+################################################################################
+#-------------------------------------------------------------------------------
+# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges.
+#-------------------------------------------------------------------------------
+oracle.install.asm.OSDBA=asmdba
+
+#-------------------------------------------------------------------------------
+# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges.
+# The value to be specified for OSOPER group is optional.
+# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE.
+#-------------------------------------------------------------------------------
+oracle.install.asm.OSOPER=asmoper
+
+#-------------------------------------------------------------------------------
+# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This
+# must be different than the previous two.
+#-------------------------------------------------------------------------------
+oracle.install.asm.OSASM=asmadmin
+
+################################################################################
+# #
+# SECTION C - SCAN #
+# #
+################################################################################
+#-------------------------------------------------------------------------------
+# Specify the type of SCAN configuration for the cluster
+# Allowed values : LOCAL_SCAN and SHARED_SCAN
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.scanType=LOCAL_SCAN
+
+#-------------------------------------------------------------------------------
+# Applicable only if SHARED_SCAN is being configured for cluster
+# Specify the path to the SCAN client data file
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.SCANClientDataFile=
+
+#-------------------------------------------------------------------------------
+# Specify a name for SCAN
+# Applicable if LOCAL_SCAN is being configured for the cluster
+# If you choose to configure the cluster with GNS with Auto assigned Node VIPs(DHCP), then the scanName should be specified in the format of 'SCAN name.Cluster name.GNS sub-domain'
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.scanName=
+
+#-------------------------------------------------------------------------------
+# Specify an unused port number for SCAN service
+#-------------------------------------------------------------------------------
+
+oracle.install.crs.config.gpnp.scanPort=
+
+
+################################################################################
+# #
+# SECTION D - CLUSTER & GNS #
+# #
+################################################################################
+#-------------------------------------------------------------------------------
+# Specify the required cluster configuration
+# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.ClusterConfiguration=
+
+#-------------------------------------------------------------------------------
+# Specify 'true' if you would like to configure the cluster as Extended, else
+# specify 'false'
+#
+# Applicable only for STANDALONE and DOMAIN cluster configuration
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.configureAsExtendedCluster=
+
+
+#-------------------------------------------------------------------------------
+# Specify the Member Cluster Manifest file
+#
+# Applicable only for MEMBERDB and MEMBERAPP cluster configuration
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.memberClusterManifestFile=
+
+#-------------------------------------------------------------------------------
+# Specify a name for the Cluster you are creating.
+#
+# The maximum length allowed for clustername is 15 characters. The name can be
+# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-)
+# and underscore(_).
+#
+# Applicable only for STANDALONE and DOMAIN cluster configuration
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.clusterName=
+
+#-------------------------------------------------------------------------------
+# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration.
+# Specify 'true' if you would like to configure Grid Naming Service(GNS), else
+# specify 'false'
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.configureGNS=false
+
+#-------------------------------------------------------------------------------
+# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS.
+# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP
+# , else specify 'false'
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.autoConfigureClusterNodeVIP=false
+
+#-------------------------------------------------------------------------------
+# Applicable only if you choose to configure GNS.
+# Specify the type of GNS configuration for cluster
+# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS
+# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration.
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
+
+#-------------------------------------------------------------------------------
+# Applicable only if SHARED_GNS is being configured for cluster
+# Specify the path to the GNS client data file
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.gnsClientDataFile=
+
+#-------------------------------------------------------------------------------
+# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to
+# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
+# Specify the GNS subdomain and an unused virtual hostname for GNS service
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.gnsSubDomain=
+oracle.install.crs.config.gpnp.gnsVIPAddress=
+
+#-------------------------------------------------------------------------------
+# Specify the list of sites - only if configuring an Extended Cluster
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.sites=
+
+#-------------------------------------------------------------------------------
+# Specify the list of nodes that have to be configured to be part of the cluster.
+#
+# The list should be a comma-separated list of tuples. Each tuple should be a
+# colon-separated string that contains
+# - 1 field if you have chosen CRS_SWONLY as installation option, or
+# - 1 field if configuring an Application Cluster, or
+# - 3 fields if configuring a Flex Cluster
+# - 3 fields if adding more nodes to the configured cluster, or
+# - 4 fields if configuring an Extended Cluster
+#
+# The fields should be ordered as follows:
+# 1. The first field should be the public node name.
+# 2. The second field should be the virtual host name
+# (Should be specified as AUTO if you have chosen 'auto configure for VIP'
+# i.e. autoConfigureClusterNodeVIP=true)
+# 3. The third field indicates the role of node (HUB,LEAF). This has to
+# be provided only if Flex Cluster is being configured.
+# For Extended Cluster only HUB should be specified for all nodes
+# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster.
+# The 2nd and 3rd fields are not applicable if you have chosen CRS_SWONLY as installation option
+# The 2nd and 3rd fields are not applicable if configuring an Application Cluster
+#
+# Examples
+# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2
+# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
+# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2
+# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
+# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2
+# You can specify a range of nodes in the tuple using colon separated fields of format
+# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node
+#
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.clusterNodes=###HOSTNAME###
+
+#-------------------------------------------------------------------------------
+# The value should be a comma-separated list of strings where each string is as shown below
+# InterfaceName:SubnetAddress:InterfaceType
+# where InterfaceType can be either "1", "2", "3", "4", or "5"
+# InterfaceType stands for the following values
+# - 1 : PUBLIC
+# - 2 : PRIVATE
+# - 3 : DO NOT USE
+# - 4 : ASM
+# - 5 : ASM & PRIVATE
+#
+# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3
+#
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.networkInterfaceList=
+
+#------------------------------------------------------------------------------
+# Create a separate ASM DiskGroup to store GIMR data.
+# Specify 'true' if you would like to separate GIMR data from clusterware data,
+# else specify 'false'
+# Value should be 'true' for DOMAIN cluster configurations
+# Value can be true/false for STANDALONE cluster configurations.
+#------------------------------------------------------------------------------ +oracle.install.asm.configureGIMRDataDG=false + +################################################################################ +# # +# SECTION E - STORAGE # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting +# Disks files +# - FLEX_ASM_STORAGE +# - CLIENT_ASM_STORAGE +# +# Applicable only for MEMBERDB cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.storageOption= +################################################################################ +# # +# SECTION F - IPMI # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure Intelligent Power Management interface +# (IPMI), else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.useIPMI=false + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure IPMI +# i.e. oracle.install.crs.config.useIPMI=true +# Specify the username and password for using IPMI service +#------------------------------------------------------------------------------- +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +################################################################################ +# # +# SECTION G - ASM # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# ASM Storage Type +# Allowed values are : ASM and ASM_ON_NAS +# ASM_ON_NAS applicable only if +# oracle.install.crs.config.ClusterConfiguration=STANDALONE +#------------------------------------------------------------------------------- +oracle.install.asm.storageOption=ASM + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing OCR/VDSK +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store OCR/VDSK files +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.ocrLocation= +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup on NAS to store GIMR data +# Specify 'true' if you would like to separate GIMR data with clusterware data, else +# specify 'false' +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------ +oracle.install.asmOnNAS.configureGIMRDataDG=false + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing GIMR data +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store the GIMR database +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +# and oracle.install.asmOnNAS.configureGIMRDataDG=true 
+#-------------------------------------------------------------------------------
+oracle.install.asmOnNAS.gimrLocation=
+
+#-------------------------------------------------------------------------------
+# Password for SYS user of Oracle ASM
+#-------------------------------------------------------------------------------
+oracle.install.asm.SYSASMPassword=
+
+#-------------------------------------------------------------------------------
+# The ASM DiskGroup
+#
+# Example: oracle.install.asm.diskGroup.name=data
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.name=
+
+#-------------------------------------------------------------------------------
+# Redundancy level to be used by ASM.
+# It can be one of the following
+# - NORMAL
+# - HIGH
+# - EXTERNAL
+# - FLEX
+# - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
+# Example: oracle.install.asm.diskGroup.redundancy=NORMAL
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.redundancy=
+
+#-------------------------------------------------------------------------------
+# Allocation unit size to be used by ASM.
+# It can be one of the following values
+# - 1
+# - 2
+# - 4
+# - 8
+# - 16
+# Example: oracle.install.asm.diskGroup.AUSize=4
+# size unit is MB
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.AUSize=1
+
+#-------------------------------------------------------------------------------
+# Failure Groups for the disk group
+# If configuring for Extended cluster specify as list of "failure group name:site"
+# tuples.
+# Else just specify as list of failure group names
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.FailureGroups=
+
+#-------------------------------------------------------------------------------
+# List of disks and their failure groups to create an ASM DiskGroup
+# (Use this if each of the disks has an associated failure group)
+# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL
+# Example:
+# For Unix based Operating System:
+# oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
+# For Windows based Operating System:
+# oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.disksWithFailureGroupNames=
+
+#-------------------------------------------------------------------------------
+# List of disks to create an ASM DiskGroup
+# (Use this variable only if failure groups configuration is not required)
+# Example:
+# For Unix based Operating System:
+# oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2
+# For Windows based Operating System:
+# oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.disks=
+
+#-------------------------------------------------------------------------------
+# List of failure groups to be marked as QUORUM.
+# Quorum failure groups contain only voting disk data, no user data is stored
+# Example:
+# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.quorumFailureGroupNames=
+#-------------------------------------------------------------------------------
+# The disk discovery string to be used to discover the disks used to create an ASM DiskGroup
+#
+# Example:
+# For Unix based Operating System:
+# oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/*
+# For Windows based Operating System:
+# oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK*
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.diskDiscoveryString=
+
+#-------------------------------------------------------------------------------
+# Password for ASMSNMP account
+# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances
+#-------------------------------------------------------------------------------
+oracle.install.asm.monitorPassword=
+
+#-------------------------------------------------------------------------------
+# GIMR Storage data ASM DiskGroup
+# Applicable only when
+# oracle.install.asm.configureGIMRDataDG=true
+# Example: oracle.install.asm.GIMRDG.name=MGMT
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.gimrDG.name=
+
+#-------------------------------------------------------------------------------
+# Redundancy level to be used by ASM.
+# It can be one of the following
+# - NORMAL
+# - HIGH
+# - EXTERNAL
+# - FLEX
+# - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
+# Example: oracle.install.asm.gimrDG.redundancy=NORMAL
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.gimrDG.redundancy=
+
+#-------------------------------------------------------------------------------
+# Allocation unit size to be used by ASM.
+# It can be one of the following values
+# - 1
+# - 2
+# - 4
+# - 8
+# - 16
+# Example: oracle.install.asm.gimrDG.AUSize=4
+# size unit is MB
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.gimrDG.AUSize=1
+
+#-------------------------------------------------------------------------------
+# Failure Groups for the GIMR storage data ASM disk group
+# If configuring for Extended cluster specify as list of "failure group name:site"
+# tuples.
+# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create GIMR data ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create GIMR data ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disks= + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# Configure AFD - ASM Filter Driver +# Applicable only for FLEX_ASM_STORAGE option +# Specify 'true' if you want to configure AFD, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.asm.configureAFD=false +#------------------------------------------------------------------------------- +# Configure RHPS - Rapid Home Provisioning Service +# Applicable only for DOMAIN cluster configuration +# Specify 'true' if you want to configure RHP service, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.configureRHPS=false + +################################################################################ +# # +# SECTION H - UPGRADE # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify whether to ignore down nodes during upgrade operation. 
+# Value should be 'true' to ignore down nodes otherwise specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.ignoreDownNodes=false +################################################################################ +# # +# MANAGEMENT OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing Oracle Grid Infrastructure +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +# 2. NONE -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +#------------------------------------------------------------------------------- +oracle.install.config.managementOption=NONE + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsPort=0 + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminPassword= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.executeRootScript=false + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. 
+# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.configMethod= +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoUserName= +#-------------------------------------------------------------------------------------- +# Specify the nodes batch map. +# +# This should be a comma separated list of node:batch pairs. +# During upgrade, you can sequence the automatic execution of root scripts +# by pooling the nodes into batches. +# A maximum of three batches can be specified. +# Installer will execute the root scripts on all the nodes in one batch before +# proceeding to next batch. +# Root script execution on the local node must be in Batch 1. +# Only one type of node role can be used for each batch. +# Root script execution should be done first in all HUB nodes and then, when +# existent, in all the LEAF nodes. +# +# Examples: +# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3 +# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2 +# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3 +# +# Applicable only for UPGRADE install option. +#-------------------------------------------------------------------------------------- +oracle.install.crs.config.batchinfo= +################################################################################ +# # +# APPLICATION CLUSTER OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the Virtual hostname to configure virtual access for your Application +# The value to be specified for Virtual hostname is optional. +#------------------------------------------------------------------------------- +oracle.install.crs.app.applicationAddress= +################################################################################# +# # +# DELETE NODE OPTIONS # +# # +################################################################################# + +#-------------------------------------------------------------------------------- +# Specify the node names to delete nodes from cluster. +# Delete node will be performed only for the remote nodes from the cluster. 
+#-------------------------------------------------------------------------------- +oracle.install.crs.deleteNode.nodes= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/gridsetup_19c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/gridsetup_19c.rsp new file mode 100644 index 0000000000..7f27a11e65 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/gridsetup_19c.rsp @@ -0,0 +1,63 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0 +INVENTORY_LOCATION=###INVENTORY### +oracle.install.option=CRS_CONFIG +ORACLE_BASE=###GRID_BASE### +oracle.install.asm.OSOPER= +oracle.install.asm.OSASM=asmadmin +oracle.install.crs.config.scanType=###SCAN_TYPE### +oracle.install.crs.config.SCANClientDataFile=###SHARED_SCAN_FILE### +oracle.install.crs.config.gpnp.scanName=###SCAN_NAME### +oracle.install.crs.config.gpnp.scanPort=###SCAN_PORT### +oracle.install.crs.config.ClusterConfiguration=###CLUSTER_TYPE### +oracle.install.crs.config.configureAsExtendedCluster=false +oracle.install.crs.config.memberClusterManifestFile=###MEMBERDB_FILE### +oracle.install.crs.config.clusterName=###CLUSTER_NAME### +oracle.install.crs.config.gpnp.configureGNS=###CONFIGURE_GNS### +oracle.install.crs.config.autoConfigureClusterNodeVIP=###DHCP_CONF### +oracle.install.crs.config.gpnp.gnsOption=###GNS_OPTIONS### +oracle.install.crs.config.gpnp.gnsClientDataFile= +oracle.install.crs.config.gpnp.gnsSubDomain=###GNS_SUBDOMAIN### +oracle.install.crs.config.gpnp.gnsVIPAddress=###GNSVIP_HOSTNAME### +oracle.install.crs.config.sites= +oracle.install.crs.config.clusterNodes=###CRS_CONFIG_NODES### +oracle.install.crs.config.networkInterfaceList=###NETWORK_STRING### +oracle.install.crs.configureGIMR=###GIMR_FLAG### +oracle.install.asm.configureGIMRDataDG=###GIMR_DG_FLAG### +oracle.install.crs.config.storageOption=###STORAGE_OPTIONS_FOR_MEMBERDB### +oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations= +oracle.install.crs.config.sharedFileSystemStorage.ocrLocations= +oracle.install.crs.config.useIPMI= +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +oracle.install.asm.SYSASMPassword=###PASSWORD### +oracle.install.asm.diskGroup.name=###DB_ASM_DISKGROUP### +oracle.install.asm.diskGroup.redundancy=###ASM_REDUNDANCY### +oracle.install.asm.diskGroup.AUSize=4 +oracle.install.asm.diskGroup.FailureGroups=###ASM_DG_FAILURE_GROUP### +oracle.install.asm.diskGroup.disksWithFailureGroupNames=###ASM_DISKGROUP_FG_DISKS### +oracle.install.asm.diskGroup.disks=###ASM_DISKGROUP_DISKS### +oracle.install.asm.diskGroup.quorumFailureGroupNames= +oracle.install.asm.diskGroup.diskDiscoveryString=###ASM_DISCOVERY_STRING### +oracle.install.asm.monitorPassword=###PASSWORD### +oracle.install.asm.gimrDG.name=###GIMR_DG_NAME### +oracle.install.asm.gimrDG.redundancy=###GIMR_DG_REDUNDANCY### +oracle.install.asm.gimrDG.AUSize=1 +oracle.install.asm.gimrDG.FailureGroups=###GIMR_DG_FAILURE_GROUP### +oracle.install.asm.gimrDG.disksWithFailureGroupNames=###GIMR_DISKGROUP_FG_DISKS### +oracle.install.asm.gimrDG.disks=###GIMR_DISKGROUP_DISKS### +oracle.install.asm.gimrDG.quorumFailureGroupNames= +oracle.install.asm.configureAFD=false +oracle.install.crs.configureRHPS=false +oracle.install.crs.config.ignoreDownNodes=false +oracle.install.config.managementOption=NONE +oracle.install.config.omsHost= +oracle.install.config.omsPort=0 +oracle.install.config.emAdminUser= 
+oracle.install.config.emAdminPassword= +oracle.install.crs.rootconfig.executeRootScript=false +oracle.install.crs.rootconfig.configMethod=ROOT +oracle.install.crs.rootconfig.sudoPath= +oracle.install.crs.rootconfig.sudoUserName= +oracle.install.crs.config.batchinfo= +oracle.install.crs.app.applicationAddress= +oracle.install.crs.deleteNode.nodes= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/gridsetup_19cv1.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/gridsetup_19cv1.rsp new file mode 100644 index 0000000000..ac0ff0ba02 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/gridsetup_19cv1.rsp @@ -0,0 +1,653 @@ +############################################################################### +## Copyright(c) Oracle Corporation 1998,2019. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +## ## +############################################################################### + +############################################################################### +## ## +## Instructions to fill this response file ## +## To register and configure 'Grid Infrastructure for Cluster' ## +## - Fill out sections A,B,C,D,E,F and G ## +## - Fill out section G if OCR and voting disk should be placed on ASM ## +## ## +## To register and configure 'Grid Infrastructure for Standalone server' ## +## - Fill out sections A,B and G ## +## ## +## To register software for 'Grid Infrastructure' ## +## - Fill out sections A,B and D ## +## - Provide the cluster nodes in section D when choosing CRS_SWONLY as ## +## installation option in section A ## +## ## +## To upgrade clusterware and/or Automatic storage management of earlier ## +## releases ## +## - Fill out sections A,B,C,D and H ## +## ## +## To add more nodes to the cluster ## +## - Fill out sections A and D ## +## - Provide the cluster nodes in section D when choosing CRS_ADDNODE as ## +## installation option in section A ## +## ## +############################################################################### + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0 + +############################################################################### +# # +# SECTION A - BASIC # +# # +############################################################################### + + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION= + +#------------------------------------------------------------------------------- +# Specify the installation option. 
+# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY +# - CRS_CONFIG : To register home and configure Grid Infrastructure for cluster +# - HA_CONFIG : To register home and configure Grid Infrastructure for stand alone server +# - UPGRADE : To register home and upgrade clusterware software of earlier release +# - CRS_SWONLY : To register Grid Infrastructure Software home (can be configured for cluster +# or stand alone server later) +# - HA_SWONLY : To register Grid Infrastructure Software home (can be configured for stand +# alone server later. This is only supported on Windows.) +# - CRS_ADDNODE : To add more nodes to the cluster +# - CRS_DELETE_NODE : To delete nodes to the cluster +#------------------------------------------------------------------------------- +oracle.install.option= + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE= + +################################################################################ +# # +# SECTION B - GROUPS # +# # +# The following three groups need to be assigned for all GI installations. # +# OSDBA and OSOPER can be the same or different. OSASM must be different # +# than the other two. # +# The value to be specified for OSDBA, OSOPER and OSASM group is only for # +# Unix based Operating System. # +# These groups are not required for upgrades, as they will be determined # +# from the Oracle home to upgrade. # +# # +################################################################################ +#------------------------------------------------------------------------------- +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.asm.OSDBA= + +#------------------------------------------------------------------------------- +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE. +#------------------------------------------------------------------------------- +oracle.install.asm.OSOPER= + +#------------------------------------------------------------------------------- +# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This +# must be different than the previous two. 
+#------------------------------------------------------------------------------- +oracle.install.asm.OSASM= + +################################################################################ +# # +# SECTION C - SCAN # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the type of SCAN configuration for the cluster +# Allowed values : LOCAL_SCAN and SHARED_SCAN +#------------------------------------------------------------------------------- +oracle.install.crs.config.scanType= + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_SCAN is being configured for cluster +# Specify the path to the SCAN client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.SCANClientDataFile= + +#------------------------------------------------------------------------------- +# Specify a name for SCAN +# Applicable if LOCAL_SCAN is being configured for the cluster +# If you choose to configure the cluster with GNS with Auto assigned Node VIPs(DHCP),then the scanName should be specified in the format of 'SCAN name.Cluster name.GNS sub-domain' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.scanName= + +#------------------------------------------------------------------------------- +# Specify a unused port number for SCAN service +#------------------------------------------------------------------------------- + +oracle.install.crs.config.gpnp.scanPort= + +################################################################################ +# # +# SECTION D - CLUSTER & GNS # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the required cluster configuration +# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP +#------------------------------------------------------------------------------- +oracle.install.crs.config.ClusterConfiguration= + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure the cluster as Extended, else +# specify 'false' +# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.configureAsExtendedCluster= + + +#------------------------------------------------------------------------------- +# Specify the Member Cluster Manifest file +# +# Applicable only for MEMBERDB and MEMBERAPP cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.memberClusterManifestFile= + +#------------------------------------------------------------------------------- +# Specify a name for the Cluster you are creating. +# +# The maximum length allowed for clustername is 63 characters. The name can be +# any combination of lower and uppercase alphabets (A - Z), (0 - 9) and hyphens (-). 
+# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterName= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration. +# Specify 'true' if you would like to configure Grid Naming Service(GNS), else +# specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.configureGNS= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS. +# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP +# , else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.autoConfigureClusterNodeVIP= + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure GNS. +# Specify the type of GNS configuration for cluster +# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS +# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration. +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsOption= + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_GNS is being configured for cluster +# Specify the path to the GNS client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsClientDataFile= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to +# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS +# Specify the GNS subdomain and an unused virtual hostname for GNS service +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= + +#------------------------------------------------------------------------------- +# Specify the list of sites - only if configuring an Extended Cluster +#------------------------------------------------------------------------------- +oracle.install.crs.config.sites= + +#------------------------------------------------------------------------------- +# Specify the list of nodes that have to be configured to be part of the cluster. +# +# The list should a comma-separated list of tuples. Each tuple should be a +# colon-separated string that contains +# - 1 field if you have chosen CRS_SWONLY as installation option, or +# - 1 field if configuring an Application Cluster, or +# - 3 fields if configuring a Flex Cluster +# - 3 fields if adding more nodes to the configured cluster, or +# - 4 fields if configuring an Extended Cluster +# +# The fields should be ordered as follows: +# 1. The first field should be the public node name. +# 2. The second field should be the virtual host name +# (Should be specified as AUTO if you have chosen 'auto configure for VIP' +# i.e. autoConfigureClusterNodeVIP=true) +# 3. The third field indicates the site designation for the node. To be specified only if configuring an Extended Cluster. 
+# Only the 1st field is applicable if you have chosen CRS_SWONLY as installation option +# Only the 1st field is applicable if configuring an Application Cluster +# +# Examples +# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2 +# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip,node2:node2-vip +# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2 +# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip,node2:node2-vip +# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:site1,node2:node2-vip:site2 +# You can specify a range of nodes in the tuple using colon separated fields of format +# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterNodes= + +#------------------------------------------------------------------------------- +# The value should be a comma separated strings where each string is as shown below +# InterfaceName:SubnetAddress:InterfaceType +# where InterfaceType can be either "1", "2", "3", "4", or "5" +# InterfaceType stand for the following values +# - 1 : PUBLIC +# - 2 : PRIVATE +# - 3 : DO NOT USE +# - 4 : ASM +# - 5 : ASM & PRIVATE +# +# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3 +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.networkInterfaceList= + +#------------------------------------------------------------------------------ +# Specify 'true' if you would like to configure Grid Infrastructure Management +# Repository (GIMR), else specify 'false'. +# This option is only applicable when CRS_CONFIG is chosen as install option, +# and STANDALONE is chosen as cluster configuration. +#------------------------------------------------------------------------------ +oracle.install.crs.configureGIMR= + +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup to store GIMR data. +# Specify 'true' if you would like to separate GIMR data with clusterware data, +# else specify 'false' +# Value should be 'true' for DOMAIN cluster configurations +# Value can be true/false for STANDALONE cluster configurations. +#------------------------------------------------------------------------------ +oracle.install.asm.configureGIMRDataDG= + +################################################################################ +# # +# SECTION E - STORAGE # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting +# Disks files. Only applicable for Standalone and MemberDB cluster. +# - FLEX_ASM_STORAGE +# - CLIENT_ASM_STORAGE +# - FILE_SYSTEM_STORAGE +# +# Option FILE_SYSTEM_STORAGE is only for STANDALONE cluster configuration. 
+#------------------------------------------------------------------------------- +oracle.install.crs.config.storageOption= + +#------------------------------------------------------------------------------- +# These properties are applicable only if FILE_SYSTEM_STORAGE is chosen for +# storing OCR and voting disk +# Specify the location(s) for OCR and voting disks +# Three(3) or one(1) location(s) should be specified for OCR and voting disk, +# separated by commas. +# Example: +# For Unix based Operating System: +# oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=/oradbocfs/storage/vdsk1,/oradbocfs/storage/vdsk2,/oradbocfs/storage/vdsk3 +# oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=/oradbocfs/storage/ocr1,/oradbocfs/storage/ocr2,/oradbocfs/storage/ocr3 +# For Windows based Operating System OCR/VDSK on shared storage is not supported. +#------------------------------------------------------------------------------- +oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations= +oracle.install.crs.config.sharedFileSystemStorage.ocrLocations= +################################################################################ +# # +# SECTION F - IPMI # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure Intelligent Power Management interface +# (IPMI), else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.useIPMI= + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure IPMI +# i.e. oracle.install.crs.config.useIPMI=true +# Specify the username and password for using IPMI service +#------------------------------------------------------------------------------- +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +################################################################################ +# # +# SECTION G - ASM # +# # +################################################################################ + + +#------------------------------------------------------------------------------- +# Password for SYS user of Oracle ASM +#------------------------------------------------------------------------------- +oracle.install.asm.SYSASMPassword= + +#------------------------------------------------------------------------------- +# The ASM DiskGroup +# +# Example: oracle.install.asm.diskGroup.name=data +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.name= + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.diskGroup.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.redundancy= + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. 
+# It can be one of the following values
+# - 1
+# - 2
+# - 4
+# - 8
+# - 16
+# Example: oracle.install.asm.diskGroup.AUSize=4
+# size unit is MB
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.AUSize=
+
+#-------------------------------------------------------------------------------
+# Failure Groups for the disk group
+# If configuring for Extended cluster specify as list of "failure group name:site"
+# tuples.
+# Else just specify as list of failure group names
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.FailureGroups=
+
+#-------------------------------------------------------------------------------
+# List of disks and their failure groups to create an ASM DiskGroup
+# (Use this if each of the disks has an associated failure group)
+# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL
+# Example:
+# For Unix based Operating System:
+# oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
+# For Windows based Operating System:
+# oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.disksWithFailureGroupNames=
+
+#-------------------------------------------------------------------------------
+# List of disks to create an ASM DiskGroup
+# (Use this variable only if failure groups configuration is not required)
+# Example:
+# For Unix based Operating System:
+# oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2
+# For Windows based Operating System:
+# oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.disks=
+
+#-------------------------------------------------------------------------------
+# List of failure groups to be marked as QUORUM.
+# Quorum failure groups contain only voting disk data, no user data is stored
+# Example:
+# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.quorumFailureGroupNames=
+#-------------------------------------------------------------------------------
+# The disk discovery string to be used to discover the disks used to create an ASM DiskGroup
+#
+# Example:
+# For Unix based Operating System:
+# oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/*
+# For Windows based Operating System:
+# oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK*
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.diskDiscoveryString=
+
+#-------------------------------------------------------------------------------
+# Password for ASMSNMP account
+# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances
+#-------------------------------------------------------------------------------
+oracle.install.asm.monitorPassword=
+
+#-------------------------------------------------------------------------------
+# GIMR Storage data ASM DiskGroup
+# Applicable only when
+# oracle.install.asm.configureGIMRDataDG=true
+# Example: oracle.install.asm.GIMRDG.name=MGMT
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.gimrDG.name=
+
+#-------------------------------------------------------------------------------
+# Redundancy level to be used by ASM.
+# It can be one of the following
+# - NORMAL
+# - HIGH
+# - EXTERNAL
+# - FLEX
+# - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
+# Example: oracle.install.asm.gimrDG.redundancy=NORMAL
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.gimrDG.redundancy=
+
+#-------------------------------------------------------------------------------
+# Allocation unit size to be used by ASM.
+# It can be one of the following values
+# - 1
+# - 2
+# - 4
+# - 8
+# - 16
+# Example: oracle.install.asm.gimrDG.AUSize=4
+# size unit is MB
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.gimrDG.AUSize=
+
+#-------------------------------------------------------------------------------
+# Failure Groups for the GIMR storage data ASM disk group
+# If configuring for Extended cluster specify as list of "failure group name:site"
+# tuples.
+# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create GIMR data ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create GIMR data ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disks= + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# Configure AFD - ASM Filter Driver +# Applicable only for FLEX_ASM_STORAGE option +# Specify 'true' if you want to configure AFD, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.asm.configureAFD= +#------------------------------------------------------------------------------- +# Configure RHPS - Rapid Home Provisioning Service +# Applicable only for DOMAIN cluster configuration +# Specify 'true' if you want to configure RHP service, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.configureRHPS= + +################################################################################ +# # +# SECTION H - UPGRADE # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify whether to ignore down nodes during upgrade operation. 
+# Value should be 'true' to ignore down nodes otherwise specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.ignoreDownNodes= +################################################################################ +# # +# MANAGEMENT OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing Oracle Grid Infrastructure +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +# 2. NONE -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +#------------------------------------------------------------------------------- +oracle.install.config.managementOption= + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsPort= + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminPassword= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.executeRootScript= + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. +# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.configMethod= +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. 
+# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# Applicable only when SUDO configuration method was chosen. +# Note:For Grid Infrastructure for Standalone server installations,the sudo user name must be the username of the user performing the installation. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoUserName= +#-------------------------------------------------------------------------------------- +# Specify the nodes batch map. +# +# This should be a comma separated list of node:batch pairs. +# During upgrade, you can sequence the automatic execution of root scripts +# by pooling the nodes into batches. +# A maximum of three batches can be specified. +# Installer will execute the root scripts on all the nodes in one batch before +# proceeding to next batch. +# Root script execution on the local node must be in Batch 1. +# +# Examples: +# 1. oracle.install.crs.config.batchinfo=Node1:1,Node2:2,Node3:2,Node4:3 +# 2. oracle.install.crs.config.batchinfo=Node1:1,Node2:2,Node3:2,Node4:2 +# 3. oracle.install.crs.config.batchinfo=Node1:1,Node2:1,Node3:2,Node4:3 +# +# Applicable only for UPGRADE install option. +#-------------------------------------------------------------------------------------- +oracle.install.crs.config.batchinfo= +################################################################################ +# # +# APPLICATION CLUSTER OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the Virtual hostname to configure virtual access for your Application +# The value to be specified for Virtual hostname is optional. +#------------------------------------------------------------------------------- +oracle.install.crs.app.applicationAddress= +################################################################################# +# # +# DELETE NODE OPTIONS # +# # +################################################################################# + +#-------------------------------------------------------------------------------- +# Specify the node names to delete nodes from cluster. +# Delete node will be performed only for the remote nodes from the cluster. 
+#--------------------------------------------------------------------------------
+oracle.install.crs.deleteNode.nodes=
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/initsh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/initsh
new file mode 100755
index 0000000000..27f753d46b
--- /dev/null
+++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/initsh
@@ -0,0 +1,10 @@
+#!/bin/bash
+# Copyright (c) 2022, Oracle and/or its affiliates
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
+
+echo "Creating env variables file /etc/rac_env_vars"
+/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /etc/rac_env_vars"
+/bin/bash -c "sed -i -e 's/^/export /' /etc/rac_env_vars"
+
+echo "Starting Systemd"
+exec /lib/systemd/systemd
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/installDBBinaries.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/installDBBinaries.sh
new file mode 100755
index 0000000000..cf92dd559c
--- /dev/null
+++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/installDBBinaries.sh
@@ -0,0 +1,65 @@
+#!/bin/bash
+# LICENSE UPL 1.0
+#
+# Copyright (c) 2018,2021 Oracle and/or its affiliates.
+#
+# Since: December, 2018
+# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com
+# Description: Installing Oracle DB software
+#
+# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
+#
+
+EDITION=$1
+
+# Check whether edition has been passed on
+if [ "$EDITION" == "" ]; then
+ echo "ERROR: No edition has been passed on!"
+ echo "Please specify the correct edition!"
+ exit 1;
+fi;
+
+# Check whether correct edition has been passed on
+# shellcheck disable=SC2166
+if [ "$EDITION" != "EE" -a "$EDITION" != "SE2" ]; then
+ echo "ERROR: Wrong edition has been passed on!"
+ echo "Edition $EDITION is not a valid edition!"
+ exit 1;
+fi;
+
+# Check whether DB_BASE is set
+if [ "$DB_BASE" == "" ]; then
+ echo "ERROR: DB_BASE has not been set!"
+ echo "You have to have the DB_BASE environment variable set to a valid value!"
+ exit 1;
+fi;
+
+# Check whether DB_HOME is set
+if [ "$DB_HOME" == "" ]; then
+ echo "ERROR: DB_HOME has not been set!"
+ echo "You have to have the DB_HOME environment variable set to a valid value!"
+ exit 1;
+fi;
+
+# Replace place holders
+# ---------------------
+sed -i -e "s|###ORACLE_EDITION###|$EDITION|g" "$INSTALL_SCRIPTS"/"$DB_INSTALL_RSP" && \
+sed -i -e "s|###DB_BASE###|$DB_BASE|g" "$INSTALL_SCRIPTS"/"$DB_INSTALL_RSP" && \
+sed -i -e "s|###DB_HOME###|$DB_HOME|g" "$INSTALL_SCRIPTS"/"$DB_INSTALL_RSP" && \
+sed -i -e "s|###INVENTORY###|$INVENTORY|g" "$INSTALL_SCRIPTS"/"$DB_INSTALL_RSP"
+
+export ORACLE_HOME=${DB_HOME}
+export PATH=${ORACLE_HOME}/bin:/bin:/sbin:/usr/bin
+export LD_LIBRARY_PATH=${ORACLE_HOME}/lib:/lib:/usr/lib
+
+# Create the SSH directory for the DB user
+if [ "${DB_USER}" != "${GRID_USER}" ]; then
+mkdir -p /home/"${DB_USER}"/.ssh && \
+chmod 700 /home/"${DB_USER}"/.ssh
+fi
+
+
+# Install Oracle binaries
+# shellcheck disable=SC2015
+unzip -q "$INSTALL_SCRIPTS"/"$INSTALL_FILE_2" -d "$DB_HOME" && \
+"$DB_HOME"/runInstaller -silent -force -waitforcompletion -responsefile "$INSTALL_SCRIPTS"/"$DB_INSTALL_RSP" -ignorePrereqFailure || true
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/installGridBinaries.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/installGridBinaries.sh
new file mode 100755
index 0000000000..15616d5f82
--- /dev/null
+++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/installGridBinaries.sh
@@ -0,0 +1,59 @@
+#!/bin/bash
+# LICENSE UPL 1.0
+#
+# Copyright (c) 2018,2021 Oracle and/or its affiliates.
+#
+# Since: December, 2018
+# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com
+# Description: Install grid software inside the container.
+#
+# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
+#
+
+EDITION=$1
+# shellcheck disable=SC2034
+PATCH_NUMBER=$2
+
+# Check whether edition has been passed on
+if [ "$EDITION" == "" ]; then
+ echo "ERROR: No edition has been passed on!"
+ echo "Please specify the correct edition!"
+ exit 1;
+fi;
+
+# Check whether correct edition has been passed on
+if [ "$EDITION" != "EE" ]; then
+ echo "ERROR: Wrong edition has been passed on!"
+ echo "Edition $EDITION is not a valid edition!"
+ exit 1;
+fi;
+
+# Check whether GRID_BASE is set
+if [ "$GRID_BASE" == "" ]; then
+ echo "ERROR: GRID_BASE has not been set!"
+ echo "You have to have the GRID_BASE environment variable set to a valid value!"
+ exit 1;
+fi;
+
+# Check whether GRID_HOME is set
+if [ "$GRID_HOME" == "" ]; then
+ echo "ERROR: GRID_HOME has not been set!"
+ echo "You have to have the GRID_HOME environment variable set to a valid value!"
+ exit 1; +fi; + + +temp_var1=`hostname` + +# Replace place holders +# --------------------- +sed -i -e "s|###HOSTNAME###|$temp_var1|g" "$INSTALL_SCRIPTS"/"$GRID_SW_INSTALL_RSP" && \ +sed -i -e "s|###INSTALL_TYPE###|CRS_SWONLY|g" "$INSTALL_SCRIPTS"/"$GRID_SW_INSTALL_RSP" && \ +sed -i -e "s|###GRID_BASE###|$GRID_BASE|g" "$INSTALL_SCRIPTS"/"$GRID_SW_INSTALL_RSP" && \ +sed -i -e "s|###INVENTORY###|$INVENTORY|g" "$INSTALL_SCRIPTS"/"$GRID_SW_INSTALL_RSP" + +# Install Oracle binaries +mkdir -p /home/grid/.ssh && \ +chmod 700 /home/grid/.ssh && \ +unzip -q "$INSTALL_SCRIPTS"/"$INSTALL_FILE_1" -d "$GRID_HOME" && \ +"$GRID_HOME"/gridSetup.sh -silent -responseFile "$INSTALL_SCRIPTS"/"$GRID_SW_INSTALL_RSP" -ignorePrereqFailure || true diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/runOracle.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/runOracle.sh new file mode 100755 index 0000000000..d371df39fb --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/runOracle.sh @@ -0,0 +1,40 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2022 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Runs the Oracle RAC Database inside the container +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +if [ -f /etc/rac_env_vars ]; then +source /etc/rac_env_vars +fi + +################################### +# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # +############# MAIN ################ +# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # +################################### + +if [ -z ${BASE_DIR} ]; then + BASE_DIR=/opt/scripts/startup/scripts +else + BASE_DIR=$SCRIPT_DIR/scripts +fi + +if [ -z ${MAIN_SCRIPT} ]; then + SCRIPT_NAME="main.py" +fi + +if [ -z ${EXECUTOR} ]; then + EXECUTOR="python3" +fi +# shellcheck disable=SC2164 +cd $BASE_DIR +$EXECUTOR $SCRIPT_NAME + +# Tail on alert log and wait (otherwise container will exit) diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/sample_19c.ccf b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/sample_19c.ccf new file mode 100644 index 0000000000..e8118c918f --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/sample_19c.ccf @@ -0,0 +1,53 @@ +# +# Cluster nodes configuration specification file +# +# Format: +# node [vip] [role-identifier] [site-name] +# +# node - Node's public host name +# vip - Node's virtual host name +# role-identifier - Node's role with "/" prefix - should be "/HUB" or "/LEAF" +# site-name - Node's assigned site +# +# Specify details of one node per line. +# Lines starting with '#' will be skipped. 
+# +# (1) vip and role are not required for Oracle Grid Infrastructure software only +# installs and Oracle Member cluster for Applications +# (2) vip should be specified as AUTO if Node Virtual host names are Dynamically +# assigned +# (3) role-identifier can be specified as "/LEAF" only for "Oracle Standalone Cluster" +# (4) site-name should be specified only when configuring Oracle Grid Infrastructure with "Extended Cluster" option +# +# Examples: +# -------- +# For installing GI software only on a cluster: +# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +# node1 +# node2 +# +# For Standalone Cluster: +# ^^^^^^^^^^^^^^^^^^^^^^ +# node1 node1-vip /HUB +# node2 node2-vip /LEAF +# +# For Standalone Extended Cluster: +# ^^^^^^^^^^^^^^^^^^^^^^ +# node1 node1-vip /HUB sitea +# node2 node2-vip /LEAF siteb +# +# For Domain Services Cluster: +# ^^^^^^^^^^^^^^^^^^^^^^^^^^^ +# node1 node1-vip /HUB +# node2 node2-vip /HUB +# +# For Member Cluster for Oracle Database: +# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +# node1 node1-vip /HUB +# node2 node2-vip /HUB +# +# For Member Cluster for Applications: +# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +# node1 +# node2 +# diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/setupDB.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/setupDB.sh new file mode 100755 index 0000000000..8673d5d113 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/setupDB.sh @@ -0,0 +1,42 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: paramdeep.saini@oracle.com +# Description: Sets up the unix environment for DB installation. +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +# Create Directories +if [ "${SLIMMING}x" != 'truex' ]; then + mkdir -p "$DB_BASE" + mkdir -p "$DB_HOME" +fi + +usermod -g oinstall -G oinstall,dba,oper,backupdba,dgdba,kmdba,asmdba,asmoper,racdba,asmadmin "${DB_USER}" + +chmod 775 "$INSTALL_SCRIPTS" + + +if [ "${SLIMMING}x" != 'truex' ]; then + chown -R "${DB_USER}":oinstall "$DB_BASE" + chown -R "${DB_USER}":oinstall "$DB_HOME" + chown -R "${DB_USER}":oinstall "$INSTALL_SCRIPTS" + echo "export PATH=$DB_PATH" >> /home/"${DB_USER}"/.bashrc + echo "export LD_LIBRARY_PATH=$DB_LD_LIBRARY_PATH" >> /home/"${DB_USER}"/.bashrc + echo "export SCRIPT_DIR=$SCRIPT_DIR" >> /home/"${DB_USER}"/.bashrc + echo "export GRID_HOME=$GRID_HOME" >> /home/"${DB_USER}"/.bashrc + echo "export DB_BASE=$DB_BASE" >> /home/"${DB_USER}"/.bashrc + echo "export DB_HOME=$DB_HOME" >> /home/"${DB_USER}"/.bashrc +fi + +if [ "${SLIMMING}x" != 'truex' ]; then + if [ "${DB_USER}" == "${GRID_USER}" ]; then + sed -i '/PATH=/d' /home/"${DB_USER}"/.bashrc + echo "export PATH=$GRID_HOME/bin:$DB_PATH" >> /home/"${DB_USER}"/.bashrc + echo "export LD_LIBRARY_PATH=$GRID_HOME/lib:$DB_LD_LIBRARY_PATH" >> /home/"${DB_USER}"/.bashrc + fi +fi diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/setupGrid.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/setupGrid.sh new file mode 100755 index 0000000000..20643bd7b9 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/setupGrid.sh @@ -0,0 +1,43 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: paramdeep.saini@oracle.com +# Description: Sets up the unix environment for Grid installation. 
+# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# +# shellcheck disable=SC2034 +EDITION=$1 + +# Create Directories +if [ "${SLIMMING}x" != 'truex' ] ; then + mkdir -p "$GRID_BASE" + mkdir -p "$GRID_HOME" +fi + +groupadd -g 54334 asmadmin +groupadd -g 54335 asmdba +groupadd -g 54336 asmoper +useradd -u 54332 -g oinstall -G oinstall,asmadmin,asmdba,asmoper,racdba,dba "${GRID_USER}" + +chmod 666 /etc/sudoers +echo "${DB_USER} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers +echo "${GRID_USER} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers +chmod 440 /etc/sudoers + +if [ "${SLIMMING}x" != 'truex' ] ; then + chown -R "${GRID_USER}":oinstall "$GRID_BASE" + chown -R "${GRID_USER}":oinstall "$GRID_HOME" + mkdir -p "$INVENTORY" + chown -R "${GRID_USER}":oinstall "$INVENTORY" + # shellcheck disable=SC2129 + echo "export PATH=$GRID_PATH" >> /home/"${GRID_USER}"/.bashrc + echo "export LD_LIBRARY_PATH=$GRID_LD_LIBRARY_PATH" >> /home/"${GRID_USER}"/.bashrc + echo "export SCRIPT_DIR=$SCRIPT_DIR" >> /home/"${GRID_USER}"/.bashrc + echo "export GRID_HOME=$GRID_HOME" >> /home/"${GRID_USER}"/.bashrc + echo "export GRID_BASE=$GRID_BASE" >> /home/"${GRID_USER}"/.bashrc + echo "export DB_HOME=$DB_HOME" >> /home/"${GRID_USER}"/.bashrc +fi diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/setupLinuxEnv.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/setupLinuxEnv.sh new file mode 100755 index 0000000000..c2a9729d29 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/setupLinuxEnv.sh @@ -0,0 +1,28 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: paramdeep.saini@oracle.com +# Description: Sets up the unix environment for DB installation. +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +# Setup filesystem and oracle user +# Adjust file permissions, go to /opt/oracle as user 'oracle' to proceed with Oracle installation +# ------------------------------------------------------------ +## Use OCI yum repos on OCI instead of public yum +region=$(curl --noproxy '*' -sfm 3 -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | sed -nE 's/^ *"regionIdentifier": "([^"]+)".*/\1/p') +if [ -n "$region" ]; then + echo "Detected OCI Region: $region" + for proxy in $(printenv | grep -i _proxy | cut -d= -f1); do unset $proxy; done + echo "-$region" > /etc/yum/vars/ociregion +fi + +mkdir /asmdisks && \ +mkdir /responsefiles && \ +chmod ug+x /opt/scripts/startup/*.sh && \ +yum -y install systemd oracle-database-preinstall-19c net-tools which zip unzip tar openssl expect e2fsprogs openssh-server vim-minimal passwd which sudo hostname policycoreutils-python-utils python3 lsof rsync && \ +yum clean all diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/setupSSH.expect b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/setupSSH.expect new file mode 100644 index 0000000000..2e0537b190 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/setupSSH.expect @@ -0,0 +1,45 @@ +#!/usr/bin/expect -f +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Setup SSH between nodes +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. 
+# + +set username [lindex $argv 0]; +set script_loc [lindex $argv 1]; +set cluster_nodes [lindex $argv 2]; +set ssh_pass [lindex $argv 3]; + +set timeout 120 + +# Procedure to setup ssh from server +proc sshproc { ssh_pass } { + expect { + # Send password at 'Password' prompt and tell expect to continue(i.e. exp_continue) + -re "\[P|p]assword:" { exp_send "$ssh_pass\r" + exp_continue } + # Tell expect stay in this 'expect' block and for each character that SCP prints while doing the copy + # reset the timeout counter back to 0. + -re . { exp_continue } + timeout { return 1 } + eof { return 0 } + } +} + +# Execute sshUserSetup.sh Script +set ssh_cmd "$script_loc/sshUserSetup.sh -user $username -hosts \"${cluster_nodes}\" -logfile /tmp/${username}_SetupSSH.log -advanced -exverify -noPromptPassphrase -confirm" + +eval spawn $ssh_cmd +set ssh_results [sshproc $ssh_pass] + +if { $ssh_results == 0 } { + exit 0 +} + +# Error attempting SSH, so exit with non-zero status +exit 1 diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/tempfile b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/tempfile new file mode 100644 index 0000000000..e69de29bb2 diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/Checksum b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/Checksum new file mode 100644 index 0000000000..4d576785e0 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/Checksum @@ -0,0 +1,2 @@ +8ac915a800800ddf16a382506d3953db LINUX.X64_213000_db_home.zip +b3fbdb7621ad82cbd4f40943effdd1be LINUX.X64_213000_grid_home.zip diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/Containerfile b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/Containerfile new file mode 100644 index 0000000000..852cbf7a81 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/Containerfile @@ -0,0 +1,267 @@ +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2022 Oracle and/or its affiliates. +# +# ORACLE DOCKERFILES PROJECT +# -------------------------- +# This is the Dockerfile for Oracle Database 21c Release 3 Real Application Clusters to build the container image +# +# REQUIRED FILES TO BUILD THIS IMAGE +# ---------------------------------- +# (1) LINUX.X64_213000_db_home.zip +# (2) LINUX.X64_213000_grid_home.zip +# Download Oracle Grid 21c Release 3 Enterprise Edition for Linux x64 +# Download Oracle Database 21c Release 3 Enterprise Edition for Linux x64 +# from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html +# +# HOW TO BUILD THIS IMAGE +# ----------------------- +# Run: +# $ docker build -t oracle/database:21.3.0-rac . 
+ + +ARG BASE_OL_IMAGE=oraclelinux:8 +ARG SLIMMING=false +# Pull base image +# --------------- +# hadolint ignore=DL3006,DL3025 +FROM $BASE_OL_IMAGE AS base +ARG SLIMMING=false +ARG VERSION +# Labels +# ------ +LABEL "provider"="Oracle" \ + "issues"="https://github.com/oracle/docker-images/issues" \ + "volume.setup.location1"="/opt/scripts" \ + "volume.startup.location1"="/opt/scripts/startup" \ + "port.listener"="1521" \ + "port.oemexpress"="5500" + +# Argument to control removal of components not needed after db software installation +ARG INSTALL_FILE_1="LINUX.X64_213000_grid_home.zip" +ARG INSTALL_FILE_2="LINUX.X64_213000_db_home.zip" +ARG DB_EDITION="EE" +ARG USER="root" +ARG WORKDIR="/rac-work-dir" +ARG IGNORE_PREREQ=false + +# Environment variables required for this build (do NOT change) +# ------------------------------------------------------------- +# Linux Env Variable +# hadolint ignore=DL3044 +ENV SETUP_LINUX_FILE="setupLinuxEnv.sh" \ + INSTALL_DIR=/opt/scripts \ +# Grid Env variables + GRID_INSTALL_RSP="gridsetup_21c.rsp" \ + GRID_SW_INSTALL_RSP="grid_sw_install_21c.rsp" \ + GRID_SETUP_FILE="setupGrid.sh" \ + INITSH="initsh" \ + WORKDIR=$WORKDIR \ + FIXUP_PREQ_FILE="fixupPreq.sh" \ + INSTALL_GRID_BINARIES_FILE="installGridBinaries.sh" \ + INSTALL_GRID_PATCH="applyGridPatch.sh" \ + INVENTORY=/u01/app/oraInventory \ + INSTALL_FILE_1=$INSTALL_FILE_1 \ + INSTALL_FILE_2=$INSTALL_FILE_2 \ + DB_EDITION=$DB_EDITION \ + CONFIGGRID="configGrid.sh" \ + ADDNODE="AddNode.sh" \ + DELNODE="DelNode.sh" \ + ADDNODE_RSP="grid_addnode_21c.rsp" \ + SETUPSSH="setupSSH.expect" \ + DOCKERORACLEINIT="dockeroracleinit" \ + GRID_USER_HOME="/home/grid" \ + SETUPGRIDENV="setupGridEnv.sh" \ + ASM_DISCOVERY_DIR="/dev" \ + RESET_OS_PASSWORD="resetOSPassword.sh" \ + MULTI_NODE_INSTALL="MultiNodeInstall.py" \ +# RAC DB Env Variables + DB_INSTALL_RSP="db_sw_install_21c.rsp" \ + DBCA_RSP="dbca_21c.rsp" \ + DB_SETUP_FILE="setupDB.sh" \ + PWD_FILE="setPassword.sh" \ + RUN_FILE="runOracle.sh" \ + STOP_FILE="stopOracle.sh" \ + ENABLE_RAC_FILE="enableRAC.sh" \ + CHECK_DB_FILE="checkDBStatus.sh" \ + USER_SCRIPTS_FILE="runUserScripts.sh" \ + REMOTE_LISTENER_FILE="remoteListener.sh" \ + INSTALL_DB_BINARIES_FILE="installDBBinaries.sh" \ + GRID_HOME_CLEANUP="GridHomeCleanup.sh" \ + ORACLE_HOME_CLEANUP="OracleHomeCleanup.sh" \ + DB_USER="oracle" \ + GRID_USER="grid" \ + SLIMMING=$SLIMMING \ + container="true" \ + FUNCTIONS="functions.sh" \ + COMMON_SCRIPTS="/common_scripts" \ + CHECK_SPACE_FILE="checkSpace.sh" \ + RESET_FAILED_UNITS="resetFailedUnits.sh" \ + SET_CRONTAB="setCrontab.sh" \ + CRONTAB_ENTRY="crontabEntry" \ + EXPECT="/usr/bin/expect" \ + BIN="/usr/sbin" \ + IGNORE_PREREQ=$IGNORE_PREREQ + +############################################# +# ------------------------------------------- +# Start new stage for Non-Slim Image +# ------------------------------------------- +############################################# + +FROM base AS rac-image-slim-false +ARG SLIMMING +ARG VERSION +# Environment variables required for this build (do NOT change) +# ------------------------------------------------------------- +# Linux Env Variable +ENV GRID_BASE=/u01/app/grid \ + GRID_HOME=/u01/app/21c/grid \ + DB_BASE=/u01/app/oracle \ + DB_HOME=/u01/app/oracle/product/21c/dbhome_1 +# Use second ENV so that variable get substituted +# hadolint ignore=DL3044 +ENV INSTALL_SCRIPTS=$INSTALL_DIR/install \ + PATH=/bin:/usr/bin:/sbin:/usr/sbin \ + SCRIPT_DIR=$INSTALL_DIR/startup \ + RAC_SCRIPTS_DIR="scripts" \ + 
GRID_PATH=$GRID_HOME/bin:$GRID_HOME/OPatch/:$GRID_HOME/perl/bin:/usr/sbin:/bin:/sbin \ + DB_PATH=$DB_HOME/bin:$DB_HOME/OPatch/:$DB_HOME/perl/bin:/usr/sbin:/bin:/sbin \ + GRID_LD_LIBRARY_PATH=$GRID_HOME/lib:/usr/lib:/lib \ + DB_LD_LIBRARY_PATH=$DB_HOME/lib:/usr/lib:/lib + +# Copy binaries +# ------------- +# COPY Binaries +COPY $VERSION/$SETUP_LINUX_FILE $VERSION/$GRID_SETUP_FILE $VERSION/$DB_SETUP_FILE $VERSION/$CHECK_SPACE_FILE $VERSION/$FIXUP_PREQ_FILE $INSTALL_SCRIPTS/ + +# Setup Scripts +COPY $VERSION/$RUN_FILE $VERSION/$ADDNODE $VERSION/$ADDNODE_RSP $VERSION/$SETUPSSH $VERSION/$FUNCTIONS $VERSION/$CONFIGGRID $VERSION/$GRID_INSTALL_RSP $VERSION/$DBCA_RSP $VERSION/$PWD_FILE $VERSION/$CHECK_DB_FILE $VERSION/$USER_SCRIPTS_FILE $VERSION/$STOP_FILE $VERSION/$CHECK_DB_FILE $VERSION/$REMOTE_LISTENER_FILE $VERSION/$SETUPGRIDENV $VERSION/$DELNODE $VERSION/$INITSH $VERSION/$RESET_OS_PASSWORD $VERSION/$MULTI_NODE_INSTALL $SCRIPT_DIR/ + +COPY $RAC_SCRIPTS_DIR $SCRIPT_DIR/scripts +# hadolint ignore=SC2086 +RUN chmod 755 $INSTALL_SCRIPTS/*.sh && \ + sync && \ + $INSTALL_DIR/install/$CHECK_SPACE_FILE && \ + $INSTALL_DIR/install/$SETUP_LINUX_FILE && \ + $INSTALL_DIR/install/$GRID_SETUP_FILE && \ + $INSTALL_DIR/install/$DB_SETUP_FILE && \ + sync + +############################################# +# ------------------------------------------- +# Start new stage for slim image +# ------------------------------------------- +############################################# +FROM base AS rac-image-slim-true +ARG SLIMMING +ARG VERSION +# Environment variables required for this build (do NOT change) +# ------------------------------------------------------------- +# Linux Env Variable +ENV INSTALL_SCRIPTS=$INSTALL_DIR/install \ + PATH=/bin:/usr/bin:/sbin:/usr/sbin \ + SCRIPT_DIR=$INSTALL_DIR/startup \ + RAC_SCRIPTS_DIR="scripts" + +# Copy binaries +# ------------- +# COPY Binaries +COPY $VERSION/$SETUP_LINUX_FILE $VERSION/$GRID_SETUP_FILE $VERSION/$DB_SETUP_FILE $VERSION/$CHECK_SPACE_FILE $VERSION/$FIXUP_PREQ_FILE $INSTALL_SCRIPTS/ + +# Setup Scripts +COPY $VERSION/$RUN_FILE $VERSION/$SETUPSSH $VERSION/$USER_SCRIPTS_FILE $VERSION/$STOP_FILE $VERSION/$CHECK_DB_FILE $VERSION/$REMOTE_LISTENER_FILE $VERSION/$INITSH $VERSION/$RESET_OS_PASSWORD $SCRIPT_DIR/ + +COPY $RAC_SCRIPTS_DIR $SCRIPT_DIR/scripts +# hadolint ignore=SC2086 +RUN chmod 755 $INSTALL_SCRIPTS/*.sh && \ + sync && \ + $INSTALL_DIR/install/$CHECK_SPACE_FILE && \ + $INSTALL_DIR/install/$SETUP_LINUX_FILE && \ + $INSTALL_DIR/install/$GRID_SETUP_FILE && \ + $INSTALL_DIR/install/$DB_SETUP_FILE && \ + sync + + +############################################# +# ------------------------------------------- +# Start new stage for installing the grid and DB +# ------------------------------------------- +############################################# +# hadolint ignore=DL3006 +FROM rac-image-slim-${SLIMMING} AS builder +ARG SLIMMING +# hadolint ignore=DL3006 +ARG VERSION +COPY $VERSION/$INSTALL_GRID_BINARIES_FILE $VERSION/$GRID_SW_INSTALL_RSP $VERSION/$DB_SETUP_FILE $VERSION/$DB_INSTALL_RSP $VERSION/$INSTALL_DB_BINARIES_FILE $VERSION/$ENABLE_RAC_FILE $VERSION/$GRID_HOME_CLEANUP $VERSION/$ORACLE_HOME_CLEANUP $VERSION/$INSTALL_FILE_1* $VERSION/$INSTALL_FILE_2* $INSTALL_SCRIPTS/ +# hadolint ignore=SC2086 +RUN chmod 755 $INSTALL_SCRIPTS/*.sh +## Install software if SLIMMING is false +# hadolint ignore=SC2086 +RUN if [ "${SLIMMING}x" != 'truex' ]; then \ + sed -e '/hard *memlock/s/^/#/g' -i /etc/security/limits.d/oracle-database-preinstall-21c.conf && \ + sed -e '/ *nofile 
/s/^/#/g' -i /etc/security/limits.d/oracle-database-preinstall-21c.conf && \ + su $GRID_USER -c "$INSTALL_DIR/install/$INSTALL_GRID_BINARIES_FILE EE $PATCH_NUMBER" && \ + $INVENTORY/orainstRoot.sh && \ + $GRID_HOME/root.sh && \ + su $DB_USER -c "$INSTALL_DIR/install/$INSTALL_DB_BINARIES_FILE EE" && \ + su $DB_USER -c "$INSTALL_DIR/install/$ENABLE_RAC_FILE" && \ + $INVENTORY/orainstRoot.sh && \ + $DB_HOME/root.sh && \ + su $GRID_USER -c "$INSTALL_SCRIPTS/$GRID_HOME_CLEANUP" && \ + su $DB_USER -c "$INSTALL_SCRIPTS/$ORACLE_HOME_CLEANUP" && \ + :; \ + fi +# hadolint ignore=SC3014 +RUN if [ "${SLIMMING}x" == 'truex' ]; then \ + mkdir /u01 && \ + :; \ + fi +# hadolint ignore=SC2086 +RUN rm -f $INSTALL_DIR/install/* && \ + sync + +############################################# +# ------------------------------------------- +# Start new layer for grid & database runtime +# ------------------------------------------- +############################################# +# hadolint ignore=DL3006 +FROM rac-image-slim-${SLIMMING} AS final +# hadolint ignore=DL3006 +COPY --from=builder /u01 /u01 +# hadolint ignore=SC2086 +RUN if [ "${SLIMMING}x" != 'truex' ]; then \ + $INVENTORY/orainstRoot.sh && \ + $GRID_HOME/root.sh && \ + $DB_HOME/root.sh && \ + chmod 666 $SCRIPT_DIR/*.rsp && \ + :; \ + fi && \ + $INSTALL_DIR/install/$FIXUP_PREQ_FILE && \ + sync && \ + chmod 755 $SCRIPT_DIR/*.sh && \ + chmod 755 $SCRIPT_DIR/scripts/*.py && \ + chmod 755 $SCRIPT_DIR/scripts/cmdExec && \ + chmod 755 $SCRIPT_DIR/scripts/*.expect && \ + echo "nohup $SCRIPT_DIR/runOracle.sh &" >> /etc/rc.local && \ + rm -f /etc/rc.d/init.d/oracle-database-preinstall-21c-firstboot && \ + chmod +x /etc/rc.d/rc.local && \ + cp $SCRIPT_DIR/$INITSH /usr/bin/$INITSH && \ + setcap 'cap_net_admin,cap_net_raw+ep' /usr/bin/ping && \ + chmod 755 /usr/bin/$INITSH && \ + rm -f /etc/sysctl.d/99-oracle-database-preinstall-21c-sysctl.conf && \ + rm -f /etc/sysctl.d/99-sysctl.conf && \ + rm -f $INSTALL_DIR/install/* && \ + sync + +USER ${USER} +VOLUME ["/common_scripts"] +WORKDIR $WORKDIR + +HEALTHCHECK --interval=2m --start-period=30m \ + CMD "$SCRIPT_DIR/scripts/main.py --checkracinst=true" >/dev/null || exit 1 + +# Define default command to start Oracle Grid and RAC Database setup. +# hadolint ignore=DL3025 +ENTRYPOINT /usr/bin/$INITSH diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/Dockerfile.ORG b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/Dockerfile.ORG new file mode 100644 index 0000000000..9baec43dc5 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/Dockerfile.ORG @@ -0,0 +1,146 @@ +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# ORACLE DOCKERFILES PROJECT +# -------------------------- +# This is the Dockerfile for Oracle Database 21c Release 3 Real Application Clusters to build the container image +# +# REQUIRED FILES TO BUILD THIS IMAGE +# ---------------------------------- +# (1) LINUX.X64_213000_db_home.zip +# (2) LINUX.X64_213000_grid_home.zip +# Download Oracle Grid 21c Release 3 Enterprise Edition for Linux x64 +# Download Oracle Database 21c Release 3 Enterprise Edition for Linux x64 +# from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html +# +# HOW TO BUILD THIS IMAGE +# ----------------------- +# Run: +# $ docker build -t oracle/database:21.3.0-rac . 
+# +# Pull base image +# --------------- +FROM oraclelinux:7-slim + +# Maintainer +# ---------- +MAINTAINER Paramdeep Saini + +# Environment variables required for this build (do NOT change) +# ------------------------------------------------------------- +# Linux Env Variable +ENV SETUP_LINUX_FILE="setupLinuxEnv.sh" \ + INSTALL_DIR=/opt/scripts \ +# Grid Env variables + GRID_BASE=/u01/app/grid \ + GRID_HOME=/u01/app/21.3.0/grid \ + INSTALL_FILE_1="LINUX.X64_213000_grid_home.zip" \ + GRID_INSTALL_RSP="gridsetup_21c.rsp" \ + GRID_SW_INSTALL_RSP="grid_sw_install_21c.rsp" \ + GRID_SETUP_FILE="setupGrid.sh" \ + FIXUP_PREQ_FILE="fixupPreq.sh" \ + INSTALL_GRID_BINARIES_FILE="installGridBinaries.sh" \ + INSTALL_GRID_PATCH="applyGridPatch.sh" \ + INVENTORY=/u01/app/oraInventory \ + CONFIGGRID="configGrid.sh" \ + ADDNODE="AddNode.sh" \ + DELNODE="DelNode.sh" \ + ADDNODE_RSP="grid_addnode_21c.rsp" \ + SETUPSSH="setupSSH.expect" \ + DOCKERORACLEINIT="dockeroracleinit" \ + GRID_USER_HOME="/home/grid" \ + SETUPGRIDENV="setupGridEnv.sh" \ + ASM_DISCOVERY_DIR="/dev" \ + RESET_OS_PASSWORD="resetOSPassword.sh" \ + MULTI_NODE_INSTALL="MultiNodeInstall.py" \ +# RAC DB Env Variables + DB_BASE=/u01/app/oracle \ + DB_HOME=/u01/app/oracle/product/21.3.0/dbhome_1 \ + INSTALL_FILE_2="LINUX.X64_213000_db_home.zip" \ + DB_INSTALL_RSP="db_sw_install_21c.rsp" \ + DBCA_RSP="dbca_21c.rsp" \ + DB_SETUP_FILE="setupDB.sh" \ + PWD_FILE="setPassword.sh" \ + RUN_FILE="runOracle.sh" \ + STOP_FILE="stopOracle.sh" \ + ENABLE_RAC_FILE="enableRAC.sh" \ + CHECK_DB_FILE="checkDBStatus.sh" \ + USER_SCRIPTS_FILE="runUserScripts.sh" \ + REMOTE_LISTENER_FILE="remoteListener.sh" \ + INSTALL_DB_BINARIES_FILE="installDBBinaries.sh" \ + GRID_HOME_CLEANUP="GridHomeCleanup.sh" \ + ORACLE_HOME_CLEANUP="OracleHomeCleanup.sh" \ + DB_USER="oracle" \ + GRID_USER="grid" \ +# COMMON ENV Variable + FUNCTIONS="functions.sh" \ + COMMON_SCRIPTS="/common_scripts" \ + CHECK_SPACE_FILE="checkSpace.sh" \ + RESET_FAILED_UNITS="resetFailedUnits.sh" \ + SET_CRONTAB="setCrontab.sh" \ + CRONTAB_ENTRY="crontabEntry" \ + EXPECT="/usr/bin/expect" \ + BIN="/usr/sbin" \ + container="true" +# Use second ENV so that variable get substituted +ENV INSTALL_SCRIPTS=$INSTALL_DIR/install \ + PATH=/bin:/usr/bin:/sbin:/usr/sbin \ + SCRIPT_DIR=$INSTALL_DIR/startup \ + GRID_PATH=$GRID_HOME/bin:$GRID_HOME/OPatch/:$GRID_HOME/perl/bin:/usr/sbin:/bin:/sbin \ + DB_PATH=$DB_HOME/bin:$DB_HOME/OPatch/:$DB_HOME/perl/bin:/usr/sbin:/bin:/sbin \ + GRID_LD_LIBRARY_PATH=$GRID_HOME/lib:/usr/lib:/lib \ + DB_LD_LIBRARY_PATH=$DB_HOME/lib:/usr/lib:/lib + +# Copy binaries +# ------------- +# COPY Binaries +COPY $GRID_SW_INSTALL_RSP $INSTALL_GRID_PATCH $SETUP_LINUX_FILE $GRID_SETUP_FILE $INSTALL_GRID_BINARIES_FILE $FIXUP_PREQ_FILE $DB_SETUP_FILE $CHECK_SPACE_FILE $DB_INSTALL_RSP $INSTALL_DB_BINARIES_FILE $ENABLE_RAC_FILE $GRID_HOME_CLEANUP $ORACLE_HOME_CLEANUP $INSTALL_FILE_1 $INSTALL_FILE_2 $INSTALL_SCRIPTS/ + +# Setup Scripts +COPY $RUN_FILE $ADDNODE $ADDNODE_RSP $SETUPSSH $FUNCTIONS $CONFIGGRID $GRID_INSTALL_RSP $DBCA_RSP $PWD_FILE $CHECK_DB_FILE $USER_SCRIPTS_FILE $STOP_FILE $CHECK_DB_FILE $REMOTE_LISTENER_FILE $SETUPGRIDENV $DELNODE $RESET_OS_PASSWORD $MULTI_NODE_INSTALL $SCRIPT_DIR/ + +RUN chmod 755 $INSTALL_SCRIPTS/*.sh && \ + sync && \ + $INSTALL_DIR/install/$CHECK_SPACE_FILE && \ + $INSTALL_DIR/install/$SETUP_LINUX_FILE && \ + $INSTALL_DIR/install/$GRID_SETUP_FILE && \ + $INSTALL_DIR/install/$DB_SETUP_FILE && \ + sed -e '/hard *memlock/s/^/#/g' -i 
/etc/security/limits.d/oracle-database-preinstall-21c.conf && \ + sed -e '/ *nofile /s/^/#/g' -i /etc/security/limits.d/oracle-database-preinstall-21c.conf && \ + su $GRID_USER -c "$INSTALL_DIR/install/$INSTALL_GRID_BINARIES_FILE EE $PATCH_NUMBER" && \ + $INVENTORY/orainstRoot.sh && \ + $GRID_HOME/root.sh && \ + su $DB_USER -c "$INSTALL_DIR/install/$INSTALL_DB_BINARIES_FILE EE" && \ + su $DB_USER -c "$INSTALL_DIR/install/$ENABLE_RAC_FILE" && \ + $INVENTORY/orainstRoot.sh && \ + $DB_HOME/root.sh && \ + su $GRID_USER -c "$INSTALL_SCRIPTS/$GRID_HOME_CLEANUP" && \ + su $DB_USER -c "$INSTALL_SCRIPTS/$ORACLE_HOME_CLEANUP" && \ + $INSTALL_DIR/install/$FIXUP_PREQ_FILE && \ + rm -rf $INSTALL_DIR/install && \ + rm -rf $INSTALL_DIR/install && \ + sync && \ + chmod 755 $SCRIPT_DIR/*.sh && \ + chmod 755 $SCRIPT_DIR/*.expect && \ + chmod 666 $SCRIPT_DIR/*.rsp && \ + echo "nohup $SCRIPT_DIR/runOracle.sh &" >> /etc/rc.local && \ + rm -f /etc/rc.d/init.d/oracle-database-preinstall-21c-firstboot && \ + mkdir -p $GRID_HOME/dockerinit && \ + cp $GRID_HOME/bin/$DOCKERORACLEINIT $GRID_HOME/dockerinit/ && \ + chown $GRID_USER:oinstall $GRID_HOME/dockerinit && \ + chown root:oinstall $GRID_HOME/dockerinit/$DOCKERORACLEINIT && \ + chmod 4755 $GRID_HOME/dockerinit/$DOCKERORACLEINIT && \ + ln -s $GRID_HOME/dockerinit/$DOCKERORACLEINIT /usr/sbin/oracleinit && \ + chmod +x /etc/rc.d/rc.local && \ + rm -f /etc/sysctl.d/99-oracle-database-preinstall-21c-sysctl.conf && \ + rm -f /etc/sysctl.d/99-sysctl.conf && \ + sync + +USER grid +WORKDIR /home/grid +VOLUME ["/common_scripts"] + +# Define default command to start Oracle Grid and RAC Database setup. + +CMD ["/usr/sbin/oracleinit"] diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/GridHomeCleanup.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/GridHomeCleanup.sh new file mode 100755 index 0000000000..434c40db42 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/GridHomeCleanup.sh @@ -0,0 +1,59 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2019,2021 Oracle and/or its affiliates. +# +# Since: January, 2019 +# Author: paramdeep.saini@oracle.com +# Description: Cleanup the $GRID_HOME and ORACLE_BASE after Grid confguration in the image +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. 
+# + +# Image Cleanup Script +# shellcheck disable=SC1090 +source /home/"${GRID_USER}"/.bashrc +# shellcheck disable=SC2034 +ORACLE_HOME=${GRID_HOME} + +rm -rf /u01/app/grid/* +rm -rf "$GRID_HOME"/log +rm -rf "$GRID_HOME"/logs +rm -rf "$GRID_HOME"/crs/init +rm -rf "$GRID_HOME"/crs/install/rhpdata +rm -rf "$GRID_HOME"/crs/log +rm -rf "$GRID_HOME"/racg/dump +rm -rf "$GRID_HOME"/srvm/log +rm -rf "$GRID_HOME"/cv/log +rm -rf "$GRID_HOME"/cdata +rm -rf "$GRID_HOME"/bin/core* +rm -rf "$GRID_HOME"/bin/diagsnap.pl +rm -rf "$GRID_HOME"/cfgtoollogs/* +rm -rf "$GRID_HOME"/network/admin/listener.ora +rm -rf "$GRID_HOME"/crf +rm -rf "$GRID_HOME"/ologgerd/init +rm -rf "$GRID_HOME"/osysmond/init +rm -rf "$GRID_HOME"/ohasd/init +rm -rf "$GRID_HOME"/ctss/init +rm -rf "$GRID_HOME"/dbs/.*.dat +rm -rf "$GRID_HOME"/oc4j/j2ee/home/log +rm -rf "$GRID_HOME"/inventory/Scripts/ext/bin/log +rm -rf "$GRID_HOME"/inventory/backup/* +rm -rf "$GRID_HOME"/mdns/init +rm -rf "$GRID_HOME"/gnsd/init +rm -rf "$GRID_HOME"/evm/init +rm -rf "$GRID_HOME"/gipc/init +rm -rf "$GRID_HOME"/gpnp/gpnp_bcp.* +rm -rf "$GRID_HOME"/gpnp/init +rm -rf "$GRID_HOME"/auth +rm -rf "$GRID_HOME"/tfa +rm -rf "$GRID_HOME"/suptools/tfa/release/diag +rm -rf "$GRID_HOME"/rdbms/audit/* +rm -rf "$GRID_HOME"/rdbms/log/* +rm -rf "$GRID_HOME"/network/log/* +rm -rf "$GRID_HOME"/inventory/Scripts/comps.xml.* +rm -rf "$GRID_HOME"/inventory/Scripts/oraclehomeproperties.xml.* +rm -rf "$GRID_HOME"/inventory/Scripts/oraInst.loc.* +rm -rf "$GRID_HOME"/inventory/Scripts/inventory.xml.* +rm -rf "$GRID_HOME"/log_file_client.log +rm -rf "$INVENTORY"/logs/* diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/MultiNodeInstall.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/MultiNodeInstall.py new file mode 100644 index 0000000000..45144061a4 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/MultiNodeInstall.py @@ -0,0 +1,324 @@ +#!/usr/bin/python +#!/usr/bin/env python + +########################################################################################################### + +# LICENSE UPL 1.0 +# Copyright (c) 2019,2021, Oracle and/or its affiliates. +# Since: January, 2019 +# NAME +# buildImage.py - +# +# DESCRIPTION +# +# +# NOTES + + +# Global Variables +Period = '.' 
+ + +# Import standard python libraries +import subprocess +import sys +import time +import datetime +import os +import commands +import getopt +import shlex +import json +import logging +import socket + + +etchostfile="/etc/hosts" +racenvfile="/etc/rac_env_vars" +domain="none" + +def Usage(): + pass + +def Update_Envfile(common_params): + global racenvfile + global domain + filedata1 = None + f1 = open(racenvfile, 'r') + filedata1 = f1.read() + f1.close + + for keys in common_params.keys(): + if keys == 'domain': + domain = common_params[keys] + + env_var_str = "export " + keys.upper() + "=" + common_params[keys] + Redirect_To_File("Env vars for RAC Env set to " + env_var_str, "INFO") + filedata1 = filedata1 + "\n" + env_var_str + + Write_To_File(filedata1,racenvfile) + return "Env file updated sucesfully" + + +def Update_Hostfile(node_list): + counter=0 + global etchostfile + global domain + filedata = None + filedata1 = None + f = open(etchostfile, 'r') + filedata = f.read() + f.close + + global racenvfile + filedata1 = None + f1 = open(racenvfile, 'r') + filedata1 = f1.read() + f1.close + host_name=socket.gethostname() + + if domain == 'none': + fqdn_hostname=socket.getfqdn() + domain=fqdn_hostname.split(".")[1] + if not host_name: + Redirect_To_File("Unable to get the container host name! Exiting..", "INFO") + else: + Redirect_To_File("Container Hostname and Domain name : " + host_name + " " + domain, "INFO") + +# Replace and add the target string + for dict_list in node_list: + print dict_list + if "public_hostname" in dict_list.keys(): + pubhost = dict_list['public_hostname'] + if host_name == pubhost: + Redirect_To_File("PUBLIC Hostname set to" + pubhost, "INFO") + PUBLIC_HOSTNAME=pubhost + if counter == 0: + CRS_NODES = pubhost + CRS_CONFIG_NODES = pubhost + counter = counter + 1 + else: + CRS_NODES = CRS_NODES + "," + pubhost + CRS_CONFIG_NODES = CRS_CONFIG_NODES + "," + pubhost + counter = counter + 1 + else: + return "Error: Did not find the key public_hostname" + if "public_ip" in dict_list.keys(): + pubip = dict_list['public_ip'] + if host_name == pubhost: + Redirect_To_File("PUBLIC IP set to" + pubip, "INFO") + PUBLIC_IP=pubip + else: + return "Error: Did not find the key public_ip" + if "private_ip" in dict_list.keys(): + privip = dict_list['private_ip'] + if host_name == pubhost: + Redirect_To_File("Private IP set to" + privip, "INFO") + PRIV_IP=privip + else: + return "Error: Did not find the key private_ip" + if "private_hostname" in dict_list.keys(): + privhost = dict_list['private_hostname'] + if host_name == pubhost: + Redirect_To_File("Private HOSTNAME set to" + privhost, "INFO") + PRIV_HOSTNAME=privhost + else: + return "Error: Did not find the key private_hostname" + if "vip_hostname" in dict_list.keys(): + viphost = dict_list['vip_hostname'] + CRS_CONFIG_NODES = CRS_CONFIG_NODES + ":" + viphost + ":" + "HUB" + if host_name == pubhost: + Redirect_To_File("VIP HOSTNAME set to" + viphost, "INFO") + VIP_HOSTNAME=viphost + else: + return "Error: Did not find the key vip_hostname" + if "vip_ip" in dict_list.keys(): + vipip = dict_list['vip_ip'] + if host_name == pubhost: + Redirect_To_File("NODE VIP set to" + vipip, "INFO") + NODE_VIP=vipip + else: + return "Error: Did not find the key vip_ip" + + delete_entry = [pubhost, privhost, viphost, pubip, privip, vipip] + for hostentry in delete_entry: + print "Processing " + hostentry + cmd=cmd= '""' + "sed " + "'" + "/" + hostentry + "/d" + "'" + " <<<" + '"' + filedata + '"' + '""' + 
output,retcode=Execute_Single_Command(cmd,'None','') + filedata=output + print "New Contents of Host file " + filedata + + # Removing Empty Lines + cmd=cmd= '""' + "sed " + "'" + "/^$/d" + "'" + " <<<" + '"' + filedata + '"' + '""' + output,retcode=Execute_Single_Command(cmd,'None','') + filedata=output + print "New Contents of Host file " + filedata + + delete_entry [:] + + if pubhost not in filedata: + if pubip not in filedata: + hoststring='%s %s %s' %(pubip, pubhost + "." + domain, pubhost) + Redirect_To_File(hoststring, "INFO") + filedata = filedata + '\n' + hoststring + + if privhost not in filedata: + if privip not in filedata: + hoststring='%s %s %s' %(privip, privhost + "." + domain, privhost) + Redirect_To_File(hoststring, "INFO") + filedata = filedata + '\n' + hoststring + + if viphost not in filedata: + if vipip not in filedata: + hoststring='%s %s %s' %(vipip, viphost + "." + domain, viphost) + Redirect_To_File(hoststring, "INFO") + filedata = filedata + '\n' + hoststring + print filedata + + Write_To_File(filedata,etchostfile) + if CRS_NODES: + Redirect_To_File("Cluster Nodes set to " + CRS_NODES, "INFO") + filedata1 = filedata1 + '\n' + 'export CRS_NODES=' + CRS_NODES + if CRS_CONFIG_NODES: + Redirect_To_File("CRS CONFIG Variable set to " + CRS_CONFIG_NODES, "INFO") + filedata1 = filedata1 + '\n' + 'export CRS_CONFIG_NODES=' + CRS_CONFIG_NODES + if NODE_VIP: + filedata1 = filedata1 + '\n' + 'export NODE_VIP=' + NODE_VIP + if PRIV_IP: + filedata1 = filedata1 + '\n' + 'export PRIV_IP=' + PRIV_IP + if PUBLIC_HOSTNAME: + filedata1 = filedata1 + '\n' + 'export PUBLIC_HOSTNAME=' + PUBLIC_HOSTNAME + if PUBLIC_IP: + filedata1 = filedata1 + '\n' + 'export PUBLIC_IP=' + PUBLIC_IP + if VIP_HOSTNAME: + filedata1 = filedata1 + '\n' + 'export VIP_HOSTNAME=' + VIP_HOSTNAME + if PRIV_HOSTNAME: + filedata1 = filedata1 + '\n' + 'export PRIV_HOSTNAME=' + PRIV_HOSTNAME + + Write_To_File(filedata1,racenvfile) + return "Host and Env file updated sucesfully" + + +def Write_To_File(text,filename): + f = open(filename,'w') + f.write(text) + f.close() + +def Setup_Operation(op_type): + if op_type == 'installrac': + cmd="sudo /opt/scripts/startup/runOracle.sh" + + if op_type == 'addnode': + cmd="sudo /opt/scripts/startup/runOracle.sh" + + if op_type == 'delnode': + cmd="sudo /opt/scripts/startup/DelNode.sh" + + output,retcode=Execute_Single_Command(cmd,'None','') + if retcode != 0: + return "Error occuurred in setting up env" + else: + return "setup operation completed sucessfully!" + + +def Execute_Single_Command(cmd,env,dir): + try: + if not dir: + dir=os.getcwd() + print shlex.split(cmd) + out = subprocess.Popen(cmd, shell=True, cwd=dir, stdout=subprocess.PIPE) + output, retcode = out.communicate()[0],out.returncode + return output,retcode + except: + Redirect_To_File("Error Occurred in Execute_Single_Command block! 
Please Check", "ERROR") + sys.exit(2) + +def Redirect_To_File(text,level): + original = sys.stdout + sys.stdout = open('/proc/1/fd/1', 'w') + root = logging.getLogger() + if not root.handlers: + root.setLevel(logging.INFO) + ch = logging.StreamHandler(sys.stdout) + ch.setLevel(logging.INFO) + formatter = logging.Formatter('%(asctime)s :%(message)s', "%Y-%m-%d %T %Z") + ch.setFormatter(formatter) + root.addHandler(ch) + message = os.path.basename(__file__) + " : " + text + root.info(' %s ' % message ) + sys.stdout = original + + +#BEGIN : TO check whether valid arguments are passed for the container ceation or not +def main(argv): + version= '' + type= '' + dir='' + script='' + Redirect_To_File("Passed Parameters " + str(sys.argv[1:]), "INFO") + try: + opts, args = getopt.getopt(sys.argv[1:], '', ['setuptype=','nodeparams=','comparams=','help']) + + except getopt.GetoptError: + Usage() + sys.exit(2) + #Redirect_To_File("Option Arguments are : " + opts , "INFO") + for opt, arg in opts: + if opt in ('--help'): + Usage() + sys.exit(2) + elif opt in ('--nodeparams'): + nodeparams = arg + elif opt in ('--comparams'): + comparams = arg + elif opt in ('--setuptype'): + setuptype = arg + else: + Usage() + sys.exit(2) + + if setuptype == 'installrac': + Redirect_To_File("setup type parameter is set to installrac", "INFO") + elif setuptype == 'addnode': + Redirect_To_File("setup type parameter is set to addnode", "INFO") + elif setuptype == 'delnode': + Redirect_To_File("setup type parameter is set to delnode", "INFO") + else: + setupUsage() + sys.exit(2) + if not nodeparams: + Redirect_To_File("Node Parameters for the Cluster not specified", "Error") + sys.exit(2) + if not comparams: + Redirect_To_File("Common Parameter for the Cluster not specified", "Error") + sys.exit(2) + + + Redirect_To_File("NodeParams set to" + nodeparams , "INFO" ) + Redirect_To_File("Comparams set to" + comparams , "INFO" ) + + + comparams = comparams.replace('\\"','"') + Redirect_To_File("Comparams set to" + comparams , "INFO" ) + envfile_status=Update_Envfile(json.loads(comparams)) + if 'Error' in envfile_status: + Redirect_To_File(envfile_status, "ERROR") + return sys.exit(2) + + nodeparams = nodeparams.replace('\\"','"') + Redirect_To_File("NodeParams set to" + nodeparams , "INFO" ) + hostfile_status=Update_Hostfile(json.loads(nodeparams)) + if 'Error' in hostfile_status: + Redirect_To_File(hostfile_status, "ERROR") + return sys.exit(2) + + Redirect_To_File("Executing operation" + setuptype, "INFO") + setup_op=Setup_Operation(setuptype) + if 'Error' in setup_op: + Redirect_To_File(setup_op, "ERROR") + return sys.exit(2) + + sys.exit(0) + +if __name__ == '__main__': + main(sys.argv) diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/OracleHomeCleanup.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/OracleHomeCleanup.sh new file mode 100755 index 0000000000..56127ff745 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/OracleHomeCleanup.sh @@ -0,0 +1,35 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2019,2021 Oracle and/or its affiliates. +# +# Since: January, 2019 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Cleanup the $ORACLE_HOME and ORACLE_BASE after Grid confguration in the image +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. 
+#
+
+# Image Cleanup Script
+# shellcheck disable=SC1090
+source /home/"${DB_USER}"/.bashrc
+ORACLE_HOME=${DB_HOME}
+
+rm -rf "$ORACLE_HOME"/bin/extjob
+rm -rf "$ORACLE_HOME"/PAF
+rm -rf "$ORACLE_HOME"/install/oratab
+rm -rf "$ORACLE_HOME"/install/make.log
+rm -rf "$ORACLE_HOME"/network/admin/listener.ora
+rm -rf "$ORACLE_HOME"/network/admin/tnsnames.ora
+rm -rf "$ORACLE_HOME"/bin/nmo
+rm -rf "$ORACLE_HOME"/bin/nmb
+rm -rf "$ORACLE_HOME"/bin/nmhs
+rm -rf "$ORACLE_HOME"/log/.*
+rm -rf "$ORACLE_HOME"/oc4j/j2ee/oc4j_applications/applications/em/em/images/chartCache/*
+rm -rf "$ORACLE_HOME"/rdbms/audit/*
+rm -rf "$ORACLE_HOME"/cfgtoollogs/*
+rm -rf "$ORACLE_HOME"/inventory/Scripts/comps.xml.*
+rm -rf "$ORACLE_HOME"/inventory/Scripts/oraclehomeproperties.xml.*
+rm -rf "$ORACLE_HOME"/inventory/Scripts/oraInst.loc.*
+rm -rf "$ORACLE_HOME"/inventory/Scripts/inventory.xml.*
+rm -rf "$INVENTORY"/logs/*
\ No newline at end of file
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/applyGridPatch.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/applyGridPatch.sh
new file mode 100755
index 0000000000..af451a6e68
--- /dev/null
+++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/applyGridPatch.sh
@@ -0,0 +1,43 @@
+#!/bin/bash
+# LICENSE UPL 1.0
+#
+# Copyright (c) 2018,2021 Oracle and/or its affiliates.
+#
+# Since: January, 2018
+# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com
+# Description: Apply Patch for Oracle Grid and Database.
+#
+# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
+#
+
+PATCH=$1
+
+# Check whether patch has been passed on
+if [ "$PATCH" == "" ]; then
+ echo "ERROR: No Patch has been passed on!"
+ echo "Please specify the correct PATCH!"
+ exit 1;
+fi;
+
+# Check whether GRID_BASE is set
+if [ "$GRID_BASE" == "" ]; then
+ echo "ERROR: GRID_BASE has not been set!"
+ echo "You have to have the GRID_BASE environment variable set to a valid value!"
+ exit 1;
+fi;
+
+# Check whether GRID_HOME is set
+if [ "$GRID_HOME" == "" ]; then
+ echo "ERROR: GRID_HOME has not been set!"
+ echo "You have to have the GRID_HOME environment variable set to a valid value!"
+ exit 1;
+fi;
+
+# Install Oracle binaries
+# shellcheck disable=SC2115
+unzip -q "$INSTALL_SCRIPTS"/"$PATCH" -d "$GRID_USER_HOME" && \
+rm -f "$INSTALL_SCRIPTS"/"$GRID_PATCH" && \
+cd "$GRID_USER_HOME"/"$PATCH_NUMBER"/"$PATCH_NUMBER" && \
+"$GRID_HOME"/OPatch/opatch napply -silent -local -oh "$GRID_HOME" -id "$PATCH_NUMBER" && \
+cd "$GRID_USER_HOME" && \
+rm -rf "$GRID_USER_HOME"/"$PATCH_NUMBER"
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/checkSpace.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/checkSpace.sh
new file mode 100755
index 0000000000..0480158b95
--- /dev/null
+++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/checkSpace.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+# LICENSE UPL 1.0
+#
+# Copyright (c) 2018,2021 Oracle and/or its affiliates.
+#
+# Since: January, 2018
+# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com
+# Description: Checks the available space of the system.
+#
+# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
+#
+
+REQUIRED_SPACE_GB=35
+AVAILABLE_SPACE_GB=`df -PB 1G / | tail -n 1 | awk '{print $4}'`
+
+if [ $AVAILABLE_SPACE_GB -lt $REQUIRED_SPACE_GB ]; then
+ script_name=`basename "$0"`
+ echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
+ echo "$script_name: ERROR - There is not enough space available in the docker container." + echo "$script_name: The container needs at least $REQUIRED_SPACE_GB GB , but only $AVAILABLE_SPACE_GB available." + echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" + exit 1; +fi; diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/db_inst.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/db_inst.rsp new file mode 100644 index 0000000000..90ff555e5d --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/db_inst.rsp @@ -0,0 +1,125 @@ +#################################################################### +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved.## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +## ## +#################################################################### + + +#------------------------------------------------------------------------------- +# Do not change the following system generated value. +#------------------------------------------------------------------------------- +oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v18.0.0 + +#------------------------------------------------------------------------------- +# Specify the installation option. +# It can be one of the following: +# - INSTALL_DB_SWONLY +# - INSTALL_DB_AND_CONFIG +#------------------------------------------------------------------------------- +oracle.install.option=INSTALL_DB_SWONLY + +#------------------------------------------------------------------------------- +# Specify the Unix group to be set for the inventory directory. +#------------------------------------------------------------------------------- +UNIX_GROUP_NAME=oinstall + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION=/u01/app/oraInventory +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Home. +#------------------------------------------------------------------------------- +ORACLE_HOME=/u01/app/oracle/product/18.3.0/dbhome_1 + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE=/u01/app/oracle + +#------------------------------------------------------------------------------- +# Specify the installation edition of the component. +# +# The value should contain only one of these choices. 
+# - EE : Enterprise Edition +# - SE2 : Standard Edition 2 + + +#------------------------------------------------------------------------------- + +oracle.install.db.InstallEdition=EE +############################################################################### +# # +# PRIVILEGED OPERATING SYSTEM GROUPS # +# ------------------------------------------ # +# Provide values for the OS groups to which SYSDBA and SYSOPER privileges # +# needs to be granted. If the install is being performed as a member of the # +# group "dba", then that will be used unless specified otherwise below. # +# # +# The value to be specified for OSDBA and OSOPER group is only for UNIX based # +# Operating System. # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.db.OSDBA_GROUP=dba + +#------------------------------------------------------------------------------ +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +#------------------------------------------------------------------------------ +oracle.install.db.OSOPER_GROUP=oper + +#------------------------------------------------------------------------------ +# The OSBACKUPDBA_GROUP is the OS group which is to be granted SYSBACKUP privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSBACKUPDBA_GROUP=backupdba + +#------------------------------------------------------------------------------ +# The OSDGDBA_GROUP is the OS group which is to be granted SYSDG privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSDGDBA_GROUP=dgdba + +#------------------------------------------------------------------------------ +# The OSKMDBA_GROUP is the OS group which is to be granted SYSKM privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSKMDBA_GROUP=kmdba + +#------------------------------------------------------------------------------ +# The OSRACDBA_GROUP is the OS group which is to be granted SYSRAC privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSRACDBA_GROUP=racdba +#------------------------------------------------------------------------------ +# Specify whether to enable the user to set the password for +# My Oracle Support credentials. The value can be either true or false. +# If left blank it will be assumed to be false. +# +# Example : SECURITY_UPDATES_VIA_MYORACLESUPPORT=true +#------------------------------------------------------------------------------ +SECURITY_UPDATES_VIA_MYORACLESUPPORT=false + +#------------------------------------------------------------------------------ +# Specify whether user doesn't want to configure Security Updates. +# The value for this variable should be true if you don't want to configure +# Security Updates, false otherwise. +# +# The value can be either true or false. If left blank it will be assumed +# to be true. 
+# +# Example : DECLINE_SECURITY_UPDATES=false +#------------------------------------------------------------------------------ +DECLINE_SECURITY_UPDATES=true diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/db_install_21cv1.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/db_install_21cv1.rsp new file mode 100644 index 0000000000..e67829179e --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/db_install_21cv1.rsp @@ -0,0 +1,356 @@ +#################################################################### +## Copyright(c) Oracle Corporation 1998,2019. All rights reserved.## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +## ## +#################################################################### + + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v21.0.0 + +#------------------------------------------------------------------------------- +# Specify the installation option. +# It can be one of the following: +# - INSTALL_DB_SWONLY +# - INSTALL_DB_AND_CONFIG +#------------------------------------------------------------------------------- +oracle.install.option= + +#------------------------------------------------------------------------------- +# Specify the Unix group to be set for the inventory directory. +#------------------------------------------------------------------------------- +UNIX_GROUP_NAME= + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION= +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Home. +#------------------------------------------------------------------------------- +ORACLE_HOME= + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE= + +#------------------------------------------------------------------------------- +# Specify the installation edition of the component. +# +# The value should contain only one of these choices. +# - EE : Enterprise Edition +# - SE2 : Standard Edition 2 + + +#------------------------------------------------------------------------------- + +oracle.install.db.InstallEdition= +############################################################################### +# # +# PRIVILEGED OPERATING SYSTEM GROUPS # +# ------------------------------------------ # +# Provide values for the OS groups to which SYSDBA and SYSOPER privileges # +# needs to be granted. 
If the install is being performed as a member of the # +# group "dba", then that will be used unless specified otherwise below. # +# # +# The value to be specified for OSDBA and OSOPER group is only for UNIX based # +# Operating System. # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.db.OSDBA_GROUP= + +#------------------------------------------------------------------------------ +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +#------------------------------------------------------------------------------ +oracle.install.db.OSOPER_GROUP= + +#------------------------------------------------------------------------------ +# The OSBACKUPDBA_GROUP is the OS group which is to be granted SYSBACKUP privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSBACKUPDBA_GROUP= + +#------------------------------------------------------------------------------ +# The OSDGDBA_GROUP is the OS group which is to be granted SYSDG privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSDGDBA_GROUP= + +#------------------------------------------------------------------------------ +# The OSKMDBA_GROUP is the OS group which is to be granted SYSKM privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSKMDBA_GROUP= + +#------------------------------------------------------------------------------ +# The OSRACDBA_GROUP is the OS group which is to be granted SYSRAC privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSRACDBA_GROUP= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.db.rootconfig.executeRootScript= + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. +# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.db.rootconfig.configMethod= +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. +# +# Applicable only when SUDO configuration method was chosen. 
+#-------------------------------------------------------------------------------------- +oracle.install.db.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# Applicable only when SUDO configuration method was chosen. +# Note:For Single Instance database installations,the sudo user name must be the username of the user installing the database. +#-------------------------------------------------------------------------------------- +oracle.install.db.rootconfig.sudoUserName= + +############################################################################### +# # +# Grid Options # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# Value is required only if the specified install option is INSTALL_DB_SWONLY +# +# Specify the cluster node names selected during the installation. +# +# Example : oracle.install.db.CLUSTER_NODES=node1,node2 +#------------------------------------------------------------------------------ +oracle.install.db.CLUSTER_NODES= + +############################################################################### +# # +# Database Configuration Options # +# # +############################################################################### + +#------------------------------------------------------------------------------- +# Specify the type of database to create. +# It can be one of the following: +# - GENERAL_PURPOSE +# - DATA_WAREHOUSE +# GENERAL_PURPOSE: A starter database designed for general purpose use or transaction-heavy applications. +# DATA_WAREHOUSE : A starter database optimized for data warehousing applications. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.type= + +#------------------------------------------------------------------------------- +# Specify the Starter Database Global Database Name. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.globalDBName= + +#------------------------------------------------------------------------------- +# Specify the Starter Database SID. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.SID= + +#------------------------------------------------------------------------------- +# Specify whether the database should be configured as a Container database. +# The value can be either "true" or "false". If left blank it will be assumed +# to be "false". +#------------------------------------------------------------------------------- +oracle.install.db.ConfigureAsContainerDB= + +#------------------------------------------------------------------------------- +# Specify the Pluggable Database name for the pluggable database in Container Database. +#------------------------------------------------------------------------------- +oracle.install.db.config.PDBName= + +#------------------------------------------------------------------------------- +# Specify the Starter Database character set. 
+# +# One of the following +# AL32UTF8, WE8ISO8859P15, WE8MSWIN1252, EE8ISO8859P2, +# EE8MSWIN1250, NE8ISO8859P10, NEE8ISO8859P4, BLT8MSWIN1257, +# BLT8ISO8859P13, CL8ISO8859P5, CL8MSWIN1251, AR8ISO8859P6, +# AR8MSWIN1256, EL8ISO8859P7, EL8MSWIN1253, IW8ISO8859P8, +# IW8MSWIN1255, JA16EUC, JA16EUCTILDE, JA16SJIS, JA16SJISTILDE, +# KO16MSWIN949, ZHS16GBK, TH8TISASCII, ZHT32EUC, ZHT16MSWIN950, +# ZHT16HKSCS, WE8ISO8859P9, TR8MSWIN1254, VN8MSWIN1258 +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.characterSet= + +#------------------------------------------------------------------------------ +# This variable should be set to true if Automatic Memory Management +# in Database is desired. +# If Automatic Memory Management is not desired, and memory allocation +# is to be done manually, then set it to false. +#------------------------------------------------------------------------------ +oracle.install.db.config.starterdb.memoryOption= + +#------------------------------------------------------------------------------- +# Specify the total memory allocation for the database. Value(in MB) should be +# at least 256 MB, and should not exceed the total physical memory available +# on the system. +# Example: oracle.install.db.config.starterdb.memoryLimit=512 +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.memoryLimit= + +#------------------------------------------------------------------------------- +# This variable controls whether to load Example Schemas onto +# the starter database or not. +# The value can be either "true" or "false". If left blank it will be assumed +# to be "false". +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.installExampleSchemas= + +############################################################################### +# # +# Passwords can be supplied for the following four schemas in the # +# starter database: # +# SYS # +# SYSTEM # +# DBSNMP (used by Enterprise Manager) # +# # +# Same password can be used for all accounts (not recommended) # +# or different passwords for each account can be provided (recommended) # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# This variable holds the password that is to be used for all schemas in the +# starter database. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.ALL= + +#------------------------------------------------------------------------------- +# Specify the SYS password for the starter database. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.SYS= + +#------------------------------------------------------------------------------- +# Specify the SYSTEM password for the starter database. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.SYSTEM= + +#------------------------------------------------------------------------------- +# Specify the DBSNMP password for the starter database. 
+# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.DBSNMP= + +#------------------------------------------------------------------------------- +# Specify the PDBADMIN password required for creation of Pluggable Database in the Container Database. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.PDBADMIN= + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing the database. +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your database with Enterprise Manager Cloud Control along with Database Express. +# 2. DEFAULT -If you want to manage your database using the default Database Express option. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.managementOption= + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.omsPort= + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.emAdminPassword= + +############################################################################### +# # +# SPECIFY RECOVERY OPTIONS # +# ------------------------------------ # +# Recovery options for the database can be mentioned using the entries below # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# This variable is to be set to false if database recovery is not required. Else +# this can be set to true. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.enableRecovery= + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for the database. 
+# It can be one of the following: +# - FILE_SYSTEM_STORAGE +# - ASM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.storageType= + +#------------------------------------------------------------------------------- +# Specify the database file location which is a directory for datafiles, control +# files, redo logs. +# +# Applicable only when oracle.install.db.config.starterdb.storage=FILE_SYSTEM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.fileSystemStorage.dataLocation= + +#------------------------------------------------------------------------------- +# Specify the recovery location. +# +# Applicable only when oracle.install.db.config.starterdb.storage=FILE_SYSTEM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation= + +#------------------------------------------------------------------------------- +# Specify the existing ASM disk groups to be used for storage. +# +# Applicable only when oracle.install.db.config.starterdb.storageType=ASM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.asm.diskGroup= + +#------------------------------------------------------------------------------- +# Specify the password for ASMSNMP user of the ASM instance. +# +# Applicable only when oracle.install.db.config.starterdb.storage=ASM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.asm.ASMSNMPPassword= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/db_sw_install_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/db_sw_install_21c.rsp new file mode 100644 index 0000000000..7d5123c853 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/db_sw_install_21c.rsp @@ -0,0 +1,41 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v21.0.0 +oracle.install.option=INSTALL_DB_SWONLY +UNIX_GROUP_NAME=oinstall +INVENTORY_LOCATION=/u01/app/oraInventory +ORACLE_HOME=/u01/app/oracle/product/21c/dbhome_1 +ORACLE_BASE=/u01/app/oracle +oracle.install.db.InstallEdition=EE +oracle.install.db.OSDBA_GROUP=dba +oracle.install.db.OSOPER_GROUP=oper +oracle.install.db.OSBACKUPDBA_GROUP=backupdba +oracle.install.db.OSDGDBA_GROUP=dgdba +oracle.install.db.OSKMDBA_GROUP=kmdba +oracle.install.db.OSRACDBA_GROUP=racdba +oracle.install.db.rootconfig.executeRootScript= +oracle.install.db.rootconfig.configMethod= +oracle.install.db.rootconfig.sudoPath= +oracle.install.db.rootconfig.sudoUserName= +oracle.install.db.CLUSTER_NODES= +oracle.install.db.config.starterdb.type= +oracle.install.db.config.starterdb.globalDBName= +oracle.install.db.config.starterdb.SID= +oracle.install.db.config.PDBName= +oracle.install.db.config.starterdb.characterSet= +oracle.install.db.config.starterdb.memoryOption= +oracle.install.db.config.starterdb.memoryLimit= +oracle.install.db.config.starterdb.password.ALL= +oracle.install.db.config.starterdb.password.SYS= +oracle.install.db.config.starterdb.password.SYSTEM= +oracle.install.db.config.starterdb.password.DBSNMP= +oracle.install.db.config.starterdb.password.PDBADMIN= +oracle.install.db.config.starterdb.managementOption= +oracle.install.db.config.starterdb.omsHost= 
+oracle.install.db.config.starterdb.omsPort= +oracle.install.db.config.starterdb.emAdminUser= +oracle.install.db.config.starterdb.emAdminPassword= +oracle.install.db.config.starterdb.enableRecovery= +oracle.install.db.config.starterdb.storageType= +oracle.install.db.config.starterdb.fileSystemStorage.dataLocation= +oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation= +oracle.install.db.config.asm.diskGroup= +oracle.install.db.config.asm.ASMSNMPPassword= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/db_sw_install_21cv1.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/db_sw_install_21cv1.rsp new file mode 100644 index 0000000000..a3be38d7fd --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/db_sw_install_21cv1.rsp @@ -0,0 +1,341 @@ +#################################################################### +## Copyright(c) Oracle Corporation 1998,2020. All rights reserved.## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +## ## +#################################################################### + + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v21.0.0 + +#------------------------------------------------------------------------------- +# Specify the installation option. +# It can be one of the following: +# - INSTALL_DB_SWONLY +# - INSTALL_DB_AND_CONFIG +#------------------------------------------------------------------------------- +oracle.install.option= + +#------------------------------------------------------------------------------- +# Specify the Unix group to be set for the inventory directory. +#------------------------------------------------------------------------------- +UNIX_GROUP_NAME= + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION= +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Home. +#------------------------------------------------------------------------------- +ORACLE_HOME= + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE= + +#------------------------------------------------------------------------------- +# Specify the installation edition of the component. +# +# The value should contain only one of these choices. 
+# - EE : Enterprise Edition +# - SE2 : Standard Edition 2 + + +#------------------------------------------------------------------------------- + +oracle.install.db.InstallEdition= +############################################################################### +# # +# PRIVILEGED OPERATING SYSTEM GROUPS # +# ------------------------------------------ # +# Provide values for the OS groups to which SYSDBA and SYSOPER privileges # +# needs to be granted. If the install is being performed as a member of the # +# group "dba", then that will be used unless specified otherwise below. # +# # +# The value to be specified for OSDBA and OSOPER group is only for UNIX based # +# Operating System. # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.db.OSDBA_GROUP= + +#------------------------------------------------------------------------------ +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +#------------------------------------------------------------------------------ +oracle.install.db.OSOPER_GROUP= + +#------------------------------------------------------------------------------ +# The OSBACKUPDBA_GROUP is the OS group which is to be granted SYSBACKUP privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSBACKUPDBA_GROUP= + +#------------------------------------------------------------------------------ +# The OSDGDBA_GROUP is the OS group which is to be granted SYSDG privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSDGDBA_GROUP= + +#------------------------------------------------------------------------------ +# The OSKMDBA_GROUP is the OS group which is to be granted SYSKM privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSKMDBA_GROUP= + +#------------------------------------------------------------------------------ +# The OSRACDBA_GROUP is the OS group which is to be granted SYSRAC privileges. +#------------------------------------------------------------------------------ +oracle.install.db.OSRACDBA_GROUP= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.db.rootconfig.executeRootScript= + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. 
+# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.db.rootconfig.configMethod= +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.db.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# Applicable only when SUDO configuration method was chosen. +# Note:For Single Instance database installations,the sudo user name must be the username of the user installing the database. +#-------------------------------------------------------------------------------------- +oracle.install.db.rootconfig.sudoUserName= + +############################################################################### +# # +# Grid Options # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# Value is required only if the specified install option is INSTALL_DB_SWONLY +# +# Specify the cluster node names selected during the installation. +# +# Example : oracle.install.db.CLUSTER_NODES=node1,node2 +#------------------------------------------------------------------------------ +oracle.install.db.CLUSTER_NODES= + +############################################################################### +# # +# Database Configuration Options # +# # +############################################################################### + +#------------------------------------------------------------------------------- +# Specify the type of database to create. +# It can be one of the following: +# - GENERAL_PURPOSE +# - DATA_WAREHOUSE +# GENERAL_PURPOSE: A starter database designed for general purpose use or transaction-heavy applications. +# DATA_WAREHOUSE : A starter database optimized for data warehousing applications. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.type= + +#------------------------------------------------------------------------------- +# Specify the Starter Database Global Database Name. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.globalDBName= + +#------------------------------------------------------------------------------- +# Specify the Starter Database SID. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.SID= + +#------------------------------------------------------------------------------- +# Specify the Pluggable Database name for the pluggable database in Container Database. +#------------------------------------------------------------------------------- +oracle.install.db.config.PDBName= + +#------------------------------------------------------------------------------- +# Specify the Starter Database character set. 
+# +# One of the following +# AL32UTF8, WE8ISO8859P15, WE8MSWIN1252, EE8ISO8859P2, +# EE8MSWIN1250, NE8ISO8859P10, NEE8ISO8859P4, BLT8MSWIN1257, +# BLT8ISO8859P13, CL8ISO8859P5, CL8MSWIN1251, AR8ISO8859P6, +# AR8MSWIN1256, EL8ISO8859P7, EL8MSWIN1253, IW8ISO8859P8, +# IW8MSWIN1255, JA16EUC, JA16EUCTILDE, JA16SJIS, JA16SJISTILDE, +# KO16MSWIN949, ZHS16GBK, TH8TISASCII, ZHT32EUC, ZHT16MSWIN950, +# ZHT16HKSCS, WE8ISO8859P9, TR8MSWIN1254, VN8MSWIN1258 +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.characterSet= + +#------------------------------------------------------------------------------ +# This variable should be set to true if Automatic Memory Management +# in Database is desired. +# If Automatic Memory Management is not desired, and memory allocation +# is to be done manually, then set it to false. +#------------------------------------------------------------------------------ +oracle.install.db.config.starterdb.memoryOption= + +#------------------------------------------------------------------------------- +# Specify the total memory allocation for the database. Value(in MB) should be +# at least 256 MB, and should not exceed the total physical memory available +# on the system. +# Example: oracle.install.db.config.starterdb.memoryLimit=512 +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.memoryLimit= + +############################################################################### +# # +# Passwords can be supplied for the following four schemas in the # +# starter database: # +# SYS # +# SYSTEM # +# DBSNMP (used by Enterprise Manager) # +# # +# Same password can be used for all accounts (not recommended) # +# or different passwords for each account can be provided (recommended) # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# This variable holds the password that is to be used for all schemas in the +# starter database. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.ALL= + +#------------------------------------------------------------------------------- +# Specify the SYS password for the starter database. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.SYS= + +#------------------------------------------------------------------------------- +# Specify the SYSTEM password for the starter database. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.SYSTEM= + +#------------------------------------------------------------------------------- +# Specify the DBSNMP password for the starter database. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.DBSNMP= + +#------------------------------------------------------------------------------- +# Specify the PDBADMIN password required for creation of Pluggable Database in the Container Database. 
+#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.password.PDBADMIN= + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing the database. +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your database with Enterprise Manager Cloud Control along with Database Express. +# 2. DEFAULT -If you want to manage your database using the default Database Express option. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.managementOption= + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.omsPort= + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.emAdminPassword= + +############################################################################### +# # +# SPECIFY RECOVERY OPTIONS # +# ------------------------------------ # +# Recovery options for the database can be mentioned using the entries below # +# # +############################################################################### + +#------------------------------------------------------------------------------ +# This variable is to be set to false if database recovery is not required. Else +# this can be set to true. +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.enableRecovery= + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for the database. +# It can be one of the following: +# - FILE_SYSTEM_STORAGE +# - ASM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.storageType= + +#------------------------------------------------------------------------------- +# Specify the database file location which is a directory for datafiles, control +# files, redo logs. 
+# +# Applicable only when oracle.install.db.config.starterdb.storage=FILE_SYSTEM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.fileSystemStorage.dataLocation= + +#------------------------------------------------------------------------------- +# Specify the recovery location. +# +# Applicable only when oracle.install.db.config.starterdb.storage=FILE_SYSTEM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation= + +#------------------------------------------------------------------------------- +# Specify the existing ASM disk groups to be used for storage. +# +# Applicable only when oracle.install.db.config.starterdb.storageType=ASM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.asm.diskGroup= + +#------------------------------------------------------------------------------- +# Specify the password for ASMSNMP user of the ASM instance. +# +# Applicable only when oracle.install.db.config.starterdb.storage=ASM_STORAGE +#------------------------------------------------------------------------------- +oracle.install.db.config.asm.ASMSNMPPassword= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/dbca.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/dbca.rsp new file mode 100644 index 0000000000..92c74e1eb4 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/dbca.rsp @@ -0,0 +1,605 @@ +############################################################################## +## ## +## DBCA response file ## +## ------------------ ## +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +############################################################################## +#------------------------------------------------------------------------------- +# Do not change the following system generated value. +#------------------------------------------------------------------------------- +responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v18.0.0 + +#----------------------------------------------------------------------------- +# Name : gdbName +# Datatype : String +# Description : Global database name of the database +# Valid values : . 
- when database domain isn't NULL +# - when database domain is NULL +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +gdbName=###ORACLE_SID### + +#----------------------------------------------------------------------------- +# Name : sid +# Datatype : String +# Description : System identifier (SID) of the database +# Valid values : Check Oracle12c Administrator's Guide +# Default value : specified in GDBNAME +# Mandatory : No +#----------------------------------------------------------------------------- +sid=###ORACLE_SID### + +#----------------------------------------------------------------------------- +# Name : databaseConfigType +# Datatype : String +# Description : database conf type as Single Instance, Real Application Cluster or Real Application Cluster One Nodes database +# Valid values : SI\RAC\RACONENODE +# Default value : SI +# Mandatory : No +#----------------------------------------------------------------------------- +databaseConfigType=RAC + +#----------------------------------------------------------------------------- +# Name : RACOneNodeServiceName +# Datatype : String +# Description : Service is required by application to connect to RAC One +# Node Database +# Valid values : Service Name +# Default value : None +# Mandatory : No [required in case DATABASECONFTYPE is set to RACONENODE ] +#----------------------------------------------------------------------------- +RACOneNodeServiceName= + +#----------------------------------------------------------------------------- +# Name : policyManaged +# Datatype : Boolean +# Description : Set to true if Database is policy managed and +# set to false if Database is admin managed +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +policyManaged=false + + +#----------------------------------------------------------------------------- +# Name : createServerPool +# Datatype : Boolean +# Description : Set to true if new server pool need to be created for database +# if this option is specified then the newly created database +# will use this newly created serverpool. +# Multiple serverpoolname can not be specified for database +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +createServerPool=false + +#----------------------------------------------------------------------------- +# Name : serverPoolName +# Datatype : String +# Description : Only one serverpool name need to be specified +# if Create Server Pool option is specified. 
+# Comma-separated list of Serverpool names if db need to use +# multiple Server pool +# Valid values : ServerPool name + +# Default value : None +# Mandatory : No [required in case of RAC service centric database] +#----------------------------------------------------------------------------- +serverPoolName= + +#----------------------------------------------------------------------------- +# Name : cardinality +# Datatype : Number +# Description : Specify Cardinality for create server pool operation + +# Valid values : any positive Integer value +# Default value : Number of qualified nodes on cluster +# Mandatory : No [Required when a new serverpool need to be created] +#----------------------------------------------------------------------------- +cardinality= + +#----------------------------------------------------------------------------- +# Name : force +# Datatype : Boolean +# Description : Set to true if new server pool need to be created by force +# if this option is specified then the newly created serverpool +# will be assigned server even if no free servers are available. +# This may affect already running database. +# This flag can be specified for Admin managed as well as policy managed db. +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +force=false + +#----------------------------------------------------------------------------- +# Name : pqPoolName +# Datatype : String +# Description : Only one serverpool name needs to be specified +# if create server pool option is specified. +# Comma-separated list of serverpool names if use +# server pool. This is required to +# create Parallel Query (PQ) database. Applicable to Big Cluster +# Valid values : Parallel Query (PQ) pool name +# Default value : None +# Mandatory : No [required in case of RAC service centric database] +#----------------------------------------------------------------------------- +pqPoolName= + +#----------------------------------------------------------------------------- +# Name : pqCardinality +# Datatype : Number +# Description : Specify Cardinality for create server pool operation. 
+# Applicable to Big Cluster +# Valid values : any positive Integer value +# Default value : Number of qualified nodes on cluster +# Mandatory : No [Required when a new serverpool need to be created] +#----------------------------------------------------------------------------- +pqCardinality= + +#----------------------------------------------------------------------------- +# Name : createAsContainerDatabase +# Datatype : boolean +# Description : flag to create database as container database +# Valid values : Check Oracle12c Administrator's Guide +# Default value : false +# Mandatory : No +#----------------------------------------------------------------------------- +createAsContainerDatabase=###CONTAINER_DB_FLAG### + +#----------------------------------------------------------------------------- +# Name : numberOfPDBs +# Datatype : Number +# Description : Specify the number of pdb to be created +# Valid values : 0 to 252 +# Default value : 0 +# Mandatory : No +#----------------------------------------------------------------------------- +numberOfPDBs=1 + +#----------------------------------------------------------------------------- +# Name : pdbName +# Datatype : String +# Description : Specify the pdbname/pdbanme prefix if one or more pdb need to be created +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +pdbName=###ORACLE_PDB### + +#----------------------------------------------------------------------------- +# Name : useLocalUndoForPDBs +# Datatype : boolean +# Description : Flag to create local undo tablespace for all PDB's. +# Valid values : TRUE\FALSE +# Default value : TRUE +# Mandatory : No +#----------------------------------------------------------------------------- +useLocalUndoForPDBs=true + +#----------------------------------------------------------------------------- +# Name : pdbAdminPassword +# Datatype : String +# Description : PDB Administrator user password +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- + +pdbAdminPassword=###ORACLE_PWD### + +#----------------------------------------------------------------------------- +# Name : nodelist +# Datatype : String +# Description : Comma-separated list of cluster nodes +# Valid values : Cluster node names +# Default value : None +# Mandatory : No (Yes for RAC database-centric database ) +#----------------------------------------------------------------------------- +nodelist=###PUBLIC_HOSTNAME### + +#----------------------------------------------------------------------------- +# Name : templateName +# Datatype : String +# Description : Name of the template +# Valid values : Template file name +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +templateName=/u01/app/oracle/product/18.3.0/dbhome_1/assistants/dbca/templates/General_Purpose.dbc + +#----------------------------------------------------------------------------- +# Name : sysPassword +# Datatype : String +# Description : Password for SYS user +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +sysPassword=###ORACLE_PWD### + +#----------------------------------------------------------------------------- +# 
Name : systemPassword +# Datatype : String +# Description : Password for SYSTEM user +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +systemPassword=###ORACLE_PWD### + +#----------------------------------------------------------------------------- +# Name : serviceUserPassword +# Datatype : String +# Description : Password for Windows Service user +# Default value : None +# Mandatory : If Oracle home is installed with windows service user +#----------------------------------------------------------------------------- +serviceUserPassword= + +#----------------------------------------------------------------------------- +# Name : emConfiguration +# Datatype : String +# Description : Enterprise Manager Configuration Type +# Valid values : CENTRAL|DBEXPRESS|BOTH|NONE +# Default value : NONE +# Mandatory : No +#----------------------------------------------------------------------------- +emConfiguration=DBEXPRESS + +#----------------------------------------------------------------------------- +# Name : emExpressPort +# Datatype : Number +# Description : Enterprise Manager Configuration Type +# Valid values : Check Oracle12c Administrator's Guide +# Default value : NONE +# Mandatory : No, will be picked up from DBEXPRESS_HTTPS_PORT env variable +# or auto generates a free port between 5500 and 5599 +#----------------------------------------------------------------------------- +emExpressPort=5500 + +#----------------------------------------------------------------------------- +# Name : runCVUChecks +# Datatype : Boolean +# Description : Specify whether to run Cluster Verification Utility checks +# periodically in Cluster environment +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +runCVUChecks=true + +#----------------------------------------------------------------------------- +# Name : dbsnmpPassword +# Datatype : String +# Description : Password for DBSNMP user +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes, if emConfiguration is specified or +# the value of runCVUChecks is TRUE +#----------------------------------------------------------------------------- +dbsnmpPassword=###ORACLE_PWD### + +#----------------------------------------------------------------------------- +# Name : omsHost +# Datatype : String +# Description : EM management server host name +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +omsHost= + +#----------------------------------------------------------------------------- +# Name : omsPort +# Datatype : Number +# Description : EM management server port number +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +omsPort=0 + +#----------------------------------------------------------------------------- +# Name : emUser +# Datatype : String +# Description : EM Admin username to add or modify targets +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +emUser= + +#----------------------------------------------------------------------------- +# 
Name : emPassword +# Datatype : String +# Description : EM Admin user password +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +emPassword= + +#----------------------------------------------------------------------------- +# Name : dvConfiguration +# Datatype : Boolean +# Description : Specify "True" to configure and enable Oracle Database vault +# Valid values : True/False +# Default value : False +# Mandatory : No +#----------------------------------------------------------------------------- +dvConfiguration=false + +#----------------------------------------------------------------------------- +# Name : dvUserName +# Datatype : String +# Description : DataVault Owner +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes, if DataVault option is chosen +#----------------------------------------------------------------------------- +dvUserName= + +#----------------------------------------------------------------------------- +# Name : dvUserPassword +# Datatype : String +# Description : Password for DataVault Owner +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes, if DataVault option is chosen +#----------------------------------------------------------------------------- +dvUserPassword= + +#----------------------------------------------------------------------------- +# Name : dvAccountManagerName +# Datatype : String +# Description : DataVault Account Manager +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +dvAccountManagerName= + +#----------------------------------------------------------------------------- +# Name : dvAccountManagerPassword +# Datatype : String +# Description : Password for DataVault Account Manager +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +dvAccountManagerPassword= + +#----------------------------------------------------------------------------- +# Name : olsConfiguration +# Datatype : Boolean +# Description : Specify "True" to configure and enable Oracle Label Security +# Valid values : True/False +# Default value : False +# Mandatory : No +#----------------------------------------------------------------------------- +olsConfiguration=false + +#----------------------------------------------------------------------------- +# Name : datafileJarLocation +# Datatype : String +# Description : Location of the data file jar +# Valid values : Directory containing compressed datafile jar +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +datafileJarLocation={ORACLE_HOME}/assistants/dbca/templates/ + +#----------------------------------------------------------------------------- +# Name : datafileDestination +# Datatype : String +# Description : Location of the data file's +# Valid values : Directory for all the database files +# Default value : $ORACLE_BASE/oradata +# Mandatory : No +#----------------------------------------------------------------------------- +datafileDestination=+DATA/{DB_UNIQUE_NAME}/ + +#----------------------------------------------------------------------------- +# Name : recoveryAreaDestination +# 
Datatype : String +# Description : Location of the data file's +# Valid values : Recovery Area location +# Default value : $ORACLE_BASE/flash_recovery_area +# Mandatory : No +#----------------------------------------------------------------------------- +recoveryAreaDestination= + +#----------------------------------------------------------------------------- +# Name : storageType +# Datatype : String +# Description : Specifies the storage on which the database is to be created +# Valid values : FS (CFS for RAC), ASM +# Default value : FS +# Mandatory : No +#----------------------------------------------------------------------------- +storageType=ASM + +#----------------------------------------------------------------------------- +# Name : diskGroupName +# Datatype : String +# Description : Specifies the disk group name for the storage +# Default value : DATA +# Mandatory : No +#----------------------------------------------------------------------------- +diskGroupName=+DATA/{DB_UNIQUE_NAME}/ + +#----------------------------------------------------------------------------- +# Name : asmsnmpPassword +# Datatype : String +# Description : Password for ASM Monitoring +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +asmsnmpPassword= + +#----------------------------------------------------------------------------- +# Name : recoveryGroupName +# Datatype : String +# Description : Specifies the disk group name for the recovery area +# Default value : RECOVERY +# Mandatory : No +#----------------------------------------------------------------------------- +recoveryGroupName= + +#----------------------------------------------------------------------------- +# Name : characterSet +# Datatype : String +# Description : Character set of the database +# Valid values : Check Oracle12c National Language Support Guide +# Default value : "US7ASCII" +# Mandatory : NO +#----------------------------------------------------------------------------- +characterSet=AL32UTF8 + +#----------------------------------------------------------------------------- +# Name : nationalCharacterSet +# Datatype : String +# Description : National Character set of the database +# Valid values : "UTF8" or "AL16UTF16". For details, check Oracle12c National Language Support Guide +# Default value : "AL16UTF16" +# Mandatory : No +#----------------------------------------------------------------------------- +nationalCharacterSet=AL16UTF16 + +#----------------------------------------------------------------------------- +# Name : registerWithDirService +# Datatype : Boolean +# Description : Specifies whether to register with Directory Service. +# Valid values : TRUE \ FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +registerWithDirService=false + + +#----------------------------------------------------------------------------- +# Name : dirServiceUserName +# Datatype : String +# Description : Specifies the name of the directory service user +# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +dirServiceUserName= + +#----------------------------------------------------------------------------- +# Name : dirServicePassword +# Datatype : String +# Description : The password of the directory service user. +# You can also specify the password at the command prompt instead of here. 
+# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +dirServicePassword= + +#----------------------------------------------------------------------------- +# Name : walletPassword +# Datatype : String +# Description : The password for wallet to created or modified. +# You can also specify the password at the command prompt instead of here. +# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +walletPassword= + +#----------------------------------------------------------------------------- +# Name : listeners +# Datatype : String +# Description : Specifies list of listeners to register the database with. +# By default the database is configured for all the listeners specified in the +# $ORACLE_HOME/network/admin/listener.ora +# Valid values : The list should be comma separated like "listener1,listener2". +# Mandatory : NO +#----------------------------------------------------------------------------- +listeners=LISTENER + +#----------------------------------------------------------------------------- +# Name : variablesFile +# Datatype : String +# Description : Location of the file containing variable value pair +# Valid values : A valid file-system file. The variable value pair format in this file +# is =. Each pair should be in a new line. +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +variablesFile= + +#----------------------------------------------------------------------------- +# Name : variables +# Datatype : String +# Description : comma separated list of name=value pairs. Overrides variables defined in variablefile and templates +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +variables=DB_UNIQUE_NAME=###ORACLE_SID###,ORACLE_BASE=###DB_BASE###,PDB_NAME=###ORACLE_PDB###,DB_NAME=###ORACLE_SID###,ORACLE_HOME=###DB_HOME###,SID=###ORACLE_SID### + +#----------------------------------------------------------------------------- +# Name : initParams +# Datatype : String +# Description : comma separated list of name=value pairs. 
Overrides initialization parameters defined in templates +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +#initParams=family:dw_helper.instance_mode=read-only,processes=640,nls_language=AMERICAN,pga_aggregate_target=2008MB,sga_target=6022MB,dispatchers=(PROTOCOL=TCP) (SERVICE=orclXDB),db_block_size=8192BYTES,orcl1.undo_tablespace=UNDOTBS1,diagnostic_dest={ORACLE_BASE},cluster_database=true,orcl1.thread=1,audit_file_dest={ORACLE_BASE}/admin/{DB_UNIQUE_NAME}/adump,db_create_file_dest=+DATA/{DB_UNIQUE_NAME}/,nls_territory=AMERICA,local_listener=-oraagent-dummy-,compatible=12.2.0,db_name=orcl,audit_trail=db,orcl1.instance_number=1,remote_login_passwordfile=exclusive,open_cursors=300 +initParams=audit_trail=none,audit_sys_operations=false,remote_login_passwordfile=exclusive + +#----------------------------------------------------------------------------- +# Name : sampleSchema +# Datatype : Boolean +# Description : Specifies whether or not to add the Sample Schemas to your database +# Valid values : TRUE \ FALSE +# Default value : FASLE +# Mandatory : No +#----------------------------------------------------------------------------- +sampleSchema=false + +#----------------------------------------------------------------------------- +# Name : memoryPercentage +# Datatype : String +# Description : percentage of physical memory for Oracle +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +memoryPercentage=40 + +#----------------------------------------------------------------------------- +# Name : databaseType +# Datatype : String +# Description : used for memory distribution when memoryPercentage specified +# Valid values : MULTIPURPOSE|DATA_WAREHOUSING|OLTP +# Default value : MULTIPURPOSE +# Mandatory : NO +#----------------------------------------------------------------------------- +databaseType=MULTIPURPOSE + +#----------------------------------------------------------------------------- +# Name : automaticMemoryManagement +# Datatype : Boolean +# Description : flag to indicate Automatic Memory Management is used +# Valid values : TRUE/FALSE +# Default value : TRUE +# Mandatory : NO +#----------------------------------------------------------------------------- +automaticMemoryManagement=false + +#----------------------------------------------------------------------------- +# Name : totalMemory +# Datatype : String +# Description : total memory in MB to allocate to Oracle +# Valid values : +# Default value : +# Mandatory : NO +#----------------------------------------------------------------------------- +totalMemory=5000 diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/dbca1.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/dbca1.rsp new file mode 100644 index 0000000000..c3d07dedf0 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/dbca1.rsp @@ -0,0 +1,605 @@ +############################################################################## +## ## +## DBCA response file ## +## ------------------ ## +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. 
## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +############################################################################## +#------------------------------------------------------------------------------- +# Do not change the following system generated value. +#------------------------------------------------------------------------------- +responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v18.0.0 + +#----------------------------------------------------------------------------- +# Name : gdbName +# Datatype : String +# Description : Global database name of the database +# Valid values : . - when database domain isn't NULL +# - when database domain is NULL +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +gdbName=ORCLCDB + +#----------------------------------------------------------------------------- +# Name : sid +# Datatype : String +# Description : System identifier (SID) of the database +# Valid values : Check Oracle12c Administrator's Guide +# Default value : specified in GDBNAME +# Mandatory : No +#----------------------------------------------------------------------------- +sid=ORCLCDB + +#----------------------------------------------------------------------------- +# Name : databaseConfigType +# Datatype : String +# Description : database conf type as Single Instance, Real Application Cluster or Real Application Cluster One Nodes database +# Valid values : SI\RAC\RACONENODE +# Default value : SI +# Mandatory : No +#----------------------------------------------------------------------------- +databaseConfigType=RAC + +#----------------------------------------------------------------------------- +# Name : RACOneNodeServiceName +# Datatype : String +# Description : Service is required by application to connect to RAC One +# Node Database +# Valid values : Service Name +# Default value : None +# Mandatory : No [required in case DATABASECONFTYPE is set to RACONENODE ] +#----------------------------------------------------------------------------- +RACOneNodeServiceName= + +#----------------------------------------------------------------------------- +# Name : policyManaged +# Datatype : Boolean +# Description : Set to true if Database is policy managed and +# set to false if Database is admin managed +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +policyManaged=false + + +#----------------------------------------------------------------------------- +# Name : createServerPool +# Datatype : Boolean +# Description : Set to true if new server pool need to be created for database +# if this option is specified then the newly created database +# will use this newly created serverpool. +# Multiple serverpoolname can not be specified for database +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +createServerPool=false + +#----------------------------------------------------------------------------- +# Name : serverPoolName +# Datatype : String +# Description : Only one serverpool name need to be specified +# if Create Server Pool option is specified. 
+# Comma-separated list of Serverpool names if db need to use +# multiple Server pool +# Valid values : ServerPool name + +# Default value : None +# Mandatory : No [required in case of RAC service centric database] +#----------------------------------------------------------------------------- +serverPoolName= + +#----------------------------------------------------------------------------- +# Name : cardinality +# Datatype : Number +# Description : Specify Cardinality for create server pool operation + +# Valid values : any positive Integer value +# Default value : Number of qualified nodes on cluster +# Mandatory : No [Required when a new serverpool need to be created] +#----------------------------------------------------------------------------- +cardinality= + +#----------------------------------------------------------------------------- +# Name : force +# Datatype : Boolean +# Description : Set to true if new server pool need to be created by force +# if this option is specified then the newly created serverpool +# will be assigned server even if no free servers are available. +# This may affect already running database. +# This flag can be specified for Admin managed as well as policy managed db. +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +force=false + +#----------------------------------------------------------------------------- +# Name : pqPoolName +# Datatype : String +# Description : Only one serverpool name needs to be specified +# if create server pool option is specified. +# Comma-separated list of serverpool names if use +# server pool. This is required to +# create Parallel Query (PQ) database. Applicable to Big Cluster +# Valid values : Parallel Query (PQ) pool name +# Default value : None +# Mandatory : No [required in case of RAC service centric database] +#----------------------------------------------------------------------------- +pqPoolName= + +#----------------------------------------------------------------------------- +# Name : pqCardinality +# Datatype : Number +# Description : Specify Cardinality for create server pool operation. 
+# Applicable to Big Cluster +# Valid values : any positive Integer value +# Default value : Number of qualified nodes on cluster +# Mandatory : No [Required when a new serverpool need to be created] +#----------------------------------------------------------------------------- +pqCardinality= + +#----------------------------------------------------------------------------- +# Name : createAsContainerDatabase +# Datatype : boolean +# Description : flag to create database as container database +# Valid values : Check Oracle12c Administrator's Guide +# Default value : false +# Mandatory : No +#----------------------------------------------------------------------------- +createAsContainerDatabase=true + +#----------------------------------------------------------------------------- +# Name : numberOfPDBs +# Datatype : Number +# Description : Specify the number of pdb to be created +# Valid values : 0 to 252 +# Default value : 0 +# Mandatory : No +#----------------------------------------------------------------------------- +numberOfPDBs=1 + +#----------------------------------------------------------------------------- +# Name : pdbName +# Datatype : String +# Description : Specify the pdbname/pdbanme prefix if one or more pdb need to be created +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +pdbName=ORCLPDB + +#----------------------------------------------------------------------------- +# Name : useLocalUndoForPDBs +# Datatype : boolean +# Description : Flag to create local undo tablespace for all PDB's. +# Valid values : TRUE\FALSE +# Default value : TRUE +# Mandatory : No +#----------------------------------------------------------------------------- +useLocalUndoForPDBs=true + +#----------------------------------------------------------------------------- +# Name : pdbAdminPassword +# Datatype : String +# Description : PDB Administrator user password +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- + +pdbAdminPassword=Oracle_12c + +#----------------------------------------------------------------------------- +# Name : nodelist +# Datatype : String +# Description : Comma-separated list of cluster nodes +# Valid values : Cluster node names +# Default value : None +# Mandatory : No (Yes for RAC database-centric database ) +#----------------------------------------------------------------------------- +nodelist=racnode1 + +#----------------------------------------------------------------------------- +# Name : templateName +# Datatype : String +# Description : Name of the template +# Valid values : Template file name +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +templateName=/u01/app/oracle/product/18.3.0/dbhome_1/assistants/dbca/templates/General_Purpose.dbc + +#----------------------------------------------------------------------------- +# Name : sysPassword +# Datatype : String +# Description : Password for SYS user +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +sysPassword=Oracle_12c + +#----------------------------------------------------------------------------- +# Name : systemPassword +# Datatype : String +# 
Description : Password for SYSTEM user +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +systemPassword=Oracle_12c + +#----------------------------------------------------------------------------- +# Name : serviceUserPassword +# Datatype : String +# Description : Password for Windows Service user +# Default value : None +# Mandatory : If Oracle home is installed with windows service user +#----------------------------------------------------------------------------- +serviceUserPassword= + +#----------------------------------------------------------------------------- +# Name : emConfiguration +# Datatype : String +# Description : Enterprise Manager Configuration Type +# Valid values : CENTRAL|DBEXPRESS|BOTH|NONE +# Default value : NONE +# Mandatory : No +#----------------------------------------------------------------------------- +emConfiguration=DBEXPRESS + +#----------------------------------------------------------------------------- +# Name : emExpressPort +# Datatype : Number +# Description : Enterprise Manager Configuration Type +# Valid values : Check Oracle12c Administrator's Guide +# Default value : NONE +# Mandatory : No, will be picked up from DBEXPRESS_HTTPS_PORT env variable +# or auto generates a free port between 5500 and 5599 +#----------------------------------------------------------------------------- +emExpressPort=5500 + +#----------------------------------------------------------------------------- +# Name : runCVUChecks +# Datatype : Boolean +# Description : Specify whether to run Cluster Verification Utility checks +# periodically in Cluster environment +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +runCVUChecks=true + +#----------------------------------------------------------------------------- +# Name : dbsnmpPassword +# Datatype : String +# Description : Password for DBSNMP user +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes, if emConfiguration is specified or +# the value of runCVUChecks is TRUE +#----------------------------------------------------------------------------- +dbsnmpPassword=Oracle_12c + +#----------------------------------------------------------------------------- +# Name : omsHost +# Datatype : String +# Description : EM management server host name +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +omsHost= + +#----------------------------------------------------------------------------- +# Name : omsPort +# Datatype : Number +# Description : EM management server port number +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +omsPort=0 + +#----------------------------------------------------------------------------- +# Name : emUser +# Datatype : String +# Description : EM Admin username to add or modify targets +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +emUser= + +#----------------------------------------------------------------------------- +# Name : emPassword +# Datatype : String +# Description : 
EM Admin user password +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +emPassword= + +#----------------------------------------------------------------------------- +# Name : dvConfiguration +# Datatype : Boolean +# Description : Specify "True" to configure and enable Oracle Database vault +# Valid values : True/False +# Default value : False +# Mandatory : No +#----------------------------------------------------------------------------- +dvConfiguration=false + +#----------------------------------------------------------------------------- +# Name : dvUserName +# Datatype : String +# Description : DataVault Owner +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes, if DataVault option is chosen +#----------------------------------------------------------------------------- +dvUserName= + +#----------------------------------------------------------------------------- +# Name : dvUserPassword +# Datatype : String +# Description : Password for DataVault Owner +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : Yes, if DataVault option is chosen +#----------------------------------------------------------------------------- +dvUserPassword= + +#----------------------------------------------------------------------------- +# Name : dvAccountManagerName +# Datatype : String +# Description : DataVault Account Manager +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +dvAccountManagerName= + +#----------------------------------------------------------------------------- +# Name : dvAccountManagerPassword +# Datatype : String +# Description : Password for DataVault Account Manager +# Valid values : Check Oracle12c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +dvAccountManagerPassword= + +#----------------------------------------------------------------------------- +# Name : olsConfiguration +# Datatype : Boolean +# Description : Specify "True" to configure and enable Oracle Label Security +# Valid values : True/False +# Default value : False +# Mandatory : No +#----------------------------------------------------------------------------- +olsConfiguration=false + +#----------------------------------------------------------------------------- +# Name : datafileJarLocation +# Datatype : String +# Description : Location of the data file jar +# Valid values : Directory containing compressed datafile jar +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +datafileJarLocation={ORACLE_HOME}/assistants/dbca/templates/ + +#----------------------------------------------------------------------------- +# Name : datafileDestination +# Datatype : String +# Description : Location of the data file's +# Valid values : Directory for all the database files +# Default value : $ORACLE_BASE/oradata +# Mandatory : No +#----------------------------------------------------------------------------- +datafileDestination=+DATA/{DB_UNIQUE_NAME}/ + +#----------------------------------------------------------------------------- +# Name : recoveryAreaDestination +# Datatype : String +# Description : Location of the data file's 
+# Valid values : Recovery Area location +# Default value : $ORACLE_BASE/flash_recovery_area +# Mandatory : No +#----------------------------------------------------------------------------- +recoveryAreaDestination= + +#----------------------------------------------------------------------------- +# Name : storageType +# Datatype : String +# Description : Specifies the storage on which the database is to be created +# Valid values : FS (CFS for RAC), ASM +# Default value : FS +# Mandatory : No +#----------------------------------------------------------------------------- +storageType=ASM + +#----------------------------------------------------------------------------- +# Name : diskGroupName +# Datatype : String +# Description : Specifies the disk group name for the storage +# Default value : DATA +# Mandatory : No +#----------------------------------------------------------------------------- +diskGroupName=+DATA/{DB_UNIQUE_NAME}/ + +#----------------------------------------------------------------------------- +# Name : asmsnmpPassword +# Datatype : String +# Description : Password for ASM Monitoring +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +asmsnmpPassword= + +#----------------------------------------------------------------------------- +# Name : recoveryGroupName +# Datatype : String +# Description : Specifies the disk group name for the recovery area +# Default value : RECOVERY +# Mandatory : No +#----------------------------------------------------------------------------- +recoveryGroupName= + +#----------------------------------------------------------------------------- +# Name : characterSet +# Datatype : String +# Description : Character set of the database +# Valid values : Check Oracle12c National Language Support Guide +# Default value : "US7ASCII" +# Mandatory : NO +#----------------------------------------------------------------------------- +characterSet=AL32UTF8 + +#----------------------------------------------------------------------------- +# Name : nationalCharacterSet +# Datatype : String +# Description : National Character set of the database +# Valid values : "UTF8" or "AL16UTF16". For details, check Oracle12c National Language Support Guide +# Default value : "AL16UTF16" +# Mandatory : No +#----------------------------------------------------------------------------- +nationalCharacterSet=AL16UTF16 + +#----------------------------------------------------------------------------- +# Name : registerWithDirService +# Datatype : Boolean +# Description : Specifies whether to register with Directory Service. +# Valid values : TRUE \ FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +registerWithDirService=false + + +#----------------------------------------------------------------------------- +# Name : dirServiceUserName +# Datatype : String +# Description : Specifies the name of the directory service user +# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +dirServiceUserName= + +#----------------------------------------------------------------------------- +# Name : dirServicePassword +# Datatype : String +# Description : The password of the directory service user. +# You can also specify the password at the command prompt instead of here. 
+# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +dirServicePassword= + +#----------------------------------------------------------------------------- +# Name : walletPassword +# Datatype : String +# Description : The password for wallet to created or modified. +# You can also specify the password at the command prompt instead of here. +# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +walletPassword= + +#----------------------------------------------------------------------------- +# Name : listeners +# Datatype : String +# Description : Specifies list of listeners to register the database with. +# By default the database is configured for all the listeners specified in the +# $ORACLE_HOME/network/admin/listener.ora +# Valid values : The list should be comma separated like "listener1,listener2". +# Mandatory : NO +#----------------------------------------------------------------------------- +listeners=LISTENER + +#----------------------------------------------------------------------------- +# Name : variablesFile +# Datatype : String +# Description : Location of the file containing variable value pair +# Valid values : A valid file-system file. The variable value pair format in this file +# is =. Each pair should be in a new line. +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +variablesFile= + +#----------------------------------------------------------------------------- +# Name : variables +# Datatype : String +# Description : comma separated list of name=value pairs. Overrides variables defined in variablefile and templates +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +variables=DB_UNIQUE_NAME=ORCLCDB,ORACLE_BASE=/u01/app/oracle,PDB_NAME=ORCLPDB,DB_NAME=ORCLCDB,ORACLE_HOME=/u01/app/oracle/product/18.3.0/dbhome_1,SID=ORCLCDB + +#----------------------------------------------------------------------------- +# Name : initParams +# Datatype : String +# Description : comma separated list of name=value pairs. 
Overrides initialization parameters defined in templates +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +#initParams=family:dw_helper.instance_mode=read-only,processes=640,nls_language=AMERICAN,pga_aggregate_target=2008MB,sga_target=6022MB,dispatchers=(PROTOCOL=TCP) (SERVICE=orclXDB),db_block_size=8192BYTES,orcl1.undo_tablespace=UNDOTBS1,diagnostic_dest={ORACLE_BASE},cluster_database=true,orcl1.thread=1,audit_file_dest={ORACLE_BASE}/admin/{DB_UNIQUE_NAME}/adump,db_create_file_dest=+DATA/{DB_UNIQUE_NAME}/,nls_territory=AMERICA,local_listener=-oraagent-dummy-,compatible=12.2.0,db_name=orcl,audit_trail=db,orcl1.instance_number=1,remote_login_passwordfile=exclusive,open_cursors=300 +initParams=audit_trail=none,audit_sys_operations=false,remote_login_passwordfile=exclusive + +#----------------------------------------------------------------------------- +# Name : sampleSchema +# Datatype : Boolean +# Description : Specifies whether or not to add the Sample Schemas to your database +# Valid values : TRUE \ FALSE +# Default value : FASLE +# Mandatory : No +#----------------------------------------------------------------------------- +sampleSchema=false + +#----------------------------------------------------------------------------- +# Name : memoryPercentage +# Datatype : String +# Description : percentage of physical memory for Oracle +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +memoryPercentage=40 + +#----------------------------------------------------------------------------- +# Name : databaseType +# Datatype : String +# Description : used for memory distribution when memoryPercentage specified +# Valid values : MULTIPURPOSE|DATA_WAREHOUSING|OLTP +# Default value : MULTIPURPOSE +# Mandatory : NO +#----------------------------------------------------------------------------- +databaseType=MULTIPURPOSE + +#----------------------------------------------------------------------------- +# Name : automaticMemoryManagement +# Datatype : Boolean +# Description : flag to indicate Automatic Memory Management is used +# Valid values : TRUE/FALSE +# Default value : TRUE +# Mandatory : NO +#----------------------------------------------------------------------------- +automaticMemoryManagement=false + +#----------------------------------------------------------------------------- +# Name : totalMemory +# Datatype : String +# Description : total memory in MB to allocate to Oracle +# Valid values : +# Default value : +# Mandatory : NO +#----------------------------------------------------------------------------- +totalMemory=5000 diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/dbca_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/dbca_21c.rsp new file mode 100644 index 0000000000..4b81467bcb --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/dbca_21c.rsp @@ -0,0 +1,59 @@ +responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v21.0.0 +gdbName=###ORACLE_SID### +sid=###ORACLE_SID### +databaseConfigType=###DATABASE_CONFIG_TYPE### +RACOneNodeServiceName= +policyManaged=false +managementPolicy= +createServerPool=false +serverPoolName= +cardinality= +force=false +pqPoolName= +pqCardinality= +createAsContainerDatabase=###CONTAINER_DB_FLAG### +numberOfPDBs=###PDB_COUNT### +pdbName=###ORACLE_PDB### +useLocalUndoForPDBs=true 
+pdbAdminPassword=###ORACLE_PWD### +nodelist=###DB_NODES### +templateName={ORACLE_HOME}/assistants/dbca/templates/General_Purpose.dbc +sysPassword=###ORACLE_PWD### +systemPassword=###ORACLE_PWD### +oracleHomeUserPassword= +emConfiguration=DBEXPRESS +emExpressPort=5500 +runCVUChecks=true +dbsnmpPassword=###ORACLE_PWD### +omsHost= +omsPort= +emUser= +emPassword= +dvConfiguration=false +dvUserName= +dvUserPassword= +dvAccountManagerName= +dvAccountManagerPassword= +olsConfiguration=false +datafileJarLocation={ORACLE_HOME}/assistants/dbca/templates/ +datafileDestination=+DATA/{DB_UNIQUE_NAME}/ +recoveryAreaDestination= +storageType=ASM +diskGroupName=+DATA/{DB_UNIQUE_NAME}/ +asmsnmpPassword= +recoveryGroupName= +characterSet=AL32UTF8 +nationalCharacterSet=AL16UTF16 +registerWithDirService=false +dirServiceUserName= +dirServicePassword= +walletPassword= +listeners=LISTENER +variablesFile= +variables=DB_UNIQUE_NAME=###ORACLE_SID###,ORACLE_BASE=###DB_BASE###,PDB_NAME=###ORACLE_PDB###,DB_NAME=###ORACLE_SID###,ORACLE_HOME=###DB_HOME###,SID=###ORACLE_SID### +initParams=audit_trail=none,audit_sys_operations=false,remote_login_passwordfile=exclusive +sampleSchema=false +memoryPercentage=40 +databaseType=MULTIPURPOSE +automaticMemoryManagement=false +totalMemory=###TOTAL_MEMORY### diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/dbca_21cv1.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/dbca_21cv1.rsp new file mode 100644 index 0000000000..e93f436c08 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/dbca_21cv1.rsp @@ -0,0 +1,613 @@ +############################################################################## +## ## +## DBCA response file ## +## ------------------ ## +## Copyright(c) Oracle Corporation 1998,2020. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +############################################################################## +#------------------------------------------------------------------------------- +# Do not change the following system generated value. +#------------------------------------------------------------------------------- +responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v21.0.0 + +#----------------------------------------------------------------------------- +# Name : gdbName +# Datatype : String +# Description : Global database name of the database +# Valid values : . 
- when database domain isn't NULL +# - when database domain is NULL +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +gdbName= + +#----------------------------------------------------------------------------- +# Name : sid +# Datatype : String +# Description : System identifier (SID) of the database +# Valid values : Check Oracle21c Administrator's Guide +# Default value : specified in GDBNAME +# Mandatory : No +#----------------------------------------------------------------------------- +sid= + +#----------------------------------------------------------------------------- +# Name : databaseConfigType +# Datatype : String +# Description : database conf type as Single Instance, Real Application Cluster or Real Application Cluster One Nodes database +# Valid values : SI\RAC\RACONENODE +# Default value : SI +# Mandatory : No +#----------------------------------------------------------------------------- +databaseConfigType= + +#----------------------------------------------------------------------------- +# Name : RACOneNodeServiceName +# Datatype : String +# Description : Service is required by application to connect to RAC One +# Node Database +# Valid values : Service Name +# Default value : None +# Mandatory : No [required in case DATABASECONFTYPE is set to RACONENODE ] +#----------------------------------------------------------------------------- +RACOneNodeServiceName= + +#----------------------------------------------------------------------------- +# Name : policyManaged +# Datatype : Boolean +# Description : Set to true if Database is policy managed and +# set to false if Database is admin managed +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +policyManaged= + +#----------------------------------------------------------------------------- +## Name : managementPolicy +## Datatype : String +## Description : Set to AUTOMATIC or RANK based on management policy value +## Valid values : AUTOMATIC\RANK +## Default value : AUTOMATIC +## Mandatory : No +##----------------------------------------------------------------------------- +managementPolicy= + +#----------------------------------------------------------------------------- +# Name : createServerPool +# Datatype : Boolean +# Description : Set to true if new server pool need to be created for database +# if this option is specified then the newly created database +# will use this newly created serverpool. +# Multiple serverpoolname can not be specified for database +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +createServerPool= + +#----------------------------------------------------------------------------- +# Name : serverPoolName +# Datatype : String +# Description : Only one serverpool name need to be specified +# if Create Server Pool option is specified. 
+# Comma-separated list of Serverpool names if db need to use +# multiple Server pool +# Valid values : ServerPool name + +# Default value : None +# Mandatory : No [required in case of RAC service centric database] +#----------------------------------------------------------------------------- +serverPoolName= + +#----------------------------------------------------------------------------- +# Name : cardinality +# Datatype : Number +# Description : Specify Cardinality for create server pool operation + +# Valid values : any positive Integer value +# Default value : Number of qualified nodes on cluster +# Mandatory : No [Required when a new serverpool need to be created] +#----------------------------------------------------------------------------- +cardinality= + +#----------------------------------------------------------------------------- +# Name : force +# Datatype : Boolean +# Description : Set to true if new server pool need to be created by force +# if this option is specified then the newly created serverpool +# will be assigned server even if no free servers are available. +# This may affect already running database. +# This flag can be specified for Admin managed as well as policy managed db. +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +force= + +#----------------------------------------------------------------------------- +# Name : pqPoolName +# Datatype : String +# Description : Only one serverpool name needs to be specified +# if create server pool option is specified. +# Comma-separated list of serverpool names if use +# server pool. This is required to +# create Parallel Query (PQ) database. Applicable to Big Cluster +# Valid values : Parallel Query (PQ) pool name +# Default value : None +# Mandatory : No [required in case of RAC service centric database] +#----------------------------------------------------------------------------- +pqPoolName= + +#----------------------------------------------------------------------------- +# Name : pqCardinality +# Datatype : Number +# Description : Specify Cardinality for create server pool operation. 
+# Applicable to Big Cluster +# Valid values : any positive Integer value +# Default value : Number of qualified nodes on cluster +# Mandatory : No [Required when a new serverpool need to be created] +#----------------------------------------------------------------------------- +pqCardinality= + +#----------------------------------------------------------------------------- +# Name : createAsContainerDatabase +# Datatype : boolean +# Description : flag to create database as container database +# Valid values : Check Oracle21c Administrator's Guide +# Default value : false +# Mandatory : No +#----------------------------------------------------------------------------- +createAsContainerDatabase= + +#----------------------------------------------------------------------------- +# Name : numberOfPDBs +# Datatype : Number +# Description : Specify the number of pdb to be created +# Valid values : 0 to 4094 +# Default value : 0 +# Mandatory : No +#----------------------------------------------------------------------------- +numberOfPDBs= + +#----------------------------------------------------------------------------- +# Name : pdbName +# Datatype : String +# Description : Specify the pdbname/pdbanme prefix if one or more pdb need to be created +# Valid values : Check Oracle21c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +pdbName= + +#----------------------------------------------------------------------------- +# Name : useLocalUndoForPDBs +# Datatype : boolean +# Description : Flag to create local undo tablespace for all PDB's. +# Valid values : TRUE\FALSE +# Default value : TRUE +# Mandatory : No +#----------------------------------------------------------------------------- +useLocalUndoForPDBs= + +#----------------------------------------------------------------------------- +# Name : pdbAdminPassword +# Datatype : String +# Description : PDB Administrator user password +# Valid values : Check Oracle21c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- + +pdbAdminPassword= + +#----------------------------------------------------------------------------- +# Name : nodelist +# Datatype : String +# Description : Comma-separated list of cluster nodes +# Valid values : Cluster node names +# Default value : None +# Mandatory : No (Yes for RAC database-centric database ) +#----------------------------------------------------------------------------- +nodelist= + +#----------------------------------------------------------------------------- +# Name : templateName +# Datatype : String +# Description : Name of the template +# Valid values : Template file name +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +templateName= + +#----------------------------------------------------------------------------- +# Name : sysPassword +# Datatype : String +# Description : Password for SYS user +# Valid values : Check Oracle21c Administrator's Guide +# Default value : None +# Mandatory : Yes +#----------------------------------------------------------------------------- +sysPassword= + +#----------------------------------------------------------------------------- +# Name : systemPassword +# Datatype : String +# Description : Password for SYSTEM user +# Valid values : Check Oracle21c Administrator's Guide +# Default value : None +# Mandatory : 
Yes +#----------------------------------------------------------------------------- +systemPassword= + +#----------------------------------------------------------------------------- +# Name : oracleHomeUserPassword +# Datatype : String +# Description : Password for Windows Service user +# Default value : None +# Mandatory : If Oracle home is installed with windows service user +#----------------------------------------------------------------------------- +oracleHomeUserPassword= + +#----------------------------------------------------------------------------- +# Name : emConfiguration +# Datatype : String +# Description : Enterprise Manager Configuration Type +# Valid values : CENTRAL|DBEXPRESS|BOTH|NONE +# Default value : NONE +# Mandatory : No +#----------------------------------------------------------------------------- +emConfiguration= + +#----------------------------------------------------------------------------- +# Name : emExpressPort +# Datatype : Number +# Description : Enterprise Manager Configuration Type +# Valid values : Check Oracle21c Administrator's Guide +# Default value : NONE +# Mandatory : No, will be picked up from DBEXPRESS_HTTPS_PORT env variable +# or auto generates a free port between 5500 and 5599 +#----------------------------------------------------------------------------- +emExpressPort=5500 + +#----------------------------------------------------------------------------- +# Name : runCVUChecks +# Datatype : Boolean +# Description : Specify whether to run Cluster Verification Utility checks +# periodically in Cluster environment +# Valid values : TRUE\FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +runCVUChecks= + +#----------------------------------------------------------------------------- +# Name : dbsnmpPassword +# Datatype : String +# Description : Password for DBSNMP user +# Valid values : Check Oracle21c Administrator's Guide +# Default value : None +# Mandatory : Yes, if emConfiguration is specified or +# the value of runCVUChecks is TRUE +#----------------------------------------------------------------------------- +dbsnmpPassword= + +#----------------------------------------------------------------------------- +# Name : omsHost +# Datatype : String +# Description : EM management server host name +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +omsHost= + +#----------------------------------------------------------------------------- +# Name : omsPort +# Datatype : Number +# Description : EM management server port number +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +omsPort= + +#----------------------------------------------------------------------------- +# Name : emUser +# Datatype : String +# Description : EM Admin username to add or modify targets +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration +#----------------------------------------------------------------------------- +emUser= + +#----------------------------------------------------------------------------- +# Name : emPassword +# Datatype : String +# Description : EM Admin user password +# Default value : None +# Mandatory : Yes, if CENTRAL is specified for emConfiguration 
+#----------------------------------------------------------------------------- +emPassword= + +#----------------------------------------------------------------------------- +# Name : dvConfiguration +# Datatype : Boolean +# Description : Specify "True" to configure and enable Oracle Database vault +# Valid values : True/False +# Default value : False +# Mandatory : No +#----------------------------------------------------------------------------- +dvConfiguration= + +#----------------------------------------------------------------------------- +# Name : dvUserName +# Datatype : String +# Description : DataVault Owner +# Valid values : Check Oracle21c Administrator's Guide +# Default value : None +# Mandatory : Yes, if DataVault option is chosen +#----------------------------------------------------------------------------- +dvUserName= + +#----------------------------------------------------------------------------- +# Name : dvUserPassword +# Datatype : String +# Description : Password for DataVault Owner +# Valid values : Check Oracle21c Administrator's Guide +# Default value : None +# Mandatory : Yes, if DataVault option is chosen +#----------------------------------------------------------------------------- +dvUserPassword= + +#----------------------------------------------------------------------------- +# Name : dvAccountManagerName +# Datatype : String +# Description : DataVault Account Manager +# Valid values : Check Oracle21c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +dvAccountManagerName= + +#----------------------------------------------------------------------------- +# Name : dvAccountManagerPassword +# Datatype : String +# Description : Password for DataVault Account Manager +# Valid values : Check Oracle21c Administrator's Guide +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +dvAccountManagerPassword= + +#----------------------------------------------------------------------------- +# Name : olsConfiguration +# Datatype : Boolean +# Description : Specify "True" to configure and enable Oracle Label Security +# Valid values : True/False +# Default value : False +# Mandatory : No +#----------------------------------------------------------------------------- +olsConfiguration= + +#----------------------------------------------------------------------------- +# Name : datafileJarLocation +# Datatype : String +# Description : Location of the data file jar +# Valid values : Directory containing compressed datafile jar +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +datafileJarLocation= + +#----------------------------------------------------------------------------- +# Name : datafileDestination +# Datatype : String +# Description : Location of the data file's +# Valid values : Directory for all the database files +# Default value : $ORACLE_BASE/oradata +# Mandatory : No +#----------------------------------------------------------------------------- +datafileDestination= + +#----------------------------------------------------------------------------- +# Name : recoveryAreaDestination +# Datatype : String +# Description : Location of the data file's +# Valid values : Recovery Area location +# Default value : $ORACLE_BASE/flash_recovery_area +# Mandatory : No 
+#----------------------------------------------------------------------------- +recoveryAreaDestination= + +#----------------------------------------------------------------------------- +# Name : storageType +# Datatype : String +# Description : Specifies the storage on which the database is to be created +# Valid values : FS (CFS for RAC), ASM +# Default value : FS +# Mandatory : No +#----------------------------------------------------------------------------- +storageType= + +#----------------------------------------------------------------------------- +# Name : diskGroupName +# Datatype : String +# Description : Specifies the disk group name for the storage +# Default value : DATA +# Mandatory : No +#----------------------------------------------------------------------------- +diskGroupName= + +#----------------------------------------------------------------------------- +# Name : asmsnmpPassword +# Datatype : String +# Description : Password for ASM Monitoring +# Default value : None +# Mandatory : No +#----------------------------------------------------------------------------- +asmsnmpPassword= + +#----------------------------------------------------------------------------- +# Name : recoveryGroupName +# Datatype : String +# Description : Specifies the disk group name for the recovery area +# Default value : RECOVERY +# Mandatory : No +#----------------------------------------------------------------------------- +recoveryGroupName= + +#----------------------------------------------------------------------------- +# Name : characterSet +# Datatype : String +# Description : Character set of the database +# Valid values : Check Oracle21c National Language Support Guide +# Default value : "US7ASCII" +# Mandatory : NO +#----------------------------------------------------------------------------- +characterSet= + +#----------------------------------------------------------------------------- +# Name : nationalCharacterSet +# Datatype : String +# Description : National Character set of the database +# Valid values : "UTF8" or "AL16UTF16". For details, check Oracle21c National Language Support Guide +# Default value : "AL16UTF16" +# Mandatory : No +#----------------------------------------------------------------------------- +nationalCharacterSet= + +#----------------------------------------------------------------------------- +# Name : registerWithDirService +# Datatype : Boolean +# Description : Specifies whether to register with Directory Service. +# Valid values : TRUE \ FALSE +# Default value : FALSE +# Mandatory : No +#----------------------------------------------------------------------------- +registerWithDirService= + + +#----------------------------------------------------------------------------- +# Name : dirServiceUserName +# Datatype : String +# Description : Specifies the name of the directory service user +# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +dirServiceUserName= + +#----------------------------------------------------------------------------- +# Name : dirServicePassword +# Datatype : String +# Description : The password of the directory service user. +# You can also specify the password at the command prompt instead of here. 
+# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +dirServicePassword= + +#----------------------------------------------------------------------------- +# Name : walletPassword +# Datatype : String +# Description : The password for wallet to created or modified. +# You can also specify the password at the command prompt instead of here. +# Mandatory : YES, if the value of registerWithDirService is TRUE +#----------------------------------------------------------------------------- +walletPassword= + +#----------------------------------------------------------------------------- +# Name : listeners +# Datatype : String +# Description : Specifies list of listeners to register the database with. +# By default the database is configured for all the listeners specified in the +# $ORACLE_HOME/network/admin/listener.ora +# Valid values : The list should be comma separated like "listener1,listener2". +# Mandatory : NO +#----------------------------------------------------------------------------- +listeners= + +#----------------------------------------------------------------------------- +# Name : variablesFile +# Datatype : String +# Description : Location of the file containing variable value pair +# Valid values : A valid file-system file. The variable value pair format in this file +# is =. Each pair should be in a new line. +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +variablesFile= + +#----------------------------------------------------------------------------- +# Name : variables +# Datatype : String +# Description : comma separated list of name=value pairs. Overrides variables defined in variablefile and templates +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +variables= + +#----------------------------------------------------------------------------- +# Name : initParams +# Datatype : String +# Description : comma separated list of name=value pairs. 
Overrides initialization parameters defined in templates +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +initParams= + +#----------------------------------------------------------------------------- +# Name : sampleSchema +# Datatype : Boolean +# Description : Specifies whether or not to add the Sample Schemas to your database +# Valid values : TRUE \ FALSE +# Default value : FASLE +# Mandatory : No +#----------------------------------------------------------------------------- +sampleSchema= + +#----------------------------------------------------------------------------- +# Name : memoryPercentage +# Datatype : String +# Description : percentage of physical memory for Oracle +# Default value : None +# Mandatory : NO +#----------------------------------------------------------------------------- +memoryPercentage= + +#----------------------------------------------------------------------------- +# Name : databaseType +# Datatype : String +# Description : used for memory distribution when memoryPercentage specified +# Valid values : MULTIPURPOSE|DATA_WAREHOUSING|OLTP +# Default value : MULTIPURPOSE +# Mandatory : NO +#----------------------------------------------------------------------------- +databaseType= + +#----------------------------------------------------------------------------- +# Name : automaticMemoryManagement +# Datatype : Boolean +# Description : flag to indicate Automatic Memory Management is used +# Valid values : TRUE/FALSE +# Default value : TRUE +# Mandatory : NO +#----------------------------------------------------------------------------- +automaticMemoryManagement= + +#----------------------------------------------------------------------------- +# Name : totalMemory +# Datatype : String +# Description : total memory in MB to allocate to Oracle +# Valid values : +# Default value : +# Mandatory : NO +#----------------------------------------------------------------------------- +totalMemory= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/enableRAC.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/enableRAC.sh new file mode 100755 index 0000000000..ea6147df01 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/enableRAC.sh @@ -0,0 +1,19 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Enable RAC feature in Oracle Software +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# shellcheck disable=SC1090 +source /home/"${DB_USER}"/.bashrc + +export ORACLE_HOME=${DB_HOME} +export PATH=${ORACLE_HOME}/bin:/bin:/sbin:/usr/bin +export LD_LIBRARY_PATH=${ORACLE_HOME}/lib:/lib:/usr/lib + +make -f "$DB_HOME"/rdbms/lib/ins_rdbms.mk rac_on +make -f "$DB_HOME"/rdbms/lib/ins_rdbms.mk ioracle diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/fixupPreq.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/fixupPreq.sh new file mode 100755 index 0000000000..978f0b49e6 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/fixupPreq.sh @@ -0,0 +1,30 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. 
+# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Setup the Linux kernel parameter inside the container. Note that some parameter need to be set on container host. +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. + +rpm -Uvh "$GRID_HOME/cv/rpm/cvuqdisk*" +echo "oracle soft nofile 1024" > /etc/security/limits.conf +echo "oracle hard nofile 65536" >> /etc/security/limits.conf +echo "oracle soft nproc 16384" >> /etc/security/limits.conf +echo "oracle hard nproc 16384" >> /etc/security/limits.conf +echo "oracle soft stack 10240" >> /etc/security/limits.conf +echo "oracle hard stack 32768" >> /etc/security/limits.conf +echo "oracle hard memlock 134217728" >> /etc/security/limits.conf +echo "oracle soft memlock 134217728" >> /etc/security/limits.conf +echo "grid soft nofile 1024" >> /etc/security/limits.conf +echo "grid hard nofile 65536" >> /etc/security/limits.conf +echo "grid soft nproc 16384" >> /etc/security/limits.conf +echo "grid hard nproc 16384" >> /etc/security/limits.conf +echo "grid soft stack 10240" >> /etc/security/limits.conf +echo "grid hard stack 32768" >> /etc/security/limits.conf +echo "grid hard memlock 134217728" >> /etc/security/limits.conf +echo "grid soft memlock 134217728" >> /etc/security/limits.conf +echo "ulimit -S -s 10240" >> /home/grid/.bashrc +echo "ulimit -S -s 10240" >> /home/oracle/.bashrc diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/functions.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/functions.sh new file mode 100755 index 0000000000..5d3f26bfaf --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/functions.sh @@ -0,0 +1,196 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Common Function File +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +export logfile=/tmp/orod.log +export logdir=/tmp +export STD_OUT_FILE="/proc/1/fd/1" +export STD_ERR_FILE="/proc/1/fd/2" +export TOP_PID=$$ + +###### Function Related to printing messages and exit the script if error occurred ################## +error_exit() { + # shellcheck disable=SC2155 +local NOW=$(date +"%m-%d-%Y %T %Z") + # Display error message and exit +# echo "${PROGNAME}: ${1:-"Unknown Error"}" 1>&2 + echo "${NOW} : ${PROGNAME}: ${1:-"Unknown Error"}" | tee -a $logfile > $STD_OUT_FILE + kill -s TERM $TOP_PID +} + +print_message () +{ + # shellcheck disable=SC2155 + local NOW=$(date +"%m-%d-%Y %T %Z") + # Display message and return + echo "${NOW} : ${PROGNAME} : ${1:-"Unknown Message"}" | tee -a $logfile > $STD_OUT_FILE + return $? +} + +##################################################################################################### + +####### Function related to IP Checks ############################################################### + +validating_env_vars () +{ +local stat=3 +local ip="${1}" +local alive="${2}" + +print_message "checking IP is in correct format such as xxx.xxx.xxx.xxx" + +if valid_ip "$ip"; then + print_message "IP $ip format check passed!" +else + error_exit "IP $ip is not in correct format..please check!" +fi + +# Checking if Host is alive + +if [ "${alive}" == "true" ]; then + +print_message "Checking if IP is pingable or not!" + +if host_alive "$ip"; then + print_message "IP $ip is pingable ...check passed!" 
+else + error_exit "IP $ip is not pingable..check failed!" +fi + +else + +print_message "Checking if IP is pingable or not!" + +if host_alive "$ip"; then + error_exit "IP $ip is already allocated...check failed!" +else + print_message "IP $ip is not pingable..check passed!" +fi + +fi +} + +check_interface () +{ +local ethcard=$1 +local output + +ip link show | grep "$ethcard" + +output=$? + + if [ $output -eq 0 ];then + return 0 + else + return 1 + fi +} + +valid_ip() +{ + local ip=$1 + local stat=1 + if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then + OIFS=$IFS + IFS='.' + # shellcheck disable=SC2206 + ip=($ip) + IFS=$OIFS + [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \ + && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]] + stat=$? + fi + return $stat +} + +host_alive() +{ + + local ip_or_hostname=$1 + local stat=1 +ping -c 1 -W 1 "$ip_or_hostname" >& /dev/null +# shellcheck disable=SC2181 +if [ $? -eq 0 ]; then + stat=0 + return $stat +else + stat=1 + return $stat +fi + +} + +resolveip(){ + + local host="$1" + if [ -z "$host" ] + then + return 1 + else + # shellcheck disable=SC2155,SC2178 + local ip=$( getent hosts "$host" | awk '{print $1}' ) + # shellcheck disable=SC2128 + if [ -z "$ip" ] + then + # shellcheck disable=SC2178 + ip=$( dig +short "$host" ) + # shellcheck disable=SC2128 + if [ -z "$ip" ] + then + print_message "unable to resolve '$host'" + return 1 + else + # shellcheck disable=SC2128 + print_message "$ip" + return 0 + fi + else + # shellcheck disable=SC2128 + print_message "$ip" + return 0 + fi + fi +} + +################################################################################################################## + +############################################Match an Array element####################### +isStringExist () +{ +local checkthestring="$1" +local stringtocheck="$2" +local stat=1 + +IFS=', ' read -r -a string_array <<< "$checkthestring" + +for ((i=0; i < ${#string_array[@]}; ++i)); do + if [ "${stringtocheck}" == "${string_array[i]}" ]; then + stat=0 + fi +done + return $stat +} + + +######################################################################################### + + +##################################################Password function########################## + +setpasswd () +{ + +local user=$1 +local pass=$2 +echo "$pass" | passwd "$user" --stdin +} + +############################################################################################## diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid.rsp new file mode 100644 index 0000000000..4baedc896d --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid.rsp @@ -0,0 +1,672 @@ +############################################################################### +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. 
## +## ## +############################################################################### + +############################################################################### +## ## +## Instructions to fill this response file ## +## To register and configure 'Grid Infrastructure for Cluster' ## +## - Fill out sections A,B,C,D,E,F and G ## +## - Fill out section G if OCR and voting disk should be placed on ASM ## +## ## +## To register and configure 'Grid Infrastructure for Standalone server' ## +## - Fill out sections A,B and G ## +## ## +## To register software for 'Grid Infrastructure' ## +## - Fill out sections A,B and D ## +## - Provide the cluster nodes in section D when choosing CRS_SWONLY as ## +## installation option in section A ## +## ## +## To upgrade clusterware and/or Automatic storage management of earlier ## +## releases ## +## - Fill out sections A,B,C,D and H ## +## ## +## To add more nodes to the cluster ## +## - Fill out sections A and D ## +## - Provide the cluster nodes in section D when choosing CRS_ADDNODE as ## +## installation option in section A ## +## ## +############################################################################### + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v18.0.0 + +############################################################################### +# # +# SECTION A - BASIC # +# # +############################################################################### + + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION=###INVENTORY### + +#------------------------------------------------------------------------------- +# Specify the installation option. +# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY +# - CRS_CONFIG : To register home and configure Grid Infrastructure for cluster +# - HA_CONFIG : To register home and configure Grid Infrastructure for stand alone server +# - UPGRADE : To register home and upgrade clusterware software of earlier release +# - CRS_SWONLY : To register Grid Infrastructure Software home (can be configured for cluster +# or stand alone server later) +# - HA_SWONLY : To register Grid Infrastructure Software home (can be configured for stand +# alone server later. This is only supported on Windows.) +# - CRS_ADDNODE : To add more nodes to the cluster +# - CRS_DELETE_NODE : To delete nodes to the cluster +#------------------------------------------------------------------------------- +oracle.install.option=CRS_CONFIG + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE=###GRID_BASE### + +################################################################################ +# # +# SECTION B - GROUPS # +# # +# The following three groups need to be assigned for all GI installations. # +# OSDBA and OSOPER can be the same or different. OSASM must be different # +# than the other two. 
# +# The value to be specified for OSDBA, OSOPER and OSASM group is only for # +# Unix based Operating System. # +# These groups are not required for upgrades, as they will be determined # +# from the Oracle home to upgrade. # +# # +################################################################################ +#------------------------------------------------------------------------------- +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.asm.OSDBA=dba + +#------------------------------------------------------------------------------- +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE. +#------------------------------------------------------------------------------- +oracle.install.asm.OSOPER= + +#------------------------------------------------------------------------------- +# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This +# must be different than the previous two. +#------------------------------------------------------------------------------- +oracle.install.asm.OSASM=asmadmin + +################################################################################ +# # +# SECTION C - SCAN # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the type of SCAN configuration for the cluster +# Allowed values : LOCAL_SCAN and SHARED_SCAN +#------------------------------------------------------------------------------- +oracle.install.crs.config.scanType=###SCAN_TYPE### + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_SCAN is being configured for cluster +# Specify the path to the SCAN client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.SCANClientDataFile=###SHARED_SCAN_FILE### + +#------------------------------------------------------------------------------- +# Specify a name for SCAN +# Applicable if LOCAL_SCAN is being configured for the cluster +# If you choose to configure the cluster with GNS with Auto assigned Node VIPs(DHCP),then the scanName should be specified in the format of 'SCAN name.Cluster name.GNS sub-domain' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.scanName=###SCAN_NAME### + +#------------------------------------------------------------------------------- +# Specify a unused port number for SCAN service +#------------------------------------------------------------------------------- + +oracle.install.crs.config.gpnp.scanPort=###SCAN_PORT### + +################################################################################ +# # +# SECTION D - CLUSTER & GNS # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the required cluster configuration +# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP +#------------------------------------------------------------------------------- +oracle.install.crs.config.ClusterConfiguration=###CLUSTER_TYPE### + 
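+#-------------------------------------------------------------------------------
+# Editorial note, not part of the Oracle response file template: the ###NAME###
+# tokens in this file (for example ###CLUSTER_TYPE### above) are placeholders.
+# They appear to be substituted with environment-specific values before the Grid
+# Infrastructure installer is run; grid1.rsp later in this patch shows the same
+# file with concrete values filled in, e.g.
+#
+#   oracle.install.crs.config.ClusterConfiguration=STANDALONE
+#
+# A minimal, hypothetical sketch of such a substitution (the variable name and
+# working-copy path below are assumptions, not taken from this repository):
+#
+#   sed -i "s|###CLUSTER_TYPE###|${CLUSTER_TYPE}|g" /tmp/grid.rsp
+#-------------------------------------------------------------------------------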
+#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure the cluster as Extended, else +# specify 'false' +# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.configureAsExtendedCluster=false + + +#------------------------------------------------------------------------------- +# Specify the Member Cluster Manifest file +# +# Applicable only for MEMBERDB and MEMBERAPP cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.memberClusterManifestFile=###MEMBERDB_FILE### + +#------------------------------------------------------------------------------- +# Specify a name for the Cluster you are creating. +# +# The maximum length allowed for clustername is 15 characters. The name can be +# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-) +# and underscore(_). +# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterName=###CLUSTER_NAME### + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration. +# Specify 'true' if you would like to configure Grid Naming Service(GNS), else +# specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.configureGNS=###CONFIGURE_GNS### + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS. +# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP +# , else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.autoConfigureClusterNodeVIP=###DHCP_CONF### + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure GNS. +# Specify the type of GNS configuration for cluster +# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS +# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration. 
+#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsOption=###GNS_OPTIONS### + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_GNS is being configured for cluster +# Specify the path to the GNS client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsClientDataFile= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to +# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS +# Specify the GNS subdomain and an unused virtual hostname for GNS service +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsSubDomain=###GNS_SUBDOMAIN### +oracle.install.crs.config.gpnp.gnsVIPAddress=###GNSVIP_HOSTNAME### + +#------------------------------------------------------------------------------- +# Specify the list of sites - only if configuring an Extended Cluster +#------------------------------------------------------------------------------- +oracle.install.crs.config.sites= + +#------------------------------------------------------------------------------- +# Specify the list of nodes that have to be configured to be part of the cluster. +# +# The list should a comma-separated list of tuples. Each tuple should be a +# colon-separated string that contains +# - 1 field if you have chosen CRS_SWONLY as installation option, or +# - 1 field if configuring an Application Cluster, or +# - 3 fields if configuring a Flex Cluster +# - 3 fields if adding more nodes to the configured cluster, or +# - 4 fields if configuring an Extended Cluster +# +# The fields should be ordered as follows: +# 1. The first field should be the public node name. +# 2. The second field should be the virtual host name +# (Should be specified as AUTO if you have chosen 'auto configure for VIP' +# i.e. autoConfigureClusterNodeVIP=true) +# 3. The third field indicates the role of node (HUB,LEAF). This has to +# be provided only if Flex Cluster is being configured. +# For Extended Cluster only HUB should be specified for all nodes +# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster. 
+# The 2nd and 3rd fields are not applicable if you have chosen CRS_SWONLY as installation option +# The 2nd and 3rd fields are not applicable if configuring an Application Cluster +# +# Examples +# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2 +# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF +# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2 +# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF +# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2 +# You can specify a range of nodes in the tuple using colon separated fields of format +# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node +# +#------------------------------------------------------------------------------- +#oracle.install.crs.config.clusterNodes=###HOSTNAME###:###HOSTNAME_VIP###:HUB +oracle.install.crs.config.clusterNodes=###CRS_CONFIG_NODES### + +#------------------------------------------------------------------------------- +# The value should be a comma separated strings where each string is as shown below +# InterfaceName:SubnetAddress:InterfaceType +# where InterfaceType can be either "1", "2", "3", "4", or "5" +# InterfaceType stand for the following values +# - 1 : PUBLIC +# - 2 : PRIVATE +# - 3 : DO NOT USE +# - 4 : ASM +# - 5 : ASM & PRIVATE +# +# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3 +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.networkInterfaceList=###NETWORK_STRING### + +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup to store GIMR data. +# Specify 'true' if you would like to separate GIMR data with clusterware data, +# else specify 'false' +# Value should be 'true' for DOMAIN cluster configurations +# Value can be true/false for STANDALONE cluster configurations. 
+#------------------------------------------------------------------------------ +oracle.install.asm.configureGIMRDataDG=###GIMR_DG_FLAG### + +################################################################################ +# # +# SECTION E - STORAGE # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting +# Disks files +# - FLEX_ASM_STORAGE +# - CLIENT_ASM_STORAGE +# +# Applicable only for MEMBERDB cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.storageOption=###STORAGE_OPTIONS_FOR_MEMBERDB### +################################################################################ +# # +# SECTION F - IPMI # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure Intelligent Power Management interface +# (IPMI), else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.useIPMI=false + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure IPMI +# i.e. oracle.install.crs.config.useIPMI=true +# Specify the username and password for using IPMI service +#------------------------------------------------------------------------------- +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +################################################################################ +# # +# SECTION G - ASM # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# ASM Storage Type +# Allowed values are : ASM and ASM_ON_NAS +# ASM_ON_NAS applicable only if +# oracle.install.crs.config.ClusterConfiguration=STANDALONE +#------------------------------------------------------------------------------- +oracle.install.asm.storageOption=ASM + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing OCR/VDSK +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store OCR/VDSK files +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.ocrLocation= +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup on NAS to store GIMR data +# Specify 'true' if you would like to separate GIMR data with clusterware data, else +# specify 'false' +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------ +oracle.install.asmOnNAS.configureGIMRDataDG=false + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing GIMR data +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store the GIMR database +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +# and oracle.install.asmOnNAS.configureGIMRDataDG=true 
+#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.gimrLocation= + +#------------------------------------------------------------------------------- +# Password for SYS user of Oracle ASM +#------------------------------------------------------------------------------- +oracle.install.asm.SYSASMPassword=###PASSWORD### + +#------------------------------------------------------------------------------- +# The ASM DiskGroup +# +# Example: oracle.install.asm.diskGroup.name=data +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.name=###DB_ASM_DISKGROUP### + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.diskGroup.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.redundancy=EXTERNAL + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.diskGroup.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.AUSize=4 + +#------------------------------------------------------------------------------- +# Failure Groups for the disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. +# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create a ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/asm-disk1,,/dev/asm-disk2,,/dev/asm-disk3, +oracle.install.asm.diskGroup.disksWithFailureGroupNames=###ASM_DISKGROUP_FG_DISKS### + +#------------------------------------------------------------------------------- +# List of disks to create a ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.disks=/dev/asm-disk1,/dev/asm-disk2,/dev/asm-disk3 +oracle.install.asm.diskGroup.disks=###ASM_DISKGROUP_DISKS### + 
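+#-------------------------------------------------------------------------------
+# Editorial note, not part of the Oracle response file template: for the
+# NFS-backed storage that this repository targets, ###ASM_DISKGROUP_DISKS### and
+# ###ASM_DISCOVERY_STRING### are expected to resolve to file paths on the shared
+# /oradata volume rather than to block devices, as grid1.rsp later in this patch
+# illustrates:
+#
+#   oracle.install.asm.diskGroup.disks=/oradata/asm_disk01.img,/oradata/asm_disk02.img
+#   oracle.install.asm.diskGroup.diskDiscoveryString=/oradata/asm_*
+#-------------------------------------------------------------------------------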
+#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# The disk discovery string to be used to discover the disks used create a ASM DiskGroup +# +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/* +# For Windows based Operating System: +# oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK* +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm* +oracle.install.asm.diskGroup.diskDiscoveryString=###ASM_DISCOVERY_STRING### + +#------------------------------------------------------------------------------- +# Password for ASMSNMP account +# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances +#------------------------------------------------------------------------------- +oracle.install.asm.monitorPassword=###PASSWORD### + +#------------------------------------------------------------------------------- +# GIMR Storage data ASM DiskGroup +# Applicable only when +# oracle.install.asm.configureGIMRDataDG=true +# Example: oracle.install.asm.GIMRDG.name=MGMT +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.name=###GIMR_DG_NAME### + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.gimrDG.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.redundancy=###GIMR_DG_REDUNDANCY### + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.gimrDG.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.AUSize=1 + +#------------------------------------------------------------------------------- +# Failure Groups for the GIMR storage data ASM disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. 
+# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.FailureGroups=###GIMR_DG_FAILURE_GROUP### + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create GIMR data ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disksWithFailureGroupNames=###GIMR_DISKGROUP_FG_DISKS### + +#------------------------------------------------------------------------------- +# List of disks to create GIMR data ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disks=###GIMR_DISKGROUP_DISKS### + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# Configure AFD - ASM Filter Driver +# Applicable only for FLEX_ASM_STORAGE option +# Specify 'true' if you want to configure AFD, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.asm.configureAFD=false +#------------------------------------------------------------------------------- +# Configure RHPS - Rapid Home Provisioning Service +# Applicable only for DOMAIN cluster configuration +# Specify 'true' if you want to configure RHP service, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.configureRHPS=false + +################################################################################ +# # +# SECTION H - UPGRADE # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify whether to ignore down nodes during upgrade operation. 
+# Value should be 'true' to ignore down nodes otherwise specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.ignoreDownNodes=false +################################################################################ +# # +# MANAGEMENT OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing Oracle Grid Infrastructure +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +# 2. NONE -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +#------------------------------------------------------------------------------- +oracle.install.config.managementOption=NONE + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsPort=0 + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminPassword= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.executeRootScript=false + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. 
+# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.configMethod=ROOT +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoUserName= +#-------------------------------------------------------------------------------------- +# Specify the nodes batch map. +# +# This should be a comma separated list of node:batch pairs. +# During upgrade, you can sequence the automatic execution of root scripts +# by pooling the nodes into batches. +# A maximum of three batches can be specified. +# Installer will execute the root scripts on all the nodes in one batch before +# proceeding to next batch. +# Root script execution on the local node must be in Batch 1. +# Only one type of node role can be used for each batch. +# Root script execution should be done first in all HUB nodes and then, when +# existent, in all the LEAF nodes. +# +# Examples: +# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3 +# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2 +# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3 +# +# Applicable only for UPGRADE install option. +#-------------------------------------------------------------------------------------- +oracle.install.crs.config.batchinfo= +################################################################################ +# # +# APPLICATION CLUSTER OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the Virtual hostname to configure virtual access for your Application +# The value to be specified for Virtual hostname is optional. +#------------------------------------------------------------------------------- +oracle.install.crs.app.applicationAddress= +################################################################################# +# # +# DELETE NODE OPTIONS # +# # +################################################################################# + +#-------------------------------------------------------------------------------- +# Specify the node names to delete nodes from cluster. +# Delete node will be performed only for the remote nodes from the cluster. 
+#-------------------------------------------------------------------------------- +oracle.install.crs.deleteNode.nodes= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid1.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid1.rsp new file mode 100644 index 0000000000..ebfc119b01 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid1.rsp @@ -0,0 +1,671 @@ +############################################################################### +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +## ## +############################################################################### + +############################################################################### +## ## +## Instructions to fill this response file ## +## To register and configure 'Grid Infrastructure for Cluster' ## +## - Fill out sections A,B,C,D,E,F and G ## +## - Fill out section G if OCR and voting disk should be placed on ASM ## +## ## +## To register and configure 'Grid Infrastructure for Standalone server' ## +## - Fill out sections A,B and G ## +## ## +## To register software for 'Grid Infrastructure' ## +## - Fill out sections A,B and D ## +## - Provide the cluster nodes in section D when choosing CRS_SWONLY as ## +## installation option in section A ## +## ## +## To upgrade clusterware and/or Automatic storage management of earlier ## +## releases ## +## - Fill out sections A,B,C,D and H ## +## ## +## To add more nodes to the cluster ## +## - Fill out sections A and D ## +## - Provide the cluster nodes in section D when choosing CRS_ADDNODE as ## +## installation option in section A ## +## ## +############################################################################### + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v18.0.0 + +############################################################################### +# # +# SECTION A - BASIC # +# # +############################################################################### + + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION=/u01/app/oraInventory + +#------------------------------------------------------------------------------- +# Specify the installation option. 
+# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY +# - CRS_CONFIG : To register home and configure Grid Infrastructure for cluster +# - HA_CONFIG : To register home and configure Grid Infrastructure for stand alone server +# - UPGRADE : To register home and upgrade clusterware software of earlier release +# - CRS_SWONLY : To register Grid Infrastructure Software home (can be configured for cluster +# or stand alone server later) +# - HA_SWONLY : To register Grid Infrastructure Software home (can be configured for stand +# alone server later. This is only supported on Windows.) +# - CRS_ADDNODE : To add more nodes to the cluster +# - CRS_DELETE_NODE : To delete nodes to the cluster +#------------------------------------------------------------------------------- +oracle.install.option=CRS_CONFIG + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE=/u01/app/grid + +################################################################################ +# # +# SECTION B - GROUPS # +# # +# The following three groups need to be assigned for all GI installations. # +# OSDBA and OSOPER can be the same or different. OSASM must be different # +# than the other two. # +# The value to be specified for OSDBA, OSOPER and OSASM group is only for # +# Unix based Operating System. # +# These groups are not required for upgrades, as they will be determined # +# from the Oracle home to upgrade. # +# # +################################################################################ +#------------------------------------------------------------------------------- +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.asm.OSDBA=dba + +#------------------------------------------------------------------------------- +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE. +#------------------------------------------------------------------------------- +oracle.install.asm.OSOPER= + +#------------------------------------------------------------------------------- +# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This +# must be different than the previous two. 
+#------------------------------------------------------------------------------- +oracle.install.asm.OSASM=asmadmin + +################################################################################ +# # +# SECTION C - SCAN # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the type of SCAN configuration for the cluster +# Allowed values : LOCAL_SCAN and SHARED_SCAN +#------------------------------------------------------------------------------- +oracle.install.crs.config.scanType=LOCAL_SCAN + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_SCAN is being configured for cluster +# Specify the path to the SCAN client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.SCANClientDataFile= + +#------------------------------------------------------------------------------- +# Specify a name for SCAN +# Applicable if LOCAL_SCAN is being configured for the cluster +# If you choose to configure the cluster with GNS with Auto assigned Node VIPs(DHCP),then the scanName should be specified in the format of 'SCAN name.Cluster name.GNS sub-domain' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.scanName=racnode-scan + +#------------------------------------------------------------------------------- +# Specify a unused port number for SCAN service +#------------------------------------------------------------------------------- + +oracle.install.crs.config.gpnp.scanPort=1521 + +################################################################################ +# # +# SECTION D - CLUSTER & GNS # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the required cluster configuration +# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP +#------------------------------------------------------------------------------- +oracle.install.crs.config.ClusterConfiguration=STANDALONE + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure the cluster as Extended, else +# specify 'false' +# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.configureAsExtendedCluster=false + + +#------------------------------------------------------------------------------- +# Specify the Member Cluster Manifest file +# +# Applicable only for MEMBERDB and MEMBERAPP cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.memberClusterManifestFile= + +#------------------------------------------------------------------------------- +# Specify a name for the Cluster you are creating. +# +# The maximum length allowed for clustername is 15 characters. The name can be +# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-) +# and underscore(_). 
+# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterName=rac01cluster + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration. +# Specify 'true' if you would like to configure Grid Naming Service(GNS), else +# specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.configureGNS= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS. +# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP +# , else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.autoConfigureClusterNodeVIP=false + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure GNS. +# Specify the type of GNS configuration for cluster +# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS +# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration. +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsOption= + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_GNS is being configured for cluster +# Specify the path to the GNS client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsClientDataFile= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to +# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS +# Specify the GNS subdomain and an unused virtual hostname for GNS service +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= + +#------------------------------------------------------------------------------- +# Specify the list of sites - only if configuring an Extended Cluster +#------------------------------------------------------------------------------- +oracle.install.crs.config.sites= + +#------------------------------------------------------------------------------- +# Specify the list of nodes that have to be configured to be part of the cluster. +# +# The list should a comma-separated list of tuples. Each tuple should be a +# colon-separated string that contains +# - 1 field if you have chosen CRS_SWONLY as installation option, or +# - 1 field if configuring an Application Cluster, or +# - 3 fields if configuring a Flex Cluster +# - 3 fields if adding more nodes to the configured cluster, or +# - 4 fields if configuring an Extended Cluster +# +# The fields should be ordered as follows: +# 1. The first field should be the public node name. +# 2. The second field should be the virtual host name +# (Should be specified as AUTO if you have chosen 'auto configure for VIP' +# i.e. autoConfigureClusterNodeVIP=true) +# 3. The third field indicates the role of node (HUB,LEAF). This has to +# be provided only if Flex Cluster is being configured. 
+# For Extended Cluster only HUB should be specified for all nodes +# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster. +# The 2nd and 3rd fields are not applicable if you have chosen CRS_SWONLY as installation option +# The 2nd and 3rd fields are not applicable if configuring an Application Cluster +# +# Examples +# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2 +# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF +# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2 +# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF +# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2 +# You can specify a range of nodes in the tuple using colon separated fields of format +# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterNodes=racnode1:racnode1-vip:HUB,racnode2:racnode2-vip:HUB + +#------------------------------------------------------------------------------- +# The value should be a comma separated strings where each string is as shown below +# InterfaceName:SubnetAddress:InterfaceType +# where InterfaceType can be either "1", "2", "3", "4", or "5" +# InterfaceType stand for the following values +# - 1 : PUBLIC +# - 2 : PRIVATE +# - 3 : DO NOT USE +# - 4 : ASM +# - 5 : ASM & PRIVATE +# +# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3 +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.networkInterfaceList=eth0:192.168.17.0:5,eth1:172.16.1.0:1 + +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup to store GIMR data. +# Specify 'true' if you would like to separate GIMR data with clusterware data, +# else specify 'false' +# Value should be 'true' for DOMAIN cluster configurations +# Value can be true/false for STANDALONE cluster configurations. 
+#------------------------------------------------------------------------------ +oracle.install.asm.configureGIMRDataDG=false + +################################################################################ +# # +# SECTION E - STORAGE # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting +# Disks files +# - FLEX_ASM_STORAGE +# - CLIENT_ASM_STORAGE +# +# Applicable only for MEMBERDB cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.storageOption= +################################################################################ +# # +# SECTION F - IPMI # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure Intelligent Power Management interface +# (IPMI), else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.useIPMI=false + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure IPMI +# i.e. oracle.install.crs.config.useIPMI=true +# Specify the username and password for using IPMI service +#------------------------------------------------------------------------------- +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +################################################################################ +# # +# SECTION G - ASM # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# ASM Storage Type +# Allowed values are : ASM and ASM_ON_NAS +# ASM_ON_NAS applicable only if +# oracle.install.crs.config.ClusterConfiguration=STANDALONE +#------------------------------------------------------------------------------- +oracle.install.asm.storageOption=ASM + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing OCR/VDSK +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store OCR/VDSK files +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.ocrLocation= +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup on NAS to store GIMR data +# Specify 'true' if you would like to separate GIMR data with clusterware data, else +# specify 'false' +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------ +oracle.install.asmOnNAS.configureGIMRDataDG=false + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing GIMR data +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store the GIMR database +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +# and oracle.install.asmOnNAS.configureGIMRDataDG=true 
+#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.gimrLocation= + +#------------------------------------------------------------------------------- +# Password for SYS user of Oracle ASM +#------------------------------------------------------------------------------- +oracle.install.asm.SYSASMPassword=Oracle_12c + +#------------------------------------------------------------------------------- +# The ASM DiskGroup +# +# Example: oracle.install.asm.diskGroup.name=data +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.name=DATA + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.diskGroup.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.redundancy=EXTERNAL + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.diskGroup.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.AUSize=4 + +#------------------------------------------------------------------------------- +# Failure Groups for the disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. +# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create a ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/asm-disk1,,/dev/asm-disk2,,/dev/asm-disk3, +oracle.install.asm.diskGroup.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create a ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.disks=/dev/asm-disk1,/dev/asm-disk2,/dev/asm-disk3 +oracle.install.asm.diskGroup.disks=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img + 
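+#-------------------------------------------------------------------------------
+# Editorial note, not part of the response file: the five paths above are plain
+# files on the shared /oradata volume rather than block devices. In this
+# repository they are expected to be provisioned by the NFS storage server
+# container; purely as an illustration (the file names match the list above,
+# while the 10 GiB size is an assumed example, not a value from this
+# repository), such ASM candidate files could be pre-created with:
+#
+#   for i in 01 02 03 04 05; do
+#     dd if=/dev/zero of=/oradata/asm_disk${i}.img bs=1M count=10240
+#   done
+#-------------------------------------------------------------------------------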
+#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# The disk discovery string to be used to discover the disks used create a ASM DiskGroup +# +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/* +# For Windows based Operating System: +# oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK* +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm* +oracle.install.asm.diskGroup.diskDiscoveryString=/oradata/asm_* + +#------------------------------------------------------------------------------- +# Password for ASMSNMP account +# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances +#------------------------------------------------------------------------------- +oracle.install.asm.monitorPassword=Oracle_12c + +#------------------------------------------------------------------------------- +# GIMR Storage data ASM DiskGroup +# Applicable only when +# oracle.install.asm.configureGIMRDataDG=true +# Example: oracle.install.asm.GIMRDG.name=MGMT +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.name= + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.gimrDG.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.redundancy= + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.gimrDG.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.AUSize=1 + +#------------------------------------------------------------------------------- +# Failure Groups for the GIMR storage data ASM disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. 
+# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create GIMR data ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create GIMR data ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disks= + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# Configure AFD - ASM Filter Driver +# Applicable only for FLEX_ASM_STORAGE option +# Specify 'true' if you want to configure AFD, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.asm.configureAFD=false +#------------------------------------------------------------------------------- +# Configure RHPS - Rapid Home Provisioning Service +# Applicable only for DOMAIN cluster configuration +# Specify 'true' if you want to configure RHP service, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.configureRHPS=false + +################################################################################ +# # +# SECTION H - UPGRADE # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify whether to ignore down nodes during upgrade operation. 
+# Value should be 'true' to ignore down nodes otherwise specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.ignoreDownNodes=false +################################################################################ +# # +# MANAGEMENT OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing Oracle Grid Infrastructure +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +# 2. NONE -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +#------------------------------------------------------------------------------- +oracle.install.config.managementOption=NONE + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsPort=0 + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminPassword= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.executeRootScript=false + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. 
+# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.configMethod=ROOT +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoUserName= +#-------------------------------------------------------------------------------------- +# Specify the nodes batch map. +# +# This should be a comma separated list of node:batch pairs. +# During upgrade, you can sequence the automatic execution of root scripts +# by pooling the nodes into batches. +# A maximum of three batches can be specified. +# Installer will execute the root scripts on all the nodes in one batch before +# proceeding to next batch. +# Root script execution on the local node must be in Batch 1. +# Only one type of node role can be used for each batch. +# Root script execution should be done first in all HUB nodes and then, when +# existent, in all the LEAF nodes. +# +# Examples: +# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3 +# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2 +# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3 +# +# Applicable only for UPGRADE install option. +#-------------------------------------------------------------------------------------- +oracle.install.crs.config.batchinfo= +################################################################################ +# # +# APPLICATION CLUSTER OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the Virtual hostname to configure virtual access for your Application +# The value to be specified for Virtual hostname is optional. +#------------------------------------------------------------------------------- +oracle.install.crs.app.applicationAddress= +################################################################################# +# # +# DELETE NODE OPTIONS # +# # +################################################################################# + +#-------------------------------------------------------------------------------- +# Specify the node names to delete nodes from cluster. +# Delete node will be performed only for the remote nodes from the cluster. 
+#-------------------------------------------------------------------------------- +oracle.install.crs.deleteNode.nodes= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid_addnode.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid_addnode.rsp new file mode 100644 index 0000000000..7692346e3f --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid_addnode.rsp @@ -0,0 +1,672 @@ +############################################################################### +## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +## ## +############################################################################### + +############################################################################### +## ## +## Instructions to fill this response file ## +## To register and configure 'Grid Infrastructure for Cluster' ## +## - Fill out sections A,B,C,D,E,F and G ## +## - Fill out section G if OCR and voting disk should be placed on ASM ## +## ## +## To register and configure 'Grid Infrastructure for Standalone server' ## +## - Fill out sections A,B and G ## +## ## +## To register software for 'Grid Infrastructure' ## +## - Fill out sections A,B and D ## +## - Provide the cluster nodes in section D when choosing CRS_SWONLY as ## +## installation option in section A ## +## ## +## To upgrade clusterware and/or Automatic storage management of earlier ## +## releases ## +## - Fill out sections A,B,C,D and H ## +## ## +## To add more nodes to the cluster ## +## - Fill out sections A and D ## +## - Provide the cluster nodes in section D when choosing CRS_ADDNODE as ## +## installation option in section A ## +## ## +############################################################################### + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v18.0.0 + +############################################################################### +# # +# SECTION A - BASIC # +# # +############################################################################### + + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION=###INVENTORY### + +#------------------------------------------------------------------------------- +# Specify the installation option. 
+# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY +# - CRS_CONFIG : To register home and configure Grid Infrastructure for cluster +# - HA_CONFIG : To register home and configure Grid Infrastructure for stand alone server +# - UPGRADE : To register home and upgrade clusterware software of earlier release +# - CRS_SWONLY : To register Grid Infrastructure Software home (can be configured for cluster +# or stand alone server later) +# - HA_SWONLY : To register Grid Infrastructure Software home (can be configured for stand +# alone server later. This is only supported on Windows.) +# - CRS_ADDNODE : To add more nodes to the cluster +# - CRS_DELETE_NODE : To delete nodes to the cluster +#------------------------------------------------------------------------------- +oracle.install.option=CRS_ADDNODE + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE=###GRID_BASE### + +################################################################################ +# # +# SECTION B - GROUPS # +# # +# The following three groups need to be assigned for all GI installations. # +# OSDBA and OSOPER can be the same or different. OSASM must be different # +# than the other two. # +# The value to be specified for OSDBA, OSOPER and OSASM group is only for # +# Unix based Operating System. # +# These groups are not required for upgrades, as they will be determined # +# from the Oracle home to upgrade. # +# # +################################################################################ +#------------------------------------------------------------------------------- +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.asm.OSDBA=asmdba + +#------------------------------------------------------------------------------- +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE. +#------------------------------------------------------------------------------- +oracle.install.asm.OSOPER=asmoper + +#------------------------------------------------------------------------------- +# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This +# must be different than the previous two. 
+#------------------------------------------------------------------------------- +oracle.install.asm.OSASM=asmadmin + +################################################################################ +# # +# SECTION C - SCAN # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the type of SCAN configuration for the cluster +# Allowed values : LOCAL_SCAN and SHARED_SCAN +#------------------------------------------------------------------------------- +oracle.install.crs.config.scanType= + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_SCAN is being configured for cluster +# Specify the path to the SCAN client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.SCANClientDataFile= + +#------------------------------------------------------------------------------- +# Specify a name for SCAN +# Applicable if LOCAL_SCAN is being configured for the cluster +# If you choose to configure the cluster with GNS with Auto assigned Node VIPs(DHCP),then the scanName should be specified in the format of 'SCAN name.Cluster name.GNS sub-domain' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.scanName= + +#------------------------------------------------------------------------------- +# Specify a unused port number for SCAN service +#------------------------------------------------------------------------------- + +oracle.install.crs.config.gpnp.scanPort= + +################################################################################ +# # +# SECTION D - CLUSTER & GNS # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the required cluster configuration +# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP +#------------------------------------------------------------------------------- +oracle.install.crs.config.ClusterConfiguration= + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure the cluster as Extended, else +# specify 'false' +# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.configureAsExtendedCluster=false + + +#------------------------------------------------------------------------------- +# Specify the Member Cluster Manifest file +# +# Applicable only for MEMBERDB and MEMBERAPP cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.memberClusterManifestFile= + +#------------------------------------------------------------------------------- +# Specify a name for the Cluster you are creating. +# +# The maximum length allowed for clustername is 15 characters. The name can be +# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-) +# and underscore(_). 
+# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterName= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration. +# Specify 'true' if you would like to configure Grid Naming Service(GNS), else +# specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.configureGNS=false + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS. +# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP +# , else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.autoConfigureClusterNodeVIP=false + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure GNS. +# Specify the type of GNS configuration for cluster +# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS +# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration. +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_GNS is being configured for cluster +# Specify the path to the GNS client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsClientDataFile= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to +# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS +# Specify the GNS subdomain and an unused virtual hostname for GNS service +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= + +#------------------------------------------------------------------------------- +# Specify the list of sites - only if configuring an Extended Cluster +#------------------------------------------------------------------------------- +oracle.install.crs.config.sites= + +#------------------------------------------------------------------------------- +# Specify the list of nodes that have to be configured to be part of the cluster. +# +# The list should a comma-separated list of tuples. Each tuple should be a +# colon-separated string that contains +# - 1 field if you have chosen CRS_SWONLY as installation option, or +# - 1 field if configuring an Application Cluster, or +# - 3 fields if configuring a Flex Cluster +# - 3 fields if adding more nodes to the configured cluster, or +# - 4 fields if configuring an Extended Cluster +# +# The fields should be ordered as follows: +# 1. The first field should be the public node name. +# 2. The second field should be the virtual host name +# (Should be specified as AUTO if you have chosen 'auto configure for VIP' +# i.e. autoConfigureClusterNodeVIP=true) +# 3. The third field indicates the role of node (HUB,LEAF). 
This has to +# be provided only if Flex Cluster is being configured. +# For Extended Cluster only HUB should be specified for all nodes +# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster. +# The 2nd and 3rd fields are not applicable if you have chosen CRS_SWONLY as installation option +# The 2nd and 3rd fields are not applicable if configuring an Application Cluster +# +# Examples +# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2 +# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF +# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2 +# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF +# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2 +# You can specify a range of nodes in the tuple using colon separated fields of format +# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node +# +#------------------------------------------------------------------------------- +#oracle.install.crs.config.clusterNodes=###PUBLIC_HOSTNAME###:###HOSTNAME_VIP###:HUB +oracle.install.crs.config.clusterNodes=###CRS_CONFIG_NODES### + +#------------------------------------------------------------------------------- +# The value should be a comma separated strings where each string is as shown below +# InterfaceName:SubnetAddress:InterfaceType +# where InterfaceType can be either "1", "2", "3", "4", or "5" +# InterfaceType stand for the following values +# - 1 : PUBLIC +# - 2 : PRIVATE +# - 3 : DO NOT USE +# - 4 : ASM +# - 5 : ASM & PRIVATE +# +# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3 +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.networkInterfaceList= + +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup to store GIMR data. +# Specify 'true' if you would like to separate GIMR data with clusterware data, +# else specify 'false' +# Value should be 'true' for DOMAIN cluster configurations +# Value can be true/false for STANDALONE cluster configurations. 
+#------------------------------------------------------------------------------ +oracle.install.asm.configureGIMRDataDG=false + +################################################################################ +# # +# SECTION E - STORAGE # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting +# Disks files +# - FLEX_ASM_STORAGE +# - CLIENT_ASM_STORAGE +# +# Applicable only for MEMBERDB cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.storageOption= +################################################################################ +# # +# SECTION F - IPMI # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure Intelligent Power Management interface +# (IPMI), else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.useIPMI=false + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure IPMI +# i.e. oracle.install.crs.config.useIPMI=true +# Specify the username and password for using IPMI service +#------------------------------------------------------------------------------- +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +################################################################################ +# # +# SECTION G - ASM # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# ASM Storage Type +# Allowed values are : ASM and ASM_ON_NAS +# ASM_ON_NAS applicable only if +# oracle.install.crs.config.ClusterConfiguration=STANDALONE +#------------------------------------------------------------------------------- +oracle.install.asm.storageOption=ASM + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing OCR/VDSK +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store OCR/VDSK files +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.ocrLocation= +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup on NAS to store GIMR data +# Specify 'true' if you would like to separate GIMR data with clusterware data, else +# specify 'false' +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +#------------------------------------------------------------------------------ +oracle.install.asmOnNAS.configureGIMRDataDG=false + +#------------------------------------------------------------------------------- +# NAS location to create ASM disk group for storing GIMR data +# Specify the NAS location where you want the ASM disk group to be created +# to be used to store the GIMR database +# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS +# and oracle.install.asmOnNAS.configureGIMRDataDG=true 
+#------------------------------------------------------------------------------- +oracle.install.asmOnNAS.gimrLocation= + +#------------------------------------------------------------------------------- +# Password for SYS user of Oracle ASM +#------------------------------------------------------------------------------- +oracle.install.asm.SYSASMPassword= + +#------------------------------------------------------------------------------- +# The ASM DiskGroup +# +# Example: oracle.install.asm.diskGroup.name=data +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.name=DATA + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.diskGroup.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.redundancy= + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.diskGroup.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.AUSize=1 + +#------------------------------------------------------------------------------- +# Failure Groups for the disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. +# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create a ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/asm-disk1,,/dev/asm-disk2,,/dev/asm-disk3, +oracle.install.asm.diskGroup.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create a ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.disks=/dev/asm-disk1,/dev/asm-disk2,/dev/asm-disk3 +oracle.install.asm.diskGroup.disks= + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. 
+# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# The disk discovery string to be used to discover the disks used create a ASM DiskGroup +# +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/* +# For Windows based Operating System: +# oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK* +# +#------------------------------------------------------------------------------- +#oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm* +oracle.install.asm.diskGroup.diskDiscoveryString= + +#------------------------------------------------------------------------------- +# Password for ASMSNMP account +# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances +#------------------------------------------------------------------------------- +oracle.install.asm.monitorPassword= + +#------------------------------------------------------------------------------- +# GIMR Storage data ASM DiskGroup +# Applicable only when +# oracle.install.asm.configureGIMRDataDG=true +# Example: oracle.install.asm.GIMRDG.name=MGMT +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.name= + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX# - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.gimrDG.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.redundancy= + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.gimrDG.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.AUSize=1 + +#------------------------------------------------------------------------------- +# Failure Groups for the GIMR storage data ASM disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. 
+# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create GIMR data ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create GIMR data ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disks= + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# Configure AFD - ASM Filter Driver +# Applicable only for FLEX_ASM_STORAGE option +# Specify 'true' if you want to configure AFD, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.asm.configureAFD=false +#------------------------------------------------------------------------------- +# Configure RHPS - Rapid Home Provisioning Service +# Applicable only for DOMAIN cluster configuration +# Specify 'true' if you want to configure RHP service, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.configureRHPS=false + +################################################################################ +# # +# SECTION H - UPGRADE # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify whether to ignore down nodes during upgrade operation. 
+# Value should be 'true' to ignore down nodes otherwise specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.ignoreDownNodes=false +################################################################################ +# # +# MANAGEMENT OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing Oracle Grid Infrastructure +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +# 2. NONE -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +#------------------------------------------------------------------------------- +oracle.install.config.managementOption=NONE + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsPort=0 + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminPassword= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.executeRootScript=false + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. 
+# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.configMethod=ROOT +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoUserName= +#-------------------------------------------------------------------------------------- +# Specify the nodes batch map. +# +# This should be a comma separated list of node:batch pairs. +# During upgrade, you can sequence the automatic execution of root scripts +# by pooling the nodes into batches. +# A maximum of three batches can be specified. +# Installer will execute the root scripts on all the nodes in one batch before +# proceeding to next batch. +# Root script execution on the local node must be in Batch 1. +# Only one type of node role can be used for each batch. +# Root script execution should be done first in all HUB nodes and then, when +# existent, in all the LEAF nodes. +# +# Examples: +# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3 +# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2 +# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3 +# +# Applicable only for UPGRADE install option. +#-------------------------------------------------------------------------------------- +oracle.install.crs.config.batchinfo= +################################################################################ +# # +# APPLICATION CLUSTER OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the Virtual hostname to configure virtual access for your Application +# The value to be specified for Virtual hostname is optional. +#------------------------------------------------------------------------------- +oracle.install.crs.app.applicationAddress= +################################################################################# +# # +# DELETE NODE OPTIONS # +# # +################################################################################# + +#-------------------------------------------------------------------------------- +# Specify the node names to delete nodes from cluster. +# Delete node will be performed only for the remote nodes from the cluster. 
+#-------------------------------------------------------------------------------- +oracle.install.crs.deleteNode.nodes= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid_addnode_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid_addnode_21c.rsp new file mode 100644 index 0000000000..9aa74d2c44 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid_addnode_21c.rsp @@ -0,0 +1,67 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 +INVENTORY_LOCATION=###INVENTORY### +oracle.install.option=CRS_ADDNODE +ORACLE_BASE=###GRID_BASE### +oracle.install.asm.OSDBA=asmdba +oracle.install.asm.OSOPER=asmoper +oracle.install.asm.OSASM=asmadmin +oracle.install.crs.config.scanType= +oracle.install.crs.config.SCANClientDataFile= +oracle.install.crs.config.gpnp.scanName= +oracle.install.crs.config.gpnp.scanPort= +oracle.install.crs.config.ClusterConfiguration= +oracle.install.crs.config.configureAsExtendedCluster=false +oracle.install.crs.config.clusterName= +oracle.install.crs.config.gpnp.configureGNS=false +oracle.install.crs.config.autoConfigureClusterNodeVIP=false +oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS +oracle.install.crs.config.gpnp.gnsClientDataFile= +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= +oracle.install.crs.config.sites= +oracle.install.crs.config.clusterNodes=###CRS_CONFIG_NODES### +oracle.install.crs.config.networkInterfaceList= +oracle.install.crs.config.storageOption= +oracle.install.crs.exascale.vault.name= +oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations= +oracle.install.crs.config.sharedFileSystemStorage.ocrLocations= +oracle.install.asm.ClientDataFile= +oracle.install.crs.config.useIPMI=false +oracle.install.crs.config.ipmi.bmcBinpath= +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +oracle.install.asm.SYSASMPassword= +oracle.install.asm.diskGroup.name=DATA +oracle.install.asm.diskGroup.redundancy= +oracle.install.asm.diskGroup.AUSize=1 +oracle.install.asm.diskGroup.FailureGroups= +oracle.install.asm.diskGroup.disksWithFailureGroupNames= +oracle.install.asm.diskGroup.disks= +oracle.install.asm.diskGroup.quorumFailureGroupNames= +oracle.install.asm.diskGroup.diskDiscoveryString= +oracle.install.asm.monitorPassword= +oracle.install.asm.configureAFD=false +oracle.install.crs.configureRHPS=false +oracle.install.crs.config.ignoreDownNodes=false +oracle.install.crs.configureGIMR= +oracle.install.crs.configureRemoteGIMR= +oracle.install.crs.RemoteGIMRCredFile= +oracle.install.asm.configureGIMRDataDG= +oracle.install.asm.gimrDG.name= +oracle.install.asm.gimrDG.redundancy= +oracle.install.asm.gimrDG.AUSize=1 +oracle.install.asm.gimrDG.FailureGroups= +oracle.install.asm.gimrDG.disksWithFailureGroupNames= +oracle.install.asm.gimrDG.disks= +oracle.install.asm.gimrDG.quorumFailureGroupNames= +oracle.install.config.managementOption=NONE +oracle.install.config.omsHost= +oracle.install.config.omsPort=0 +oracle.install.config.emAdminUser= +oracle.install.config.emAdminPassword= +oracle.install.crs.rootconfig.executeRootScript=false +oracle.install.crs.rootconfig.configMethod=ROOT +oracle.install.crs.rootconfig.sudoPath= +oracle.install.crs.rootconfig.sudoUserName= +oracle.install.crs.config.batchinfo= +oracle.install.crs.deleteNode.nodes= diff --git 
a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid_sw_install_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid_sw_install_21c.rsp new file mode 100644 index 0000000000..d93b93820b --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/grid_sw_install_21c.rsp @@ -0,0 +1,661 @@ +############################################################################### +## Copyright(c) Oracle Corporation 1998,2020. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. ## +## ## +############################################################################### + +############################################################################### +## ## +## Instructions to fill this response file ## +## To register and configure 'Grid Infrastructure for Cluster' ## +## - Fill out sections A,B,C,D,E,F and G ## +## - Fill out section G if OCR and voting disk should be placed on ASM ## +## ## +## To register and configure 'Grid Infrastructure for Standalone server' ## +## - Fill out sections A,B and G ## +## ## +## To register software for 'Grid Infrastructure' ## +## - Fill out sections A,B and D ## +## - Provide the cluster nodes in section D when choosing CRS_SWONLY as ## +## installation option in section A ## +## ## +## To upgrade clusterware and/or Automatic storage management of earlier ## +## releases ## +## - Fill out sections A,B,C,D and H ## +## ## +## To add more nodes to the cluster ## +## - Fill out sections A and D ## +## - Provide the cluster nodes in section D when choosing CRS_ADDNODE as ## +## installation option in section A ## +## ## +############################################################################### + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 + +############################################################################### +# # +# SECTION A - BASIC # +# # +############################################################################### + + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION=###INVENTORY### + +#------------------------------------------------------------------------------- +# Specify the installation option. 
+# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY +# - CRS_CONFIG : To register home and configure Grid Infrastructure for cluster +# - HA_CONFIG : To register home and configure Grid Infrastructure for stand alone server +# - UPGRADE : To register home and upgrade clusterware software of earlier release +# - CRS_SWONLY : To register Grid Infrastructure Software home (can be configured for cluster +# or stand alone server later) +# - HA_SWONLY : To register Grid Infrastructure Software home (can be configured for stand +# alone server later. This is only supported on Windows.) +# - CRS_ADDNODE : To add more nodes to the cluster +# - CRS_DELETE_NODE : To delete nodes to the cluster +#------------------------------------------------------------------------------- +oracle.install.option=###INSTALL_TYPE### + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE=###GRID_BASE### + +################################################################################ +# # +# SECTION B - GROUPS # +# # +# The following three groups need to be assigned for all GI installations. # +# OSDBA and OSOPER can be the same or different. OSASM must be different # +# than the other two. # +# The value to be specified for OSDBA, OSOPER and OSASM group is only for # +# Unix based Operating System. # +# These groups are not required for upgrades, as they will be determined # +# from the Oracle home to upgrade. # +# # +################################################################################ +#------------------------------------------------------------------------------- +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.asm.OSDBA=asmdba + +#------------------------------------------------------------------------------- +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE. +#------------------------------------------------------------------------------- +oracle.install.asm.OSOPER=asmoper + +#------------------------------------------------------------------------------- +# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This +# must be different than the previous two. 
+#------------------------------------------------------------------------------- +oracle.install.asm.OSASM=asmadmin + +################################################################################ +# # +# SECTION C - SCAN # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the type of SCAN configuration for the cluster +# Allowed values : LOCAL_SCAN and SHARED_SCAN +#------------------------------------------------------------------------------- +oracle.install.crs.config.scanType=LOCAL_SCAN + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_SCAN is being configured for cluster +# Specify the path to the SCAN client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.SCANClientDataFile= + +#------------------------------------------------------------------------------- +# Specify a name for SCAN +# Applicable if LOCAL_SCAN is being configured for the cluster +# If you choose to configure the cluster with GNS with Auto assigned Node VIPs(DHCP),then the scanName should be specified in the format of 'SCAN name.Cluster name.GNS sub-domain' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.scanName= + +#------------------------------------------------------------------------------- +# Specify a unused port number for SCAN service +#------------------------------------------------------------------------------- + +oracle.install.crs.config.gpnp.scanPort= + + +################################################################################ +# # +# SECTION D - CLUSTER & GNS # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the required cluster configuration +# Allowed values: STANDALONE, DOMAIN +#------------------------------------------------------------------------------- +oracle.install.crs.config.ClusterConfiguration= + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure the cluster as Extended, else +# specify 'false' +# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.configureAsExtendedCluster= + + +#------------------------------------------------------------------------------- +# Specify a name for the Cluster you are creating. +# +# The maximum length allowed for clustername is 63 characters. The name can be +# any combination of lower and uppercase alphabets (A - Z), (0 - 9) and hyphens (-). +# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterName= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE, DOMAIN cluster configuration. 
+# Specify 'true' if you would like to configure Grid Naming Service(GNS), else
+# specify 'false'
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.configureGNS=false
+
+#-------------------------------------------------------------------------------
+# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS.
+# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP
+# , else specify 'false'
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.autoConfigureClusterNodeVIP=false
+
+#-------------------------------------------------------------------------------
+# Applicable only if you choose to configure GNS.
+# Specify the type of GNS configuration for cluster
+# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
+
+#-------------------------------------------------------------------------------
+# Applicable only if SHARED_GNS is being configured for cluster
+# Specify the path to the GNS client data file
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.gnsClientDataFile=
+
+#-------------------------------------------------------------------------------
+# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to
+# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
+# Specify the GNS subdomain and an unused virtual hostname for GNS service
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.gnsSubDomain=
+oracle.install.crs.config.gpnp.gnsVIPAddress=
+
+#-------------------------------------------------------------------------------
+# Specify the list of sites - only if configuring an Extended Cluster
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.sites=
+
+#-------------------------------------------------------------------------------
+# Specify the list of nodes that have to be configured to be part of the cluster.
+#
+# The list should be a comma-separated list of tuples. Each tuple should be a
+# colon-separated string that contains
+# - 1 field if you have chosen CRS_SWONLY as installation option, or
+# - 2 fields if configuring a Flex Cluster
+# - 2 fields if adding more nodes to the configured cluster, or
+# - 3 fields if configuring an Extended Cluster
+#
+# The fields should be ordered as follows:
+# 1. The first field should be the public node name.
+# 2. The second field should be the virtual host name
+# (Should be specified as AUTO if you have chosen 'auto configure for VIP'
+# i.e. autoConfigureClusterNodeVIP=true)
+# 3. The third field indicates the site designation for the node. To be specified only if configuring an Extended Cluster.
+# Only the 1st field is applicable if you have chosen CRS_SWONLY as installation option + +# +# Examples +# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2 +# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip,node2:node2-vip +# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip,node2:node2-vip +# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:site1,node2:node2-vip:site2 +# You can specify a range of nodes in the tuple using colon separated fields of format +# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterNodes=###HOSTNAME### + +#------------------------------------------------------------------------------- +# The value should be a comma separated strings where each string is as shown below +# InterfaceName:SubnetAddress:InterfaceType +# where InterfaceType can be either "1", "2", "3", "4", or "5" +# InterfaceType stand for the following values +# - 1 : PUBLIC +# - 2 : PRIVATE +# - 3 : DO NOT USE +# - 4 : ASM +# - 5 : ASM & PRIVATE +# +# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3 +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.networkInterfaceList= + +################################################################################ +# # +# SECTION E - STORAGE # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting +# Disks files. Only applicable for Standalone cluster. +# - FLEX_ASM_STORAGE +# - CLIENT_ASM_STORAGE +# - FILE_SYSTEM_STORAGE +# - EXASCALE_STORAGE +# +# Option FILE_SYSTEM_STORAGE is only for STANDALONE cluster configuration. +#------------------------------------------------------------------------------- +oracle.install.crs.config.storageOption= +#------------------------------------------------------------------------------- +# Specify the vault name if EXASCALE_STORAGE is selected as storage option. +# Example: +# oracle.install.crs.exascale.vault.name=myvault +#------------------------------------------------------------------------------- +oracle.install.crs.exascale.vault.name= +#------------------------------------------------------------------------------- +# These properties are applicable only if FILE_SYSTEM_STORAGE is chosen for +# storing OCR and voting disk +# Specify the location(s) for OCR and voting disks +# Three(3) or one(1) location(s) should be specified for OCR and voting disk, +# separated by commas. +# Example: +# For Unix based Operating System: +# oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=/oradbocfs/storage/vdsk1,/oradbocfs/storage/vdsk2,/oradbocfs/storage/vdsk3 +# oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=/oradbocfs/storage/ocr1,/oradbocfs/storage/ocr2,/oradbocfs/storage/ocr3 +# For Windows based Operating System OCR/VDSK on shared storage is not supported. 
+#------------------------------------------------------------------------------- +oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations= +oracle.install.crs.config.sharedFileSystemStorage.ocrLocations= + +#------------------------------------------------------------------------------- +# Applicable only if configuring CLIENT_ASM_STORAGE for OCR/Voting Disk storage +# Specify the path to Client ASM Data file +#------------------------------------------------------------------------------- +oracle.install.asm.ClientDataFile= +################################################################################ +# # +# SECTION F - IPMI # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to configure Intelligent Power Management interface +# (IPMI), else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.useIPMI=false + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure IPMI +# i.e. oracle.install.crs.config.useIPMI=true +# Specify the location of the ipmiutil binary +# Specify the username and password for using IPMI service +#------------------------------------------------------------------------------- +oracle.install.crs.config.ipmi.bmcBinpath= +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= + +################################################################################ +# # +# SECTION G - ASM # +# # +################################################################################ + + +#------------------------------------------------------------------------------- +# Password for SYS user of Oracle ASM +#------------------------------------------------------------------------------- +oracle.install.asm.SYSASMPassword= + +#------------------------------------------------------------------------------- +# The ASM DiskGroup +# +# Example: oracle.install.asm.diskGroup.name=data +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.name= + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX +# - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.diskGroup.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.redundancy= + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. +# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.diskGroup.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.AUSize=1 + +#------------------------------------------------------------------------------- +# Failure Groups for the disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. 
+# Else just specify as list of failure group names
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.FailureGroups=
+
+#-------------------------------------------------------------------------------
+# List of disks and their failure groups to create an ASM DiskGroup
+# (Use this if each of the disks has an associated failure group)
+# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL
+# Example:
+# For Unix based Operating System:
+# oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
+# For Windows based Operating System:
+# oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.disksWithFailureGroupNames=
+
+#-------------------------------------------------------------------------------
+# List of disks to create an ASM DiskGroup
+# (Use this variable only if failure groups configuration is not required)
+# Example:
+# For Unix based Operating System:
+# oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2
+# For Windows based Operating System:
+# oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.disks=
+
+#-------------------------------------------------------------------------------
+# List of failure groups to be marked as QUORUM.
+# Quorum failure groups contain only voting disk data, no user data is stored
+# Example:
+# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.quorumFailureGroupNames=
+#-------------------------------------------------------------------------------
+# The disk discovery string to be used to discover the disks used to create an ASM DiskGroup
+#
+# Example:
+# For Unix based Operating System:
+# oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/*
+# For Windows based Operating System:
+# oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK*
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.diskDiscoveryString=
+
+#-------------------------------------------------------------------------------
+# Password for ASMSNMP account
+# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances
+#-------------------------------------------------------------------------------
+oracle.install.asm.monitorPassword=
+
+#-------------------------------------------------------------------------------
+# Configure AFD - ASM Filter Driver
+# Applicable only for FLEX_ASM_STORAGE option
+# Specify 'true' if you want to configure AFD, else specify 'false'
+#-------------------------------------------------------------------------------
+oracle.install.asm.configureAFD=false
+#-------------------------------------------------------------------------------
+# Configure RHPS - Rapid Home Provisioning Service
+# Applicable only for DOMAIN cluster configuration
+# Specify 'true' if you want to configure RHP service, else specify 'false'
+#-------------------------------------------------------------------------------
+oracle.install.crs.configureRHPS=false
+
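+#-------------------------------------------------------------------------------
+# Illustrative sketch only (not part of the Oracle-supplied template): one
+# possible way the Section G disk group parameters above could be filled in for
+# an NFS-backed test setup such as the storage exported by the
+# OracleRACStorageServer image. The disk group name, AU size, and /oradata disk
+# paths below are assumptions; replace them with values that match your
+# environment.
+#
+# oracle.install.asm.diskGroup.name=DATA
+# oracle.install.asm.diskGroup.redundancy=EXTERNAL
+# oracle.install.asm.diskGroup.AUSize=4
+# oracle.install.asm.diskGroup.disks=/oradata/asm_disk01.img,/oradata/asm_disk02.img
+# oracle.install.asm.diskGroup.diskDiscoveryString=/oradata/asm*
+#-------------------------------------------------------------------------------
+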
+################################################################################ +# # +# SECTION H - UPGRADE # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify whether to ignore down nodes during upgrade operation. +# Value should be 'true' to ignore down nodes otherwise specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.ignoreDownNodes= + +################################################################################ +# # +# SECTION I - GIMR # +# # +################################################################################ + +#------------------------------------------------------------------------------ +# Specify 'true' if you would like to configure Grid Infrastructure Management +# Repository (GIMR), else specify 'false'. Applicable only if CRS_CONFIG is +# chosen as install option and STANDALONE is chosen as cluster configuration. +# If you want to use or configure +# Local GIMR : oracle.install.crs.configureGIMR=true and oracle.install.crs.configureRemoteGIMR=false +# Remote GIMR : oracle.install.crs.configureGIMR=true, oracle.install.crs.configureRemoteGIMR=true +# and oracle.install.crs.RemoteGIMRCredFile= path of the GIMR cred file +# No GIMR : oracle.install.crs.configureGIMR=false +#------------------------------------------------------------------------------ +oracle.install.crs.configureGIMR= +oracle.install.crs.configureRemoteGIMR= +oracle.install.crs.RemoteGIMRCredFile= + +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup to store GIMR data. +# Specify 'true' if you would like to separate GIMR data with clusterware data, +# else specify 'false' +# Value should be 'true' for DOMAIN cluster configurations +# Value can be true/false for STANDALONE cluster configurations. +#------------------------------------------------------------------------------ +oracle.install.asm.configureGIMRDataDG= + +#------------------------------------------------------------------------------- +# GIMR Storage data ASM DiskGroup +# Applicable only when +# oracle.install.asm.configureGIMRDataDG=true +# Example: oracle.install.asm.GIMRDG.name=MGMT +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.name= + +#------------------------------------------------------------------------------- +# Redundancy level to be used by ASM. +# It can be one of the following +# - NORMAL +# - HIGH +# - EXTERNAL +# - FLEX +# - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED) +# Example: oracle.install.asm.gimrDG.redundancy=NORMAL +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.redundancy= + +#------------------------------------------------------------------------------- +# Allocation unit size to be used by ASM. 
+# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.gimrDG.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.AUSize=1 + +#------------------------------------------------------------------------------- +# Failure Groups for the GIMR storage data ASM disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. +# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create GIMR data ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create GIMR data ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disks= + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.quorumFailureGroupNames= +################################################################################ +# # +# MANAGEMENT OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing Oracle Grid Infrastructure +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +# 2. NONE -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +#------------------------------------------------------------------------------- +oracle.install.config.managementOption=NONE + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. 
+# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsPort=0 + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminPassword= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.executeRootScript=false + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. +# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.configMethod= +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. +# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# Applicable only when SUDO configuration method was chosen. +# Note:For Grid Infrastructure for Standalone server installations,the sudo user name must be the username of the user performing the installation. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoUserName= +#-------------------------------------------------------------------------------------- +# Specify the nodes batch map. +# +# This should be a comma separated list of node:batch pairs. +# During upgrade, you can sequence the automatic execution of root scripts +# by pooling the nodes into batches. +# A maximum of three batches can be specified. 
+# Installer will execute the root scripts on all the nodes in one batch before +# proceeding to next batch. +# Root script execution on the local node must be in Batch 1. +# Examples: +# 1. oracle.install.crs.config.batchinfo=Node1:1,Node2:2,Node3:2,Node4:3 +# 2. oracle.install.crs.config.batchinfo=Node1:1,Node2:2,Node3:2,Node4:2 +# 3. oracle.install.crs.config.batchinfo=Node1:1,Node2:1,Node3:2,Node4:3 +# +# Applicable only for UPGRADE install option. +#-------------------------------------------------------------------------------------- +oracle.install.crs.config.batchinfo= +################################################################################# +# # +# DELETE NODE OPTIONS # +# # +################################################################################# + +#-------------------------------------------------------------------------------- +# Specify the node names to delete nodes from cluster. +# Delete node will be performed only for the remote nodes from the cluster. +#-------------------------------------------------------------------------------- +oracle.install.crs.deleteNode.nodes= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/gridsetup_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/gridsetup_21c.rsp new file mode 100644 index 0000000000..d982d76f52 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/gridsetup_21c.rsp @@ -0,0 +1,67 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 +INVENTORY_LOCATION=###INVENTORY### +oracle.install.option=CRS_CONFIG +ORACLE_BASE=###GRID_BASE### +oracle.install.asm.OSDBA=asmdba +oracle.install.asm.OSOPER= +oracle.install.asm.OSASM=asmadmin +oracle.install.crs.config.scanType=###SCAN_TYPE### +oracle.install.crs.config.SCANClientDataFile=###SHARED_SCAN_FILE### +oracle.install.crs.config.gpnp.scanName=###SCAN_NAME### +oracle.install.crs.config.gpnp.scanPort=###SCAN_PORT### +oracle.install.crs.config.ClusterConfiguration=###CLUSTER_TYPE### +oracle.install.crs.config.configureAsExtendedCluster=false +oracle.install.crs.config.clusterName=###CLUSTER_NAME### +oracle.install.crs.config.gpnp.configureGNS=###CONFIGURE_GNS### +oracle.install.crs.config.autoConfigureClusterNodeVIP=###DHCP_CONF### +oracle.install.crs.config.gpnp.gnsOption=###GNS_OPTIONS### +oracle.install.crs.config.gpnp.gnsClientDataFile= +oracle.install.crs.config.gpnp.gnsSubDomain=###GNS_SUBDOMAIN### +oracle.install.crs.config.gpnp.gnsVIPAddress=###GNSVIP_HOSTNAME### +oracle.install.crs.config.sites= +oracle.install.crs.config.clusterNodes=###CRS_CONFIG_NODES### +oracle.install.crs.config.networkInterfaceList=###NETWORK_STRING### +oracle.install.crs.config.storageOption=###STORAGE_OPTIONS_FOR_MEMBERDB### +oracle.install.crs.exascale.vault.name= +oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations= +oracle.install.crs.config.sharedFileSystemStorage.ocrLocations= +oracle.install.asm.ClientDataFile= +oracle.install.crs.config.useIPMI= +oracle.install.crs.config.ipmi.bmcBinpath= +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +oracle.install.asm.SYSASMPassword=###PASSWORD### +oracle.install.asm.diskGroup.name=###DB_ASM_DISKGROUP### +oracle.install.asm.diskGroup.redundancy=###ASM_REDUNDANCY### +oracle.install.asm.diskGroup.AUSize=4 +oracle.install.asm.diskGroup.FailureGroups=###ASM_DG_FAILURE_GROUP### 
+oracle.install.asm.diskGroup.disksWithFailureGroupNames=###ASM_DISKGROUP_FG_DISKS### +oracle.install.asm.diskGroup.disks=###ASM_DISKGROUP_DISKS### +oracle.install.asm.diskGroup.quorumFailureGroupNames= +oracle.install.asm.diskGroup.diskDiscoveryString=###ASM_DISCOVERY_STRING### +oracle.install.asm.monitorPassword=###PASSWORD### +oracle.install.asm.configureAFD=false +oracle.install.crs.configureRHPS=false +oracle.install.crs.config.ignoreDownNodes= +oracle.install.crs.configureGIMR= +oracle.install.crs.configureRemoteGIMR= +oracle.install.crs.RemoteGIMRCredFile= +oracle.install.asm.configureGIMRDataDG= +oracle.install.asm.gimrDG.name=###GIMR_DG_NAME### +oracle.install.asm.gimrDG.redundancy=###GIMR_DG_REDUNDANCY### +oracle.install.asm.gimrDG.AUSize=1 +oracle.install.asm.gimrDG.FailureGroups=###GIMR_DG_FAILURE_GROUP### +oracle.install.asm.gimrDG.disksWithFailureGroupNames=###GIMR_DISKGROUP_FG_DISKS### +oracle.install.asm.gimrDG.disks=###GIMR_DISKGROUP_DISKS### +oracle.install.asm.gimrDG.quorumFailureGroupNames= +oracle.install.config.managementOption=NONE +oracle.install.config.omsHost= +oracle.install.config.omsPort=0 +oracle.install.config.emAdminUser= +oracle.install.config.emAdminPassword= +oracle.install.crs.rootconfig.executeRootScript=false +oracle.install.crs.rootconfig.configMethod=ROOT +oracle.install.crs.rootconfig.sudoPath= +oracle.install.crs.rootconfig.sudoUserName= +oracle.install.crs.config.batchinfo= +oracle.install.crs.deleteNode.nodes= diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/gridsetup_21cv1.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/gridsetup_21cv1.rsp new file mode 100644 index 0000000000..a2f14c610f --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/gridsetup_21cv1.rsp @@ -0,0 +1,653 @@ +############################################################################### +## Copyright(c) Oracle Corporation 1998,2019. All rights reserved. ## +## ## +## Specify values for the variables listed below to customize ## +## your installation. ## +## ## +## Each variable is associated with a comment. The comment ## +## can help to populate the variables with the appropriate ## +## values. ## +## ## +## IMPORTANT NOTE: This file contains plain text passwords and ## +## should be secured to have read permission only by oracle user ## +## or db administrator who owns this installation. 
## +## ## +############################################################################### + +############################################################################### +## ## +## Instructions to fill this response file ## +## To register and configure 'Grid Infrastructure for Cluster' ## +## - Fill out sections A,B,C,D,E,F and G ## +## - Fill out section G if OCR and voting disk should be placed on ASM ## +## ## +## To register and configure 'Grid Infrastructure for Standalone server' ## +## - Fill out sections A,B and G ## +## ## +## To register software for 'Grid Infrastructure' ## +## - Fill out sections A,B and D ## +## - Provide the cluster nodes in section D when choosing CRS_SWONLY as ## +## installation option in section A ## +## ## +## To upgrade clusterware and/or Automatic storage management of earlier ## +## releases ## +## - Fill out sections A,B,C,D and H ## +## ## +## To add more nodes to the cluster ## +## - Fill out sections A and D ## +## - Provide the cluster nodes in section D when choosing CRS_ADDNODE as ## +## installation option in section A ## +## ## +############################################################################### + +#------------------------------------------------------------------------------ +# Do not change the following system generated value. +#------------------------------------------------------------------------------ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 + +############################################################################### +# # +# SECTION A - BASIC # +# # +############################################################################### + + +#------------------------------------------------------------------------------- +# Specify the location which holds the inventory files. +# This is an optional parameter if installing on +# Windows based Operating System. +#------------------------------------------------------------------------------- +INVENTORY_LOCATION= + +#------------------------------------------------------------------------------- +# Specify the installation option. +# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY +# - CRS_CONFIG : To register home and configure Grid Infrastructure for cluster +# - HA_CONFIG : To register home and configure Grid Infrastructure for stand alone server +# - UPGRADE : To register home and upgrade clusterware software of earlier release +# - CRS_SWONLY : To register Grid Infrastructure Software home (can be configured for cluster +# or stand alone server later) +# - HA_SWONLY : To register Grid Infrastructure Software home (can be configured for stand +# alone server later. This is only supported on Windows.) +# - CRS_ADDNODE : To add more nodes to the cluster +# - CRS_DELETE_NODE : To delete nodes to the cluster +#------------------------------------------------------------------------------- +oracle.install.option= + +#------------------------------------------------------------------------------- +# Specify the complete path of the Oracle Base. +#------------------------------------------------------------------------------- +ORACLE_BASE= + +################################################################################ +# # +# SECTION B - GROUPS # +# # +# The following three groups need to be assigned for all GI installations. # +# OSDBA and OSOPER can be the same or different. OSASM must be different # +# than the other two. 
# +# The value to be specified for OSDBA, OSOPER and OSASM group is only for # +# Unix based Operating System. # +# These groups are not required for upgrades, as they will be determined # +# from the Oracle home to upgrade. # +# # +################################################################################ +#------------------------------------------------------------------------------- +# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges. +#------------------------------------------------------------------------------- +oracle.install.asm.OSDBA= + +#------------------------------------------------------------------------------- +# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges. +# The value to be specified for OSOPER group is optional. +# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE. +#------------------------------------------------------------------------------- +oracle.install.asm.OSOPER= + +#------------------------------------------------------------------------------- +# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This +# must be different than the previous two. +#------------------------------------------------------------------------------- +oracle.install.asm.OSASM= + +################################################################################ +# # +# SECTION C - SCAN # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the type of SCAN configuration for the cluster +# Allowed values : LOCAL_SCAN and SHARED_SCAN +#------------------------------------------------------------------------------- +oracle.install.crs.config.scanType= + +#------------------------------------------------------------------------------- +# Applicable only if SHARED_SCAN is being configured for cluster +# Specify the path to the SCAN client data file +#------------------------------------------------------------------------------- +oracle.install.crs.config.SCANClientDataFile= + +#------------------------------------------------------------------------------- +# Specify a name for SCAN +# Applicable if LOCAL_SCAN is being configured for the cluster +# If you choose to configure the cluster with GNS with Auto assigned Node VIPs(DHCP),then the scanName should be specified in the format of 'SCAN name.Cluster name.GNS sub-domain' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.scanName= + +#------------------------------------------------------------------------------- +# Specify a unused port number for SCAN service +#------------------------------------------------------------------------------- + +oracle.install.crs.config.gpnp.scanPort= + +################################################################################ +# # +# SECTION D - CLUSTER & GNS # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify the required cluster configuration +# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP +#------------------------------------------------------------------------------- +oracle.install.crs.config.ClusterConfiguration= + +#------------------------------------------------------------------------------- +# Specify 'true' if you would like to 
configure the cluster as Extended, else +# specify 'false' +# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.configureAsExtendedCluster= + + +#------------------------------------------------------------------------------- +# Specify the Member Cluster Manifest file +# +# Applicable only for MEMBERDB and MEMBERAPP cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.memberClusterManifestFile= + +#------------------------------------------------------------------------------- +# Specify a name for the Cluster you are creating. +# +# The maximum length allowed for clustername is 63 characters. The name can be +# any combination of lower and uppercase alphabets (A - Z), (0 - 9) and hyphens (-). +# +# Applicable only for STANDALONE and DOMAIN cluster configuration +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterName= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration. +# Specify 'true' if you would like to configure Grid Naming Service(GNS), else +# specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.gpnp.configureGNS= + +#------------------------------------------------------------------------------- +# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS. +# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP +# , else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.autoConfigureClusterNodeVIP= + +#------------------------------------------------------------------------------- +# Applicable only if you choose to configure GNS. +# Specify the type of GNS configuration for cluster +# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS +# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration. 
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.gnsOption=
+
+#-------------------------------------------------------------------------------
+# Applicable only if SHARED_GNS is being configured for cluster
+# Specify the path to the GNS client data file
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.gnsClientDataFile=
+
+#-------------------------------------------------------------------------------
+# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to
+# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
+# Specify the GNS subdomain and an unused virtual hostname for GNS service
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.gpnp.gnsSubDomain=
+oracle.install.crs.config.gpnp.gnsVIPAddress=
+
+#-------------------------------------------------------------------------------
+# Specify the list of sites - only if configuring an Extended Cluster
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.sites=
+
+#-------------------------------------------------------------------------------
+# Specify the list of nodes that have to be configured to be part of the cluster.
+#
+# The list should be a comma-separated list of tuples. Each tuple should be a
+# colon-separated string that contains
+# - 1 field if you have chosen CRS_SWONLY as installation option, or
+# - 1 field if configuring an Application Cluster, or
+# - 3 fields if configuring a Flex Cluster
+# - 3 fields if adding more nodes to the configured cluster, or
+# - 4 fields if configuring an Extended Cluster
+#
+# The fields should be ordered as follows:
+# 1. The first field should be the public node name.
+# 2. The second field should be the virtual host name
+# (Should be specified as AUTO if you have chosen 'auto configure for VIP'
+# i.e. autoConfigureClusterNodeVIP=true)
+# 3. The third field indicates the site designation for the node. To be specified only if configuring an Extended Cluster.
+# Only the 1st field is applicable if you have chosen CRS_SWONLY as installation option +# Only the 1st field is applicable if configuring an Application Cluster +# +# Examples +# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2 +# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip,node2:node2-vip +# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2 +# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip,node2:node2-vip +# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:site1,node2:node2-vip:site2 +# You can specify a range of nodes in the tuple using colon separated fields of format +# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.clusterNodes= + +#------------------------------------------------------------------------------- +# The value should be a comma separated strings where each string is as shown below +# InterfaceName:SubnetAddress:InterfaceType +# where InterfaceType can be either "1", "2", "3", "4", or "5" +# InterfaceType stand for the following values +# - 1 : PUBLIC +# - 2 : PRIVATE +# - 3 : DO NOT USE +# - 4 : ASM +# - 5 : ASM & PRIVATE +# +# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3 +# +#------------------------------------------------------------------------------- +oracle.install.crs.config.networkInterfaceList= + +#------------------------------------------------------------------------------ +# Specify 'true' if you would like to configure Grid Infrastructure Management +# Repository (GIMR), else specify 'false'. +# This option is only applicable when CRS_CONFIG is chosen as install option, +# and STANDALONE is chosen as cluster configuration. +#------------------------------------------------------------------------------ +oracle.install.crs.configureGIMR= + +#------------------------------------------------------------------------------ +# Create a separate ASM DiskGroup to store GIMR data. +# Specify 'true' if you would like to separate GIMR data with clusterware data, +# else specify 'false' +# Value should be 'true' for DOMAIN cluster configurations +# Value can be true/false for STANDALONE cluster configurations. +#------------------------------------------------------------------------------ +oracle.install.asm.configureGIMRDataDG= + +################################################################################ +# # +# SECTION E - STORAGE # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting +# Disks files. Only applicable for Standalone and MemberDB cluster. +# - FLEX_ASM_STORAGE +# - CLIENT_ASM_STORAGE +# - FILE_SYSTEM_STORAGE +# +# Option FILE_SYSTEM_STORAGE is only for STANDALONE cluster configuration. 
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.storageOption=
+
+#-------------------------------------------------------------------------------
+# These properties are applicable only if FILE_SYSTEM_STORAGE is chosen for
+# storing OCR and voting disk
+# Specify the location(s) for OCR and voting disks
+# Three(3) or one(1) location(s) should be specified for OCR and voting disk,
+# separated by commas.
+# Example:
+# For Unix based Operating System:
+# oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=/oradbocfs/storage/vdsk1,/oradbocfs/storage/vdsk2,/oradbocfs/storage/vdsk3
+# oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=/oradbocfs/storage/ocr1,/oradbocfs/storage/ocr2,/oradbocfs/storage/ocr3
+# For Windows based Operating System OCR/VDSK on shared storage is not supported.
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=
+oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
+################################################################################
+#                                                                              #
+#                               SECTION F - IPMI                               #
+#                                                                              #
+################################################################################
+
+#-------------------------------------------------------------------------------
+# Specify 'true' if you would like to configure Intelligent Power Management interface
+# (IPMI), else specify 'false'
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.useIPMI=
+
+#-------------------------------------------------------------------------------
+# Applicable only if you choose to configure IPMI
+# i.e. oracle.install.crs.config.useIPMI=true
+# Specify the username and password for using IPMI service
+#-------------------------------------------------------------------------------
+oracle.install.crs.config.ipmi.bmcUsername=
+oracle.install.crs.config.ipmi.bmcPassword=
+################################################################################
+#                                                                              #
+#                                SECTION G - ASM                               #
+#                                                                              #
+################################################################################
+
+
+#-------------------------------------------------------------------------------
+# Password for SYS user of Oracle ASM
+#-------------------------------------------------------------------------------
+oracle.install.asm.SYSASMPassword=
+
+#-------------------------------------------------------------------------------
+# The ASM DiskGroup
+#
+# Example: oracle.install.asm.diskGroup.name=data
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.name=
+
+#-------------------------------------------------------------------------------
+# Redundancy level to be used by ASM.
+# It can be one of the following
+# - NORMAL
+# - HIGH
+# - EXTERNAL
+# - FLEX
+# - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
+# Example: oracle.install.asm.diskGroup.redundancy=NORMAL
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.redundancy=
+
+#-------------------------------------------------------------------------------
+# Allocation unit size to be used by ASM.
+# It can be one of the following values +# - 1 +# - 2 +# - 4 +# - 8 +# - 16 +# Example: oracle.install.asm.diskGroup.AUSize=4 +# size unit is MB +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.AUSize= + +#------------------------------------------------------------------------------- +# Failure Groups for the disk group +# If configuring for Extended cluster specify as list of "failure group name:site" +# tuples. +# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create a ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create a ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +oracle.install.asm.diskGroup.disks= + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. 
+# Quorum failure groups contain only voting disk data, no user data is stored
+# Example:
+# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.quorumFailureGroupNames=
+#-------------------------------------------------------------------------------
+# The disk discovery string to be used to discover the disks used to create an ASM DiskGroup
+#
+# Example:
+# For Unix based Operating System:
+# oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/*
+# For Windows based Operating System:
+# oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK*
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.diskGroup.diskDiscoveryString=
+
+#-------------------------------------------------------------------------------
+# Password for ASMSNMP account
+# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances
+#-------------------------------------------------------------------------------
+oracle.install.asm.monitorPassword=
+
+#-------------------------------------------------------------------------------
+# GIMR Storage data ASM DiskGroup
+# Applicable only when
+# oracle.install.asm.configureGIMRDataDG=true
+# Example: oracle.install.asm.GIMRDG.name=MGMT
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.gimrDG.name=
+
+#-------------------------------------------------------------------------------
+# Redundancy level to be used by ASM.
+# It can be one of the following
+# - NORMAL
+# - HIGH
+# - EXTERNAL
+# - FLEX
+# - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
+# Example: oracle.install.asm.gimrDG.redundancy=NORMAL
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.gimrDG.redundancy=
+
+#-------------------------------------------------------------------------------
+# Allocation unit size to be used by ASM.
+# It can be one of the following values
+# - 1
+# - 2
+# - 4
+# - 8
+# - 16
+# Example: oracle.install.asm.gimrDG.AUSize=4
+# size unit is MB
+#
+#-------------------------------------------------------------------------------
+oracle.install.asm.gimrDG.AUSize=
+
+#-------------------------------------------------------------------------------
+# Failure Groups for the GIMR storage data ASM disk group
+# If configuring for Extended cluster specify as list of "failure group name:site"
+# tuples.
+# Else just specify as list of failure group names +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.FailureGroups= + +#------------------------------------------------------------------------------- +# List of disks and their failure groups to create GIMR data ASM DiskGroup +# (Use this if each of the disks have an associated failure group) +# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disksWithFailureGroupNames= + +#------------------------------------------------------------------------------- +# List of disks to create GIMR data ASM DiskGroup +# (Use this variable only if failure groups configuration is not required) +# Example: +# For Unix based Operating System: +# oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2 +# For Windows based Operating System: +# oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1 +# +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.disks= + +#------------------------------------------------------------------------------- +# List of failure groups to be marked as QUORUM. +# Quorum failure groups contain only voting disk data, no user data is stored +# Example: +# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2 +#------------------------------------------------------------------------------- +oracle.install.asm.gimrDG.quorumFailureGroupNames= + +#------------------------------------------------------------------------------- +# Configure AFD - ASM Filter Driver +# Applicable only for FLEX_ASM_STORAGE option +# Specify 'true' if you want to configure AFD, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.asm.configureAFD= +#------------------------------------------------------------------------------- +# Configure RHPS - Rapid Home Provisioning Service +# Applicable only for DOMAIN cluster configuration +# Specify 'true' if you want to configure RHP service, else specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.configureRHPS= + +################################################################################ +# # +# SECTION H - UPGRADE # +# # +################################################################################ +#------------------------------------------------------------------------------- +# Specify whether to ignore down nodes during upgrade operation. 
+# Value should be 'true' to ignore down nodes otherwise specify 'false' +#------------------------------------------------------------------------------- +oracle.install.crs.config.ignoreDownNodes= +################################################################################ +# # +# MANAGEMENT OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the management option to use for managing Oracle Grid Infrastructure +# Options are: +# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +# 2. NONE -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control. +#------------------------------------------------------------------------------- +oracle.install.config.managementOption= + +#------------------------------------------------------------------------------- +# Specify the OMS host to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsHost= + +#------------------------------------------------------------------------------- +# Specify the OMS port to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.omsPort= + +#------------------------------------------------------------------------------- +# Specify the EM Admin user name to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminUser= + +#------------------------------------------------------------------------------- +# Specify the EM Admin password to use to connect to Cloud Control. +# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL +#------------------------------------------------------------------------------- +oracle.install.config.emAdminPassword= +################################################################################ +# # +# Root script execution configuration # +# # +################################################################################ + +#------------------------------------------------------------------------------------------------------- +# Specify the root script execution mode. +# +# - true : To execute the root script automatically by using the appropriate configuration methods. +# - false : To execute the root script manually. +# +# If this option is selected, password should be specified on the console. +#------------------------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.executeRootScript= + +#-------------------------------------------------------------------------------------- +# Specify the configuration method to be used for automatic root script execution. +# +# Following are the possible choices: +# - ROOT +# - SUDO +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.configMethod= +#-------------------------------------------------------------------------------------- +# Specify the absolute path of the sudo program. 
+# +# Applicable only when SUDO configuration method was chosen. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoPath= + +#-------------------------------------------------------------------------------------- +# Specify the name of the user who is in the sudoers list. +# Applicable only when SUDO configuration method was chosen. +# Note:For Grid Infrastructure for Standalone server installations,the sudo user name must be the username of the user performing the installation. +#-------------------------------------------------------------------------------------- +oracle.install.crs.rootconfig.sudoUserName= +#-------------------------------------------------------------------------------------- +# Specify the nodes batch map. +# +# This should be a comma separated list of node:batch pairs. +# During upgrade, you can sequence the automatic execution of root scripts +# by pooling the nodes into batches. +# A maximum of three batches can be specified. +# Installer will execute the root scripts on all the nodes in one batch before +# proceeding to next batch. +# Root script execution on the local node must be in Batch 1. +# +# Examples: +# 1. oracle.install.crs.config.batchinfo=Node1:1,Node2:2,Node3:2,Node4:3 +# 2. oracle.install.crs.config.batchinfo=Node1:1,Node2:2,Node3:2,Node4:2 +# 3. oracle.install.crs.config.batchinfo=Node1:1,Node2:1,Node3:2,Node4:3 +# +# Applicable only for UPGRADE install option. +#-------------------------------------------------------------------------------------- +oracle.install.crs.config.batchinfo= +################################################################################ +# # +# APPLICATION CLUSTER OPTIONS # +# # +################################################################################ + +#------------------------------------------------------------------------------- +# Specify the Virtual hostname to configure virtual access for your Application +# The value to be specified for Virtual hostname is optional. +#------------------------------------------------------------------------------- +oracle.install.crs.app.applicationAddress= +################################################################################# +# # +# DELETE NODE OPTIONS # +# # +################################################################################# + +#-------------------------------------------------------------------------------- +# Specify the node names to delete nodes from cluster. +# Delete node will be performed only for the remote nodes from the cluster. 
+#--------------------------------------------------------------------------------
+oracle.install.crs.deleteNode.nodes=
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/initsh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/initsh
new file mode 100755
index 0000000000..27f753d46b
--- /dev/null
+++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/initsh
@@ -0,0 +1,10 @@
+#!/bin/bash
+# Copyright (c) 2022, Oracle and/or its affiliates
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
+
+echo "Creating env variables file /etc/rac_env_vars"
+/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /etc/rac_env_vars"
+/bin/bash -c "sed -i -e 's/^/export /' /etc/rac_env_vars"
+
+echo "Starting Systemd"
+exec /lib/systemd/systemd
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/installDBBinaries.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/installDBBinaries.sh
new file mode 100755
index 0000000000..cf92dd559c
--- /dev/null
+++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/installDBBinaries.sh
@@ -0,0 +1,65 @@
+#!/bin/bash
+# LICENSE UPL 1.0
+#
+# Copyright (c) 2018,2021 Oracle and/or its affiliates.
+#
+# Since: December, 2018
+# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com
+# Description: Installing Oracle DB software
+#
+# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
+#
+
+EDITION=$1
+
+# Check whether edition has been passed on
+if [ "$EDITION" == "" ]; then
+  echo "ERROR: No edition has been passed on!"
+  echo "Please specify the correct edition!"
+  exit 1;
+fi;
+
+# Check whether correct edition has been passed on
+# shellcheck disable=SC2166
+if [ "$EDITION" != "EE" -a "$EDITION" != "SE2" ]; then
+  echo "ERROR: Wrong edition has been passed on!"
+  echo "Edition $EDITION is not a valid edition!"
+  exit 1;
+fi;
+
+# Check whether DB_BASE is set
+if [ "$DB_BASE" == "" ]; then
+  echo "ERROR: DB_BASE has not been set!"
+  echo "You have to have the DB_BASE environment variable set to a valid value!"
+  exit 1;
+fi;
+
+# Check whether DB_HOME is set
+if [ "$DB_HOME" == "" ]; then
+  echo "ERROR: DB_HOME has not been set!"
+  echo "You have to have the DB_HOME environment variable set to a valid value!"
+  exit 1;
+fi;
+
+# Replace place holders
+# ---------------------
+sed -i -e "s|###ORACLE_EDITION###|$EDITION|g" "$INSTALL_SCRIPTS"/"$DB_INSTALL_RSP" && \
+sed -i -e "s|###DB_BASE###|$DB_BASE|g" "$INSTALL_SCRIPTS"/"$DB_INSTALL_RSP" && \
+sed -i -e "s|###DB_HOME###|$DB_HOME|g" "$INSTALL_SCRIPTS"/"$DB_INSTALL_RSP" && \
+sed -i -e "s|###INVENTORY###|$INVENTORY|g" "$INSTALL_SCRIPTS"/"$DB_INSTALL_RSP"
+
+export ORACLE_HOME=${DB_HOME}
+export PATH=${ORACLE_HOME}/bin:/bin:/sbin:/usr/bin
+export LD_LIBRARY_PATH=${ORACLE_HOME}/lib:/lib:/usr/lib
+
+# Create the SSH directory for the DB user when it differs from the Grid user
+if [ "${DB_USER}" != "${GRID_USER}" ]; then
+mkdir -p /home/"${DB_USER}"/.ssh && \
+chmod 700 /home/"${DB_USER}"/.ssh
+fi
+
+
+# Install Oracle binaries
+# shellcheck disable=SC2015
+unzip -q "$INSTALL_SCRIPTS"/"$INSTALL_FILE_2" -d "$DB_HOME" && \
+"$DB_HOME"/runInstaller -silent -force -waitforcompletion -responsefile "$INSTALL_SCRIPTS"/"$DB_INSTALL_RSP" -ignorePrereqFailure || true
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/installGridBinaries.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/installGridBinaries.sh
new file mode 100755
index 0000000000..15616d5f82
--- /dev/null
+++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/installGridBinaries.sh
@@ -0,0 +1,59 @@
+#!/bin/bash
+# LICENSE UPL 1.0
+#
+# Copyright (c) 2018,2021 Oracle and/or its affiliates.
+#
+# Since: December, 2018
+# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com
+# Description: Install grid software inside the container.
+#
+# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
+#
+
+EDITION=$1
+# shellcheck disable=SC2034
+PATCH_NUMBER=$2
+
+# Check whether edition has been passed on
+if [ "$EDITION" == "" ]; then
+  echo "ERROR: No edition has been passed on!"
+  echo "Please specify the correct edition!"
+  exit 1;
+fi;
+
+# Check whether correct edition has been passed on
+if [ "$EDITION" != "EE" ]; then
+  echo "ERROR: Wrong edition has been passed on!"
+  echo "Edition $EDITION is not a valid edition!"
+  exit 1;
+fi;
+
+# Check whether GRID_BASE is set
+if [ "$GRID_BASE" == "" ]; then
+  echo "ERROR: GRID_BASE has not been set!"
+  echo "You have to have the GRID_BASE environment variable set to a valid value!"
+  exit 1;
+fi;
+
+# Check whether GRID_HOME is set
+if [ "$GRID_HOME" == "" ]; then
+  echo "ERROR: GRID_HOME has not been set!"
+  echo "You have to have the GRID_HOME environment variable set to a valid value!"
+ exit 1; +fi; + + +temp_var1=`hostname` + +# Replace place holders +# --------------------- +sed -i -e "s|###HOSTNAME###|$temp_var1|g" "$INSTALL_SCRIPTS"/"$GRID_SW_INSTALL_RSP" && \ +sed -i -e "s|###INSTALL_TYPE###|CRS_SWONLY|g" "$INSTALL_SCRIPTS"/"$GRID_SW_INSTALL_RSP" && \ +sed -i -e "s|###GRID_BASE###|$GRID_BASE|g" "$INSTALL_SCRIPTS"/"$GRID_SW_INSTALL_RSP" && \ +sed -i -e "s|###INVENTORY###|$INVENTORY|g" "$INSTALL_SCRIPTS"/"$GRID_SW_INSTALL_RSP" + +# Install Oracle binaries +mkdir -p /home/grid/.ssh && \ +chmod 700 /home/grid/.ssh && \ +unzip -q "$INSTALL_SCRIPTS"/"$INSTALL_FILE_1" -d "$GRID_HOME" && \ +"$GRID_HOME"/gridSetup.sh -silent -responseFile "$INSTALL_SCRIPTS"/"$GRID_SW_INSTALL_RSP" -ignorePrereqFailure || true diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/runOracle.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/runOracle.sh new file mode 100755 index 0000000000..112f50de0f --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/runOracle.sh @@ -0,0 +1,40 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2022 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Runs the Oracle RAC Database inside the container +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +if [ -f /etc/rac_env_vars ]; then +source /etc/rac_env_vars +fi + +################################### +# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # +############# MAIN ################ +# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # +################################### + +if [ -z ${BASE_DIR} ]; then + BASE_DIR=/opt/scripts/startup/scripts +else + BASE_DIR=$SCRIPT_DIR/scripts +fi + +if [ -z ${MAIN_SCRIPT} ]; then + SCRIPT_NAME="main.py" +fi + +if [ -z ${EXECUTOR} ]; then + EXECUTOR="python3" +fi +# shellcheck disable=SC2164 +cd $BASE_DIR +$EXECUTOR $SCRIPT_NAME + +# Tail on alert log and wait (otherwise container will exit) \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/setupDB.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/setupDB.sh new file mode 100755 index 0000000000..c26ce9f605 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/setupDB.sh @@ -0,0 +1,42 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: paramdeep.saini@oracle.com +# Description: Sets up the unix environment for DB installation. +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. 
+# + +# Create Directories +if [ "${SLIMMING}x" != 'truex' ]; then + mkdir -p "$DB_BASE" + mkdir -p "$DB_HOME" +fi + +usermod -g oinstall -G oinstall,dba,oper,backupdba,dgdba,kmdba,asmdba,asmoper,racdba,asmadmin "${DB_USER}" + +chmod 775 "$INSTALL_SCRIPTS" + + +if [ "${SLIMMING}x" != 'truex' ]; then + chown -R "${DB_USER}":oinstall "$DB_BASE" + chown -R "${DB_USER}":oinstall "$DB_HOME" + chown -R "${DB_USER}":oinstall "$INSTALL_SCRIPTS" + echo "export PATH=$DB_PATH" >> /home/"${DB_USER}"/.bashrc + echo "export LD_LIBRARY_PATH=$DB_LD_LIBRARY_PATH" >> /home/"${DB_USER}"/.bashrc + echo "export SCRIPT_DIR=$SCRIPT_DIR" >> /home/"${DB_USER}"/.bashrc + echo "export GRID_HOME=$GRID_HOME" >> /home/"${DB_USER}"/.bashrc + echo "export DB_BASE=$DB_BASE" >> /home/"${DB_USER}"/.bashrc + echo "export DB_HOME=$DB_HOME" >> /home/"${DB_USER}"/.bashrc +fi + +if [ "${SLIMMING}x" != 'truex' ]; then + if [ "${DB_USER}" == "${GRID_USER}" ]; then + sed -i '/PATH=/d' /home/"${DB_USER}"/.bashrc + echo "export PATH=$GRID_HOME/bin:$DB_PATH" >> /home/"${DB_USER}"/.bashrc + echo "export LD_LIBRARY_PATH=$GRID_HOME/lib:$DB_LD_LIBRARY_PATH" >> /home/"${DB_USER}"/.bashrc + fi +fi \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/setupGrid.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/setupGrid.sh new file mode 100755 index 0000000000..b64abed9ab --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/setupGrid.sh @@ -0,0 +1,43 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: paramdeep.saini@oracle.com +# Description: Sets up the unix environment for Grid installation. +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# +# shellcheck disable=SC2034 +EDITION=$1 + +# Create Directories +if [ "${SLIMMING}x" != 'truex' ] ; then + mkdir -p "$GRID_BASE" + mkdir -p "$GRID_HOME" +fi + +groupadd -g 54334 asmadmin +groupadd -g 54335 asmdba +groupadd -g 54336 asmoper +useradd -u 54332 -g oinstall -G oinstall,asmadmin,asmdba,asmoper,racdba,dba "${GRID_USER}" + +chmod 666 /etc/sudoers +echo "${DB_USER} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers +echo "${GRID_USER} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers +chmod 440 /etc/sudoers + +if [ "${SLIMMING}x" != 'truex' ] ; then + chown -R "${GRID_USER}":oinstall "$GRID_BASE" + chown -R "${GRID_USER}":oinstall "$GRID_HOME" + mkdir -p "$INVENTORY" + chown -R "${GRID_USER}":oinstall "$INVENTORY" + # shellcheck disable=SC2129 + echo "export PATH=$GRID_PATH" >> /home/"${GRID_USER}"/.bashrc + echo "export LD_LIBRARY_PATH=$GRID_LD_LIBRARY_PATH" >> /home/"${GRID_USER}"/.bashrc + echo "export SCRIPT_DIR=$SCRIPT_DIR" >> /home/"${GRID_USER}"/.bashrc + echo "export GRID_HOME=$GRID_HOME" >> /home/"${GRID_USER}"/.bashrc + echo "export GRID_BASE=$GRID_BASE" >> /home/"${GRID_USER}"/.bashrc + echo "export DB_HOME=$DB_HOME" >> /home/"${GRID_USER}"/.bashrc +fi \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/setupLinuxEnv.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/setupLinuxEnv.sh new file mode 100755 index 0000000000..27605f3914 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/setupLinuxEnv.sh @@ -0,0 +1,28 @@ +#!/bin/bash +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. 
+# +# Since: January, 2018 +# Author: paramdeep.saini@oracle.com +# Description: Sets up the unix environment for DB installation. +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +# Setup filesystem and oracle user +# Adjust file permissions, go to /opt/oracle as user 'oracle' to proceed with Oracle installation +# ------------------------------------------------------------ +## Use OCI yum repos on OCI instead of public yum +region=$(curl --noproxy '*' -sfm 3 -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | sed -nE 's/^ *"regionIdentifier": "([^"]+)".*/\1/p') +if [ -n "$region" ]; then + echo "Detected OCI Region: $region" + for proxy in $(printenv | grep -i _proxy | cut -d= -f1); do unset $proxy; done + echo "-$region" > /etc/yum/vars/ociregion +fi + +mkdir /asmdisks && \ +mkdir /responsefiles && \ +chmod ug+x /opt/scripts/startup/*.sh && \ +yum -y install systemd oracle-database-preinstall-21c vim passwd expect sudo passwd openssl openssh-server hostname python3 lsof rsync && \ +yum clean all diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/setupSSH.expect b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/setupSSH.expect new file mode 100644 index 0000000000..2e0537b190 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/21.3.0/setupSSH.expect @@ -0,0 +1,45 @@ +#!/usr/bin/expect -f +# LICENSE UPL 1.0 +# +# Copyright (c) 2018,2021 Oracle and/or its affiliates. +# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Setup SSH between nodes +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +set username [lindex $argv 0]; +set script_loc [lindex $argv 1]; +set cluster_nodes [lindex $argv 2]; +set ssh_pass [lindex $argv 3]; + +set timeout 120 + +# Procedure to setup ssh from server +proc sshproc { ssh_pass } { + expect { + # Send password at 'Password' prompt and tell expect to continue(i.e. exp_continue) + -re "\[P|p]assword:" { exp_send "$ssh_pass\r" + exp_continue } + # Tell expect stay in this 'expect' block and for each character that SCP prints while doing the copy + # reset the timeout counter back to 0. + -re . { exp_continue } + timeout { return 1 } + eof { return 0 } + } +} + +# Execute sshUserSetup.sh Script +set ssh_cmd "$script_loc/sshUserSetup.sh -user $username -hosts \"${cluster_nodes}\" -logfile /tmp/${username}_SetupSSH.log -advanced -exverify -noPromptPassphrase -confirm" + +eval spawn $ssh_cmd +set ssh_results [sshproc $ssh_pass] + +if { $ssh_results == 0 } { + exit 0 +} + +# Error attempting SSH, so exit with non-zero status +exit 1 diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/buildContainerImage.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/buildContainerImage.sh new file mode 100755 index 0000000000..046cc132ff --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/buildContainerImage.sh @@ -0,0 +1,176 @@ +#!/bin/bash +# +# Since: November, 2018 +# Author: paramdeep.saini@oracle.com +# Description: Build script for building RAC container image +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# +# Copyright (c) 2014,2021 Oracle and/or its affiliates. 
+#
+
+usage() {
+  cat << EOF
+
+Usage: buildContainerImage.sh -v [version] -t [image_name:tag] [-o] [-i]
+It builds a container image for Oracle Real Application Clusters (RAC)
+
+Parameters:
+   -v: version to build
+   -i: ignores the MD5 checksums
+   -t: user defined image name and tag (e.g., image_name:tag)
+   -o: passes on container build option (e.g., --build-arg SLIMMING=true for slim)
+
+LICENSE UPL 1.0
+
+Copyright (c) 2014,2024 Oracle and/or its affiliates.
+
+EOF
+  exit 0
+}
+
+# Validate packages
+checksumPackages() {
+  if hash md5sum 2>/dev/null; then
+    echo "Checking if required packages are present and valid..."
+    md5sum -c ${VERSION}/Checksum
+    # shellcheck disable=SC2181
+    if [ "$?" -ne 0 ]; then
+      echo "MD5 for required packages to build this image did not match!"
+      echo "Make sure to download missing files in folder $VERSION."
+      # shellcheck disable=SC2320
+      exit $?
+    fi
+  else
+    echo "Ignored MD5 sum, 'md5sum' command not available.";
+  fi
+}
+
+##############
+#### MAIN ####
+##############
+
+if [ "$#" -eq 0 ]; then
+  usage;
+fi
+
+# Parameters
+VERSION="12.2.0.1"
+SKIPMD5=0
+DOCKEROPS=""
+IMAGE_NAME=""
+SLIM="false"
+DOCKEROPS=" --build-arg SLIMMING=false"
+
+while getopts "hiv:o:t:" optname; do
+  case "$optname" in
+    "h")
+      usage
+      ;;
+    "i")
+      SKIPMD5=1
+      ;;
+    "v")
+      VERSION="$OPTARG"
+      ;;
+    "o")
+      DOCKEROPS="$OPTARG"
+      if [[ "$DOCKEROPS" != *"--build-arg SLIMMING="* ]]; then
+         DOCKEROPS+=" --build-arg SLIMMING=false"
+         SLIM="false"
+      fi
+      if [[ "$OPTARG" == *"--build-arg SLIMMING=true"* ]]; then
+        SLIM="true"
+      fi
+      ;;
+    "t")
+      IMAGE_NAME="$OPTARG"
+      ;;
+    "?")
+      usage;
+      ;;
+    *)
+    # Should not occur
+      echo "Unknown error while processing options inside buildContainerImage.sh"
+      ;;
+  esac
+done
+
+# Oracle Database Image Name
+if [ "${IMAGE_NAME}"x = "x" ] && [ "${SLIM}" == "true" ]; then
+   IMAGE_NAME="oracle/database-rac:${VERSION}-slim"
+elif [ "${IMAGE_NAME}"x = "x" ] && [ "${SLIM}" == "false" ]; then
+   IMAGE_NAME="oracle/database-rac:${VERSION}"
+else
+  echo "Image name is passed as a variable"
+fi
+
+ echo "Container Image set to : ${IMAGE_NAME}"
+
+# Go into version folder
+#cd "$VERSION" || exit
+
+if [ ! "$SKIPMD5" -eq 1 ]; then
+  checksumPackages
+else
+  echo "Ignored MD5 checksum."
+fi
+echo "=========================="
+echo "DOCKER info:"
+docker info
+echo "=========================="
+
+# Proxy settings
+PROXY_SETTINGS=""
+# shellcheck disable=SC2154
+if [ "${http_proxy}" != "" ]; then
+  PROXY_SETTINGS="$PROXY_SETTINGS --build-arg http_proxy=${http_proxy}"
+fi
+# shellcheck disable=SC2154
+if [ "${https_proxy}" != "" ]; then
+  PROXY_SETTINGS="$PROXY_SETTINGS --build-arg https_proxy=${https_proxy}"
+fi
+# shellcheck disable=SC2154
+if [ "${ftp_proxy}" != "" ]; then
+  PROXY_SETTINGS="$PROXY_SETTINGS --build-arg ftp_proxy=${ftp_proxy}"
+fi
+# shellcheck disable=SC2154
+if [ "${no_proxy}" != "" ]; then
+  PROXY_SETTINGS="$PROXY_SETTINGS --build-arg no_proxy=${no_proxy}"
+fi
+# shellcheck disable=SC2154
+if [ "$PROXY_SETTINGS" != "" ]; then
+  echo "Proxy settings were found and will be used during the build."
+fi
+
+# ################## #
+# BUILDING THE IMAGE #
+# ################## #
+echo "Building image '$IMAGE_NAME' ..."
+
+# BUILD THE IMAGE (replace all environment variables)
+BUILD_START=$(date '+%s')
+# shellcheck disable=SC2086
+docker build --force-rm=true --no-cache=true ${DOCKEROPS} ${PROXY_SETTINGS} --build-arg VERSION="${VERSION}" -t ${IMAGE_NAME} -f "${VERSION}"/Containerfile . || {
+  echo "There was an error building the image."
+ exit 1 +} +BUILD_END=$(date '+%s') +# shellcheck disable=SC2154,SC2003 +BUILD_ELAPSED=$((BUILD_END - BUILD_START)) + +echo "" +# shellcheck disable=SC2181,SC2320 +if [ $? -eq 0 ]; then +cat << EOF + Oracle Database container Image for Real Application Clusters (RAC) version $VERSION is ready to be extended: + + --> $IMAGE_NAME + + Build completed in $BUILD_ELAPSED seconds. + +EOF + +else + echo "Oracle Database Real Application Clusters Container Image was NOT successfully created. Check the output and correct any reported problems with the container build operation." +fi diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/cmdExec b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/cmdExec new file mode 100755 index 0000000000..553c908f21 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/cmdExec @@ -0,0 +1,16 @@ +#!/bin/bash + +TIMESTAMP=`date "+%Y-%m-%d"` +LOGFILE="/tmp/oracle_rac_cmd_${TIMESTAMP}.log" +# shellcheck disable=SC2046,SC2068 +echo $(date -u) " : " $@ >> $LOGFILE +# shellcheck disable=SC2124 +cmd=$@ + +$cmd + +if [ $? -eq 0 ]; then + exit 0 +else + exit 127 +fi diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/main.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/main.py new file mode 100755 index 0000000000..7dde66cf98 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/main.py @@ -0,0 +1,304 @@ +#!/usr/bin/python + +############################# +# Copyright 2020 - 2024, Oracle Corporation and/or affiliates. All rights reserved. +# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +# Contributor: saurabh.ahuja@oracle.com +############################ + +""" +This is the main file which calls other file to setup the real application clusters. 
+""" + +from oralogger import * +from orafactory import * +from oraenv import * +from oracommon import * + + +def main(): + + # Checking Comand line Args + opts="" + try: + opts, args = getopt.getopt(sys.argv[1:], '', ['help','resetpassword=','delracnode=','addtns=', 'checkracinst=', 'checkgilocal=','checkdbrole=','checkracdb=','checkracstatus','checkconnstr=','checkpdbconnstr=','setupdblsnr=','setuplocallsnr=','checkdbsvc=','modifydbsvc=','checkdbversion=','updatelsnrendp=','updateasmcount=','modifyscan=','updateasmdevices=','getasmdiskgroup=','getasmdisks=','getdgredundancy=','getasminstname=','getasminststatus=']) + except getopt.GetoptError: + pass + + # Initializing oraenv instance + oenv=OraEnv() + file_name = os.path.basename(__file__) + funcname = sys._getframe(1).f_code.co_name + + log_file_name = oenv.logfile_name("NONE") + + # Initialiing logger instance + oralogger = OraLogger(log_file_name) + console_handler = CHandler() + file_handler = FHandler() + stdout_handler = StdHandler() + # Setting next log handlers + stdout_handler.nextHandler = file_handler + file_handler.nextHandler = console_handler + console_handler.nextHandler = PassHandler() + + ocommon = OraCommon(oralogger,stdout_handler,oenv) + + for opt, arg in opts: + if opt in ('--help'): + oralogger.msg_ = '''{:^17}-{:^17} : You can pass parameter --help''' + stdout_handler.handle(oralogger) + elif opt in ('--resetpassword'): + file_name = oenv.logfile_name("RESET_PASSWORD") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("RESET_PASSWORD",arg) + elif opt in ('--delracnode'): + file_name = oenv.logfile_name("DEL_PARAMS") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("DEL_PARAMS",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + oenv.add_custom_variable("DEL_RACHOME","true") + oenv.add_custom_variable("DEL_GIHOME","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","racdelnode") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--addtns'): + file_name = oenv.logfile_name("ADD_TNS") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("TNS_PARAMS",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","racdelnode") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--checkracinst'): + file_name = oenv.logfile_name("CHECK_RAC_INST") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("CHECK_RAC_INST",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--checkgilocal'): + file_name = oenv.logfile_name("CHECK_GI_LOCAL") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("CHECK_GI_LOCAL",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if 
ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--checkracdb'): + file_name = oenv.logfile_name("CHECK_RAC_DB") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("CHECK_RAC_DB",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--checkracstatus'): + file_name = oenv.logfile_name("CHECK_RAC_STATUS") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("CHECK_RAC_STATUS","true") + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--checkdbrole'): + file_name = oenv.logfile_name("CHECK_DB_ROLE") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("CHECK_DB_ROLE",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--checkconnstr'): + file_name = oenv.logfile_name("CHECK_CONNECT_STR") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("CHECK_CONNECT_STR",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--checkpdbconnstr'): + file_name = oenv.logfile_name("CHECK_PDB_CONNECT_STR") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("CHECK_PDB_CONNECT_STR",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--setupdblsnr'): + file_name = oenv.logfile_name("SETUP_DB_LSNR") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("NEW_DB_LSNR_ENDPOINTS",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + elif opt in ('--setuplocallsnr'): + file_name = oenv.logfile_name("SETUP_LOCAL_LSNR") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("NEW_LOCAL_LISTENER",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--checkdbsvc'): + file_name = 
oenv.logfile_name("CHECK_DB_SVC") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("CHECK_DB_SVC",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--modifydbsvc'): + file_name = oenv.logfile_name("MODIFY_DB_SVC") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("MODIFY_DB_SVC",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--checkdbversion'): + file_name = oenv.logfile_name("CHECK_DB_VERSION") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("CHECK_DB_VERSION",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--modifyscan'): + file_name = oenv.logfile_name("MODIFY_SCAN") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("MODIFY_SCAN",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--updateasmcount'): + file_name = oenv.logfile_name("UPDATE_ASMCOUNT") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("UPDATE_ASMCOUNT",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--updateasmdevices'): + file_name = oenv.logfile_name("UPDATE_ASMDEVICES") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("UPDATE_ASMDEVICES",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--getasmdiskgroups'): + file_name = oenv.logfile_name("LIST_ASMDG") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("LIST_ASMDG",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--getasmdisks'): + file_name = oenv.logfile_name("LIST_ASMDISKS") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) 
+ oenv.add_custom_variable("LIST_ASMDISKS",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--getdgredundancy'): + file_name = oenv.logfile_name("LIST_ASMDGREDUNDANCY") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("LIST_ASMDGREDUNDANCY",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--getasminstname'): + file_name = oenv.logfile_name("LIST_ASMINSTNAME") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("LIST_ASMINSTNAME",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--getasminststatus'): + file_name = oenv.logfile_name("LIST_ASMINSTSTATUS") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("LIST_ASMINSTSTATUS",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + elif opt in ('--updatelsnrendp'): + file_name = oenv.logfile_name("UPDATE_LISTENERENDP") + oralogger.filename_ = file_name + ocommon.log_info_message("=======================================================================",file_name) + oenv.add_custom_variable("UPDATE_LISTENERENDP",arg) + oenv.add_custom_variable("CUSTOM_RUN_FLAG","true") + if ocommon.check_key("OP_TYPE",oenv.get_env_dict()): + oenv.update_key("OP_TYPE","miscops") + else: + oenv.add_custom_variable("OP_TYPE","miscops") + else: + pass + + # Initializing orafactory instances + oralogger.msg_ = '''{:^17}-{:^17} : Calling OraFactory to start the setup'''.format(file_name,funcname) + stdout_handler.handle(oralogger) + orafactory = OraFactory(oralogger,stdout_handler,oenv,ocommon) + + # Get the ora objects + ofactory=orafactory.get_ora_objs() + + # Traverse through returned factory objects and execute the setup function + for obj in ofactory: + obj.setup() + +# Using the special variable +if __name__=="__main__": + main() diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraasmca.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraasmca.py new file mode 100755 index 0000000000..05a6cc98dd --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraasmca.py @@ -0,0 +1,148 @@ +#!/usr/bin/python + +############################# +# Copyright 2021, Oracle Corporation and/or affiliates. All rights reserved. 
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +############################ + +""" + This file contains to the code call different classes objects based on setup type +""" + +from oralogger import * +from oraenv import * +from oracommon import * +from oramachine import * +from orasetupenv import * +from orasshsetup import * +from oraracadd import * + +import os +import sys + +class OraAsmca: + """ + This class performs the ASMCA operations + """ + def __init__(self,oralogger,orahandler,oraenv,oracommon,oracvu,orasetupssh): + try: + self.ologger = oralogger + self.ohandler = orahandler + self.oenv = oraenv.get_instance() + self.ocommon = oracommon + self.ocvu = oracvu + self.orasetupssh = orasetupssh + self.ora_env_dict = oraenv.get_env_vars() + self.file_name = os.path.basename(__file__) + except BaseException as ex: + ex_type, ex_value, ex_traceback = sys.exc_info() + trace_back = traceback.extract_tb(ex_traceback) + stack_trace = list() + for trace in trace_back: + stack_trace.append("File : %s , Line : %d, Func.Name : %s, Message : %s" % (trace[0], trace[1], trace[2], trace[3])) + self.ocommon.log_info_message(ex_type.__name__,self.file_name) + self.ocommon.log_info_message(ex_value,self.file_name) + self.ocommon.log_info_message(stack_trace,self.file_name) + + def setup(self): + """ + This function setup the grid on this machine + """ + pass + + def validate_dg(self,device_list,device_prop,type): + """ + Check dg if it exist + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + device_prop,cname,cred,casm,crdbms,asdvm,cuasize=self.get_device_prop(device_prop,type) + self.ocommon.log_info_message("device prop set to :" + device_prop + " DG Name: " + cname + " Redudancy : " + cred, self.file_name) + cmd='''su - {0} -c "{1}/bin/asmcmd lsdg {2}"'''.format(giuser,gihome,cname) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + if self.ocommon.check_substr_match(output,cname): + return True + else: + return False + + def create_dg(self,device_list,device_prop,type): + """ + This function creates the disk group + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + disk_lst=self.get_device_list(device_list) + self.ocommon.log_info_message("The type is set to :" + type,self.file_name) + device_prop,cname,cred,casm,crdbms,asdvm,cuasize=self.get_device_prop(device_prop,type) + self.ocommon.log_info_message("device prop set to :" + device_prop + " DG Name: " + cname + " Redudancy : " + cred, self.file_name) + cmd='''su - {0} -c "{1}/bin/asmca -silent -createDiskGroup {3} {2}"'''.format(giuser,gihome,disk_lst,device_prop) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + + def get_device_list(self,device_list): + """ + This function returns the device_list + """ + disklst="" + for disk in device_list.split(','): + disklst +=""" -disk '{0}'""".format(disk) + + if disklst: + return disklst + else: + self.ocommon.log_error_message("disk string is set to None for diskgroup creation. 
Exiting..",self.file_name) + self.prog_exit("127") + + def get_device_prop(self,device_prop,type): + """ + This function returns the device_props + """ + cname="" + cred="" + casm="" + crdbms="" + cadvm="" + causize="" + cmd="" + + self.ocommon.log_info_message("The type is set to :" + type,self.file_name) + if device_prop: + cvar_dict=dict(item.split("=") for item in device_prop.split(";")) + for ckey in cvar_dict.keys(): + if ckey == 'name': + cname = cvar_dict[ckey] + if ckey == 'redundancy': + cred = cvar_dict[ckey] + if ckey == 'compatibleasm': + casm = cvar_dict[ckey] + if ckey == 'compatiblerdbms': + crdbms = cvar_dict[ckey] + if ckey == 'compatibleadvm': + cadvm = cvar_dict[ckey] + if ckey == 'au_size': + causize = cvar_dict[ckey] + + if not cname: + cmd +=''' -diskGroupName {0}'''.format(type) + cname=type + else: + cmd +=''' -diskGroupName {0}'''.format(cname) + if not cred: + cmd +=''' -redundancy {0}'''.format("EXTERNAL") + cred="EXTERNAL" + else: + cmd +=''' -redundancy {0}'''.format(cred) + if casm: + cmd +=""" -compatible.asm '{0}'""".format(casm) + if crdbms: + cmd +=""" -compatible.rdbms '{0}'""".format(crdbms) + if cadvm: + cmd +=""" -compatible.advm '{0}'""".format(cadvm) + if causize: + cmd +=""" -au_size '{0}'""".format(causize) + + if cmd: + return cmd,cname,cred,casm,crdbms,cadvm,causize + else: + self.ocommon.log_error_message("CMD is set to None for diskgroup creation. Exiting..",self.file_name) + self.ocommon.prog_exit("127") diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oracommon.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oracommon.py new file mode 100755 index 0000000000..2341a36d34 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oracommon.py @@ -0,0 +1,3291 @@ +#!/usr/bin/python + +############################# +# Copyright 2020, Oracle Corporation and/or affiliates. All rights reserved. 
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +############################ + +from oralogger import * +from oraenv import * +import subprocess +import sys +import time +import datetime +import os +import getopt +import shlex +import json +import logging +import socket +import re +import os.path +import socket +import stat +import itertools +import string +import random +import glob +import pathlib + +class OraCommon: + def __init__(self,oralogger,orahandler,oraenv): + self.ologger = oralogger + self.ohandler = orahandler + self.oenv = oraenv.get_instance() + self.ora_env_dict = oraenv.get_env_vars() + self.file_name = os.path.basename(__file__) + + def run_sqlplus(self,cmd,sql_cmd,dbenv): + """ + This function execute the ran sqlplus or rman script and return the output + """ + try: + message="Received Command : {0}\n{1}".format(self.mask_str(cmd),self.mask_str(sql_cmd)) + self.log_info_message(message,self.file_name) + sql_cmd=self.unmask_str(sql_cmd) + cmd=self.unmask_str(cmd) +# message="Received Command : {0}\n{1}".format(cmd,sql_cmd) +# self.log_info_message(message,self.file_name) + p = subprocess.Popen(cmd,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE,env=dbenv,shell=True) + p.stdin.write(sql_cmd.encode()) + # (stdout,stderr), retcode = p.communicate(sqlplus_script.encode('utf-8')), p.returncode + (stdout,stderr),retcode = p.communicate(),p.returncode + # stdout_lines = stdout.decode('utf-8').split("\n") + except: + error_msg=sys.exc_info() + self.log_error_message(error_msg,self.file_name) + self.prog_exit(self) + + return stdout.decode(),stderr.decode(),retcode + + def execute_cmd(self,cmd,env,dir): + """ + Execute the OS command on host + """ + try: + message="Received Command : {0}".format(self.mask_str(cmd)) + self.log_info_message(message,self.file_name) + cmd=self.unmask_str(cmd) + out = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + (output,error),retcode = out.communicate(),out.returncode + except: + error_msg=sys.exc_info() + self.log_error_message(error_msg,self.file_name) + self.prog_exit(self) + + return output.decode(),error.decode(),retcode + + def mask_str(self,mstr): + """ + Function to mask the string. + """ + newstr=None + if self.oenv.encrypt_str__: + newstr=mstr.replace('HIDDEN_STRING','********') + # self.log_info_message(newstr,self.file_name) + if newstr: + # message = "Masked the string as encryption flag is set in the singleton class" + # self.log_info_message(message,self.file_name) + return newstr + else: + return mstr + + + def unmask_str(self,mstr): + """ + Function to unmask the string. + """ + newstr=None + if self.oenv.encrypt_str__: + newstr=mstr.replace('HIDDEN_STRING',self.oenv.original_str__.rstrip()) + # self.log_info_message(newstr,self.file_name) + if newstr: + # message = "Unmasked the encrypted string and returning original string from singleton class" + # self.log_info_message(message,self.file_name) + return newstr + else: + return mstr + + def set_mask_str(self,mstr): + """ + Function to unmask the string. 
+ """ + if mstr: + # message = "Setting encrypted String flag to True and original string in singleton class" + # self.log_info_message(message,self.file_name) + self.oenv.encrypt_str__ = True + self.oenv.original_str__ = mstr + else: + message = "Masked String is empty so no change required in encrypted string flag and original string in singleton class" + self.log_info_message(message,self.file_name) + + def unset_mask_str(self): + """ + Function to unmask the string. + """ + # message = "Un-setting encrypted String flag and original string to None in Singleton class" + # self.log_info_message(message,self.file_name) + self.oenv.encrypt_str__ = None + self.oenv.original_str__ = None + + def prog_exit(self,message): + """ + This function exit the program because of some error + """ + self.update_statefile("failed") + sys.exit(127) + + def update_statefile(self,message): + """ + This function update the state file + """ + file=self.oenv.statelogfile_name() + if self.check_file(file,"local",None,None): + self.write_file(file,message) + + def log_info_message(self,lmessage,fname): + """ + Print the INFO message in the logger + """ + funcname = sys._getframe(1).f_code.co_name + message = '''{:^15}-{:^20}:{}'''.format(fname.split('.', 1)[0],funcname.replace("_", ""),lmessage) + self.ologger.msg_ = message + self.ologger.logtype_ = "INFO" + self.ohandler.handle(self.ologger) + + def log_error_message(self,lmessage,fname): + """ + Print the Error message in the logger + """ + funcname=sys._getframe(1).f_code.co_name + message='''{:^15}-{:^20}:{}'''.format(fname.split('.', 1)[0],funcname.replace("_", ""),lmessage) + self.ologger.msg_=message + self.ologger.logtype_="ERROR" + self.ohandler.handle(self.ologger) + + def log_warn_message(self,lmessage,fname): + """ + Print the Error message in the logger + """ + funcname=sys._getframe(1).f_code.co_name + message='''{:^15}-{:^20}:{}'''.format(fname.split('.', 1)[0],funcname.replace("_", ""),lmessage) + self.ologger.msg_=message + self.ologger.logtype_="WARN" + self.ohandler.handle(self.ologger) + + def check_sql_err(self,output,err,retcode,status): + """ + Check if there are any error in sql command output + """ + match=None + msg2='''Sql command failed.Flag is set not to ignore this error.Please Check the logs,Exiting the Program!''' + msg3='''Sql command failed.Flag is set to ignore this error!''' + self.log_info_message("output : " + str(output or "no Output"),self.file_name) + # self.log_info_message("Error : " + str(err or "no Error"),self.file_name) + # self.log_info_message("Sqlplus return code : " + str(retcode),self.file_name) + # self.log_info_message("Command Check Status Set to :" + str(status),self.file_name) + + if status: + if (retcode!=0): + self.log_info_message("Error : " + str(err or "no Error"),self.file_name) + self.log_error_message("Sql Login Failed.Please Check the logs,Exiting the Program!",self.file_name) + self.prog_exit(self) + + match=re.search("(?i)(?m)error",output) + if status: + if (match): + self.log_error_message(msg2,self.file_name) + self.prog_exit("error") + else: + self.log_info_message("Sql command completed successfully",self.file_name) + else: + if (match): + self.log_warn_message("Sql command failed. 
Flag is set to ignore the error.",self.file_name) + else: + self.log_info_message("Sql command completed sucessfully.",self.file_name) + + def check_dgmgrl_err(self,output,err,retcode,status): + """ + Check if there are any error in sql command output + """ + match=None + msg2='''DGMGRL command failed.Flag is set not to ignore this error.Please Check the logs,Exiting the Program!''' + msg3='''DGMGRL command failed.Flag is set to ignore this error!''' + self.log_info_message("output : " + str(output or "no Output"),self.file_name) + + if status: + if (retcode!=0): + self.log_info_message("Error : " + str(err or "no Error"),self.file_name) + self.log_error_message("DGMGRL Login Failed.Please Check the logs,Exiting the Program!",self.file_name) + self.prog_exit(self) + + match=re.search("(?i)(?m)failed",output) + if status: + if (match): + self.log_error_message(msg2,self.file_name) + self.prog_exit("error") + else: + self.log_info_message("DGMGRL command completed successfully",self.file_name) + else: + if (match): + self.log_warn_message("DGMGRL command failed. Flag is set to ignore the error.",self.file_name) + else: + self.log_info_message("DGGRL command completed sucessfully.",self.file_name) + + def check_os_err(self,output,err,retcode,status): + """ + Check if there are any error in OS command execution + """ + msg1='''OS command returned code : {0} and returned output : {1}'''.format(str(retcode),str(output or "no Output")) + msg2='''OS command returned code : {0}, returned error : {1} and returned output : {2}'''.format(str(retcode),str(err or "no returned error"),str(output or "no retruned output")) + msg3='''OS command failed. Flag is set to ignore this error!''' + + if status: + if (retcode != 0): + self.log_error_message(msg2,self.file_name) + self.prog_exit(self) + else: + self.log_info_message(msg1,self.file_name) + else: + if (retcode != 0): + self.log_warn_message(msg2,self.file_name) + self.log_warn_message(msg3,self.file_name) + else: + self.log_info_message(msg1,self.file_name) + + def check_key(self,key,env_dict): + """ + Check the key if it exist in dictionary. + Attributes: + key (string): String to check if key exist in dictionary + env_dict (dict): Contains the env variable related to seup + """ + if key in env_dict: + return True + else: + return False + + def empty_key(self,key): + """ + key is empty and print failure message. + Attributes: + key (string): String is empty + """ + msg='''Variable {0} is not defilned. Exiting!'''.format(key) + self.log_error_message(msg,self.file_name) + self.prog_exit(self) + + def add_key(self,key,value,env_dict): + """ + Add the key in the dictionary. + Attributes: + key (string): key String to add in the dictionary + value (String): value String to add in dictionary + + Return: + dict + """ + if self.check_key(key,env_dict): + msg='''Variable {0} already exist in the env variables'''.format(key) + self.log_info_message(msg,self.file_name) + else: + if value: + env_dict[key] = value + self.oenv.update_env_vars(env_dict) + else: + msg='''Variable {0} value is not defilned to add in the env variables. Exiting!'''.format(value) + self.log_error_message(msg,self.file_name) + self.prog_exit(self) + + return env_dict + + def update_key(self,key,value,env_dict): + """ + update the key in the dictionary. 
+ Attributes: + key (string): key String to update in the dictionary + value (String): value String to update in dictionary + + Return: + dict + """ + if self.check_key(key,env_dict): + if value: + env_dict[key] = value + self.oenv.update_env_vars(env_dict) + else: + msg='''Variable {0} value is not defined to update in the env variables!'''.format(key) + self.log_warn_message(msg,self.file_name) + else: + msg='''Variable {0} does not exist in the env variables'''.format(key) + self.log_info_message(msg,self.file_name) + + return env_dict + + def read_file(self,fname): + """ + Read the contents of a file and returns the contents to end user + Attributes: + fname (string): file to be read + + Return: + file data (string) + """ + f1 = open(fname, 'r') + fdata = f1.read() + f1.close + return fdata + + def write_file(self,fname,fdata): + """ + write the contents to a file + Attributes: + fname (string): file to be written + fdata (string): COnetents to be written + + Return: + file data (string) + """ + f1 = open(fname, 'w') + f1.write(fdata) + f1.close + + def append_file(self,fname,fdata): + """ + appened the contents to a file + Attributes: + fname (string): file to be written + fdata (string): COnetents to be written + + Return: + file data (string) + """ + f1 = open(fname, 'a') + f1.write(fdata) + f1.close + + def create_dir(self,dir,local,remote,user,group): + """ + Create dir locally or remotely + Attributes: + dir (string): dir to be created + local (boolean): dir to craetes locally + remote (boolean): dir to be created remotely + node (string): remote node name on which dir to be created + user (string): remote user to be connected + """ + self.log_info_message("Inside create_dir()",self.file_name) + if local: + if not os.path.isdir(dir): + cmd='''mkdir -p {0}'''.format(dir) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + cmd='''chown -R {0}:{1} {2}'''.format(user,group,dir) + output,error,retcode=self.execute_cmd(cmd,None,None) + cmd='''chmod 755 {0}'''.format(dir) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + else: + msg='''Dir {0} already exist'''.format(dir) + self.log_info_message(msg,self.file_name) + + + def create_file(self,file,local,remote,user): + """ + Create dir locally or remotely + Attributes: + file (string): file to be created + local (boolean): dir to craetes locally + remote (boolean): dir to be created remotely + node (string): remote node name on which dir to be created + user (string): remote user to be connected + """ + self.log_info_message("Inside create_file()",self.file_name) + if local: + if not os.path.isfile(file): + cmd='''touch {0}'''.format(file) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + + def create_pfile(self,pfile,spfile): + """ + Create pfile from spfile locally + """ + self.log_info_message("Inside create_pfile()",self.file_name) + osuser,dbhome,dbbase,oinv=self.get_db_params() + osid=self.ora_env_dict["GOLD_SID_NAME"] + + sqlpluslogincmd=self.get_sqlplus_str(dbhome,osid,osuser,"sys",None,None,None,osid,None,None,None) + sqlcmd=""" + create pfile='{0}' from spfile='{1}'; + """.format(pfile,spfile) + self.log_info_message("Running the sqlplus command to create pfile from spfile: " + sqlcmd,self.file_name) + output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return 
status",self.file_name) + self.check_sql_err(output,error,retcode,True) + + def create_spfile(self,spfile,pfile): + """ + Create spfile from pfile locally + """ + self.log_info_message("Inside create_spfile()",self.file_name) + osuser,dbhome,dbbase,oinv=self.get_db_params() + osid=self.ora_env_dict["DB_NAME"] + "1" + + sqlpluslogincmd=self.get_sqlplus_str(dbhome,osid,osuser,"sys",None,None,None,osid,None,None,None) + sqlcmd=""" + create spfile='{0}' from pfile='{1}'; + """.format(spfile,pfile) + self.log_info_message("Running the sqlplus command to create spfile from pfile: " + sqlcmd,self.file_name) + output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,True) + + def resetlogs(self,osid): + """ + Reset the database logs + """ + self.log_info_message("Inside resetlogs()",self.file_name) + osuser,dbhome,dbbase,oinv=self.get_db_params() + + sqlpluslogincmd=self.get_sqlplus_str(dbhome,osid,osuser,"sys",None,None,None,osid,None,None,None) + sqlcmd=''' + alter database open resetlogs; + ''' + self.log_info_message("Running the sqlplus command to resetlogs" + sqlcmd,self.file_name) + output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,True) + + def check_file(self,file,local,remote,user): + """ + check locally or remotely + Attributes: + file (string): file to be created + local (boolean): dir to craetes locally + remote (boolean): dir to be created remotely + node (string): remote node name on which dir to be created + user (string): remote user to be connected + """ + self.log_info_message("Inside check_file()",self.file_name) + if local: + if os.path.isfile(file): + return True + else: + return False + + + def latest_file(self,dir,): + """ + List the latest file in a directory + """ + files = os.listdir(dir) + paths = [os.path.join(dir, basename) for basename in files] + return max(paths, key=os.path.getctime) + + def latest_dir(self,dir,subdir): + """ + Get the latest dir matching a regexp + """ + self.log_info_message(" Received Params : basedir=" + dir + " subdir=" + subdir,self.file_name) + if subdir is None: + subdir = '*/' + dir1=sorted(pathlib.Path(dir).glob(subdir), key=os.path.getmtime)[-1] + return dir1 + + def shutdown_db(self,osid): + """ + Shutdown the database + """ + osuser,dbhome,dbbase,oinv=self.get_db_params() + self.log_info_message("Inside shutdown_db()",self.file_name) + sqlpluslogincmd=self.get_sqlplus_str(dbhome,osid,osuser,"sys",None,None,None,osid,None,None,None) + + sqlcmd=''' + shutdown immediate; + ''' + self.log_info_message("Running the sqlplus command to shutdown the database: " + sqlcmd,self.file_name) + output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,False) + + def start_db(self,osid,mode,pfile=None): + """ + start the database + """ + osuser,dbhome,dbbase,oinv=self.get_db_params() + self.log_info_message("Inside start_db()",self.file_name) + cmd="" + if mode is None: + mode=" " + + if pfile is not None: + cmd='''startup {1} pfile={0}'''.format(pfile,mode) + else: + cmd='''startup {0}'''.format(mode) + + 
sqlpluslogincmd=self.get_sqlplus_str(dbhome,osid,osuser,"sys",None,None,None,osid,None,None,None)
+ sqlcmd='''
+ {0};
+ '''.format(cmd)
+ self.log_info_message("Running the sqlplus command to start the database: " + sqlcmd,self.file_name)
+ output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None)
+ self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name)
+ self.check_sql_err(output,error,retcode,True)
+
+ def check_substr_match(self,source_str,sub_str):
+ """
+ Check if the substring exists in the source string
+ """
+ # self.log_info_message("Inside check_substr_match()",self.file_name)
+ if (source_str.find(sub_str) != -1):
+ return True
+ else:
+ return False
+
+ def check_status_value(self,match):
+ """
+ Return 'completed' or 'notcompleted'
+ """
+ # self.log_info_message("Inside check_status_value()",self.file_name)
+ if match:
+ return 'completed'
+ else:
+ return 'notcompleted'
+
+ def remove_file(self,fname):
+ """
+ Remove the file if it exists
+ """
+ self.log_info_message("Inside remove_file()",self.file_name)
+ if os.path.exists(fname):
+ os.remove(fname)
+
+ def get_global_dbdomain(self,ohost,gdbname):
+ """
+ Get the global dbname (db name plus the host domain)
+ """
+ domain = self.get_host_domain()
+ if domain:
+ global_dbname = gdbname + domain
+ else:
+ global_dbname = gdbname
+
+ return global_dbname
+
+
+########## Checking variable is set ############
+ def check_env_variable(self,key,eflag):
+ """
+ Check if env variable is set. Exit if it is not set and eflag is set
+ """
+ #self.ora_env_dict=self.oenv.get_env_vars()
+ if self.check_key(key,self.ora_env_dict):
+ self.log_info_message("Env variable " + key + " is set. Check passed!",self.file_name)
+ else:
+ if eflag:
+ self.log_error_message("Env variable " + key + " is not set " + ".Exiting..", self.file_name)
+ self.prog_exit("127")
+ else:
+ self.log_warn_message("Env variable " + key + " is not set " + ".Ignoring the variable and proceeding further..", self.file_name)
+
+ return True
+
+ def get_optype(self):
+ """
+ This function returns the op_type based on nodes
+ """
+ racenvfile=self.get_envfile()
+ if racenvfile:
+ pass
+
+ def get_envfile(self):
+ """
+ It returns the RAC env file
+ Returns:
+ str: return the rac env file
+ """
+ racenvfile=""
+ if self.check_key("RAC_ENV_FILE",self.ora_env_dict):
+ racenvfile=self.ora_env_dict["RAC_ENV_FILE"]
+ else:
+ racenvfile="/etc/rac_env_vars/envfile"
+
+ return racenvfile
+
+ def populate_rac_env_vars(self):
+ """
+ Populate RAC env vars as key value pairs
+ """
+ racenvfile=self.get_envfile()
+
+ if os.path.isfile(racenvfile):
+ with open(racenvfile) as fp:
+ for line in fp:
+ newstr=None
+ d=None
+ newstr=line.replace("export ","").strip()
+ self.log_info_message(newstr + " newstr is populated: ",self.file_name)
+ if len(newstr.split("=")) == 2:
+ key=newstr.split("=")[0]
+ value=newstr.split("=")[1]
+ # self.log_info_message(key + " key is populated: " + self.ora_env_dict[key] ,self.file_name)
+ if not self.check_key(key,self.ora_env_dict):
+ self.ora_env_dict=self.add_key(key,value,self.ora_env_dict)
+ self.log_info_message(key + " key is populated: " + self.ora_env_dict[key] ,self.file_name)
+ else:
+ self.log_info_message(key + " key exists with value " + self.ora_env_dict[key] ,self.file_name)
+ pass
+ # self.ora_env_dict=self.ora_env_dict
+ # print(self.ora_env_dict
+
+########### Get the install Node #######
+ def get_installnode(self):
+ """
+ This function returns 
the install node name + Returns: + string: returns the install node name + string : return public host name + """ + install_node=None + pubhost=None + + if self.check_key("INSTALL_NODE",self.ora_env_dict): + install_node=self.ora_env_dict["INSTALL_NODE"] + else: + pass + + pubhost=self.get_public_hostname() + + return install_node,pubhost + +########## Ping the IP ############### + def ping_ip(self,ip,status): + """ + Check if IP is pingable or not + """ + cmd='''ping -c 3 {0}'''.format(ip) + output,error,retcode=self.execute_cmd(cmd,None,None) + if status: + self.check_os_err(output,error,retcode,True) + else: + self.check_os_err(output,error,retcode,None) + +########## Ping the IP ############### + def ping_host(self,host): + """ + Check if IP is pingable or not + """ + cmd='''ping -c 3 {0}'''.format(host) + output,error,retcode=self.execute_cmd(cmd,None,None) + return retcode + +########### IP Validations ############ + def validate_ip(self,ip): + """ + validate the IP + """ + try: + socket.inet_pton(socket.AF_INET, ip) + except socket.error: # not a valid address + return False + + return True + +######### Block Device Check ############# + def disk_exists(self,path): + """ + Check if block device exist + """ + try: + if self.check_key("ASM_ON_NAS",self.ora_env_dict): + if self.ora_env_dict["ASM_ON_NAS"] == 'True': + return stat.S_ISREG(os.stat(path).st_mode) + else: + return False + else: + return stat.S_ISBLK(os.stat(path).st_mode) + except: + return False + +######### Get Password ############## + def get_os_password(self): + """ + get the OS password + """ + ospasswd=self.get_password(None) + return ospasswd + + def get_asm_passwd(self): + """ + get the ASM password + """ + asmpasswd=self.get_password(None) + return asmpasswd + + def get_db_passwd(self): + """ + get the DB password + """ + dbpasswd=self.get_password(None) + return dbpasswd + + def get_tde_passwd(self): + """ + get the tde password + """ + tdepasswd=self.get_password("TDE_PASSWORD") + return tdepasswd + + def get_sys_passwd(self): + """ + get the sys user password + """ + syspasswd=self.get_password(None) + return syspasswd + + def get_password(self,key): + """ + get the password + """ + svolume=None + pwdfile=None + pwdkey=None + passwdfile=None + keyvolume=None + + if key is not None: + if key == 'TDE_PASSWORD': + svolume,pwdfile,pwdkey,passwdfile,keyvolume=self.get_tde_passwd_details() + else: + svolume,pwdfile,pwdkey,passwdfile,keyvolume=self.get_db_passwd_details() + + if self.check_key("PWD_VOLUME",self.ora_env_dict): + pwd_volume=self.ora_env_dict["PWD_VOLUME"] + else: + pwd_volume="/var/tmp" + + password=self.set_password(svolume,pwdfile,pwdkey,passwdfile,keyvolume,pwd_volume) + return password + + def get_tde_passwd_details(self): + """ + This function return the TDE parameters + """ + if self.check_key("TDE_SECRET_VOLUME",self.ora_env_dict): + self.log_info_message("TDE_SECRET_VOLUME set to : ",self.ora_env_dict["TDE_SECRET_VOLUME"]) + msg='''TDE_SECRET_VOLUME passed as an env variable and set to {0}'''.format(self.ora_env_dict["TDE_SECRET_VOLUME"]) + else: + self.ora_env_dict=self.add_key("TDE_SECRET_VOLUME","/run/.tdesecret",self.ora_env_dict) + msg='''TDE_SECRET_VOLUME not passed as an env variable. 
Setting default to {0}'''.format(self.ora_env_dict["TDE_SECRET_VOLUME"]) + self.log_warn_message(msg,self.file_name) + + if self.check_key("TDE_KEY_SECRET_VOLUME",self.ora_env_dict): + self.log_info_message("Tde Secret_Volume set to : ",self.ora_env_dict["TDE_KEY_SECRET_VOLUME"]) + msg='''TDE_KEY_SECRET_VOLUME passed as an env variable and set to {0}'''.format(self.ora_env_dict["TDE_KEY_SECRET_VOLUME"]) + else: + if self.check_key("TDE_SECRET_VOLUME",self.ora_env_dict): + self.ora_env_dict=self.add_key("TDE_KEY_SECRET_VOLUME",self.ora_env_dict["TDE_SECRET_VOLUME"],self.ora_env_dict) + msg='''TDE_KEY_SECRET_VOLUME not passed as an env variable. Setting default to {0}'''.format(self.ora_env_dict["TDE_KEY_SECRET_VOLUME"]) + self.log_warn_message(msg,self.file_name) + + if self.check_key("TDE_PWD_FILE",self.ora_env_dict): + msg='''TDE_PWD_FILE passed as an env variable and set to {0}'''.format(self.ora_env_dict["TDE_PWD_FILE"]) + else: + self.ora_env_dict=self.add_key("TDE_PWD_FILE","tde_pwdfile.enc",self.ora_env_dict) + msg='''TDE_PWD_FILE not passed as an env variable. Setting default to {0}'''.format(self.ora_env_dict["TDE_PWD_FILE"]) + self.log_warn_message(msg,self.file_name) + + if self.check_key("TDE_PWD_KEY",self.ora_env_dict): + msg='''TDE_PWD_KEY passed as an env variable and set to {0}'''.format(self.ora_env_dict["TDE_PWD_KEY"]) + else: + self.ora_env_dict=self.add_key("TDE_PWD_KEY","tdepwd.key",self.ora_env_dict) + msg='''TDE_PWD_KEY not passed as an env variable. Setting default to {0}'''.format(self.ora_env_dict["TDE_PWD_KEY"]) + self.log_warn_message(msg,self.file_name) + + return self.ora_env_dict["TDE_SECRET_VOLUME"],self.ora_env_dict["TDE_PWD_FILE"],self.ora_env_dict["TDE_PWD_KEY"],"tdepwdfile",self.ora_env_dict["TDE_KEY_SECRET_VOLUME"] + + def get_db_passwd_details(self): + """ + This function return the db passwd paameters + """ + if self.check_key("SECRET_VOLUME",self.ora_env_dict): + self.log_info_message("Secret_Volume set to : ",self.ora_env_dict["SECRET_VOLUME"]) + msg='''SECRET_VOLUME passed as an env variable and set to {0}'''.format(self.ora_env_dict["SECRET_VOLUME"]) + else: + self.ora_env_dict=self.add_key("SECRET_VOLUME","/run/secrets",self.ora_env_dict) + msg='''SECRET_VOLUME not passed as an env variable. Setting default to {0}'''.format(self.ora_env_dict["SECRET_VOLUME"]) + self.log_warn_message(msg,self.file_name) + + if self.check_key("KEY_SECRET_VOLUME",self.ora_env_dict): + self.log_info_message("Secret_Volume set to : ",self.ora_env_dict["KEY_SECRET_VOLUME"]) + msg='''KEY_SECRET_VOLUME passed as an env variable and set to {0}'''.format(self.ora_env_dict["KEY_SECRET_VOLUME"]) + else: + if self.check_key("SECRET_VOLUME",self.ora_env_dict): + self.ora_env_dict=self.add_key("KEY_SECRET_VOLUME",self.ora_env_dict["SECRET_VOLUME"],self.ora_env_dict) + msg='''KEY_SECRET_VOLUME not passed as an env variable. Setting default to {0}'''.format(self.ora_env_dict["KEY_SECRET_VOLUME"]) + self.log_warn_message(msg,self.file_name) + + if self.check_key("DB_PWD_FILE",self.ora_env_dict): + msg='''DB_PWD_FILE passed as an env variable and set to {0}'''.format(self.ora_env_dict["DB_PWD_FILE"]) + else: + self.ora_env_dict=self.add_key("DB_PWD_FILE","common_os_pwdfile.enc",self.ora_env_dict) + msg='''DB_PWD_FILE not passed as an env variable. 
Setting default to {0}'''.format(self.ora_env_dict["DB_PWD_FILE"]) + self.log_warn_message(msg,self.file_name) + + if self.check_key("PWD_KEY",self.ora_env_dict): + msg='''PWD_KEY passed as an env variable and set to {0}'''.format(self.ora_env_dict["PWD_KEY"]) + else: + self.ora_env_dict=self.add_key("PWD_KEY","pwd.key",self.ora_env_dict) + msg='''PWD_KEY not passed as an env variable. Setting default to {0}'''.format(self.ora_env_dict["PWD_KEY"]) + self.log_warn_message(msg,self.file_name) + + if self.check_key("PASSWORD_FILE",self.ora_env_dict): + msg='''PASSWORD_FILE passed as an env variable and set to {0}'''.format(self.ora_env_dict["PASSWORD_FILE"]) + else: + self.ora_env_dict=self.add_key("PASSWORD_FILE","dbpasswd.file",self.ora_env_dict) + msg='''PASSWORD_FILE not passed as an env variable. Setting default to {0}'''.format(self.ora_env_dict["PASSWORD_FILE"]) + self.log_warn_message(msg,self.file_name) + + return self.ora_env_dict["SECRET_VOLUME"],self.ora_env_dict["DB_PWD_FILE"],self.ora_env_dict["PWD_KEY"],self.ora_env_dict["PASSWORD_FILE"],self.ora_env_dict["KEY_SECRET_VOLUME"] + + def set_password(self,secret_volume,passwd_file,key_file,dbpasswd_file,key_secret_volume,pwd_volume): + passwd_file_flag=False + password=None + password_file=None + passwordfile1='''{0}/{1}'''.format(secret_volume,passwd_file) + passwordkeyfile='''{0}/{1}'''.format(secret_volume,key_file) + passwordfile2='''{0}/{1}'''.format(secret_volume,dbpasswd_file) + self.log_info_message("Secret volume file set to : " + secret_volume,self.file_name) + self.log_info_message("Password file set to : " + passwd_file,self.file_name) + self.log_info_message("key file set to : " + key_file,self.file_name) + self.log_info_message("dbpasswd file set to : " + dbpasswd_file,self.file_name) + self.log_info_message("key secret volume set to : " + key_secret_volume,self.file_name) + self.log_info_message("pwd volume set : " + pwd_volume,self.file_name) + self.log_info_message("passwordfile1 set to : " + passwordfile1,self.file_name) + self.log_info_message("passwordkeyfile set to : " + passwordkeyfile,self.file_name) + self.log_info_message("passwordfile2 set to : " + passwordfile2,self.file_name) + if (os.path.isfile(passwordfile1)) and (os.path.isfile(passwordkeyfile)): + msg='''Passwd file {0} and key file {1} exist. Password file Check passed!'''.format(passwordfile1,passwordkeyfile) + self.log_info_message(msg,self.file_name) + msg='''Reading encrypted passwd from file {0}.'''.format(passwordfile1) + self.log_info_message(msg,self.file_name) + cmd=None + if self.check_key("ENCRYPTION_TYPE",self.ora_env_dict): + if self.ora_env_dict["ENCRYPTION_TYPE"].lower() == "aes256": + cmd='''openssl enc -d -aes-256-cbc -in \"{0}/{1}\" -out {2}/{1} -pass file:\"{3}/{4}\"'''.format(secret_volume,passwd_file,pwd_volume,key_secret_volume,key_file) + elif self.ora_env_dict["ENCRYPTION_TYPE"].lower() == "rsautl": + cmd ='''openssl rsautl -decrypt -in \"{0}/{1}\" -out {2}/{1} -inkey \"{3}/{4}\"'''.format(secret_volume,passwd_file,pwd_volume,key_secret_volume,key_file) + else: + pass + else: + cmd ='''openssl pkeyutl -decrypt -in \"{0}/{1}\" -out {2}/{1} -inkey \"{3}/{4}\"'''.format(secret_volume,passwd_file,pwd_volume,key_secret_volume,key_file) + + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + passwd_file_flag = True + password_file='''{0}/{1}'''.format(pwd_volume,passwd_file) + elif os.path.isfile(passwordfile2): + msg='''Passwd file {0} exist. 
Password file Check passed!'''.format(dbpasswd_file) + self.log_info_message(msg,self.file_name) + msg='''Reading encrypted passwd from file {0}.'''.format(dbpasswd_file) + self.log_info_message(msg,self.file_name) + cmd='''openssl base64 -d -in \"{0}\" -out \"{2}/{1}\"'''.format(passwordfile2,dbpasswd_file,pwd_volume) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + passwd_file_flag = True + password_file='''{1}/{0}'''.format(dbpasswd_file,pwd_volume) + + if not passwd_file_flag: + # get random password pf length 8 with letters, digits, and symbols + characters1 = string.ascii_letters + string.digits + "_-%#" + str1 = ''.join(random.choice(string.ascii_uppercase) for i in range(4)) + str2 = ''.join(random.choice(characters1) for i in range(8)) + password=str1+str2 + else: + fname='''{0}'''.format(password_file) + fdata=self.read_file(fname) + password=fdata + self.remove_file(password_file) + + if self.check_key("ORACLE_PWD",self.ora_env_dict): + msg="ORACLE_PWD is passed as an env variable. Check Passed!" + self.log_info_message(msg,self.file_name) + else: + #self.ora_env_dict=self.add_key("ORACLE_PWD",password,self.ora_env_dict) + msg="ORACLE_PWD set to HIDDEN_STRING generated using encrypted password file" + self.log_info_message(msg,self.file_name) + + return password + +######### Get OS Password ############## + def reset_os_password(self,user): + """ + reset the OS password + """ + self.log_info_message('''Resetting OS user {0} password'''.format(user),self.file_name) + #proc = subprocess.Popen(['/usr/bin/passwd', user, '--stdin']) + #proc.communicate(passwd) + ospasswd=self.get_os_password() + self.set_mask_str(ospasswd) + cmd='''usermod --password $(openssl passwd -1 {1}) {0}'''.format(user,'HIDDEN_STRING') + #cmd='''bash -c \"echo -e '{1}\\n{1}' | passwd {0}\"'''.format(user,passwd) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + self.unset_mask_str() + +######### Copy the file to remote machine ############ + def scpfile(self,node,srcfile,destfile,user): + """ + copy file to remot machine + """ + cmd='''su - {0} -c "scp {2} {0}@{1}:{3}"'''.format(user,node,srcfile,destfile) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + +######### Copy file across cluster ######### + def copy_file_cluster(self,srcfile,destfile,user): + """ + copy file on all the machines of the cluster + """ + cluster_nodes=self.get_cluster_nodes() + for node in cluster_nodes.split(" "): + self.scpfile(node,srcfile,destfile,user) + +######### Get the existing Cluster Nodes ############## + def get_existing_clu_nodes(self,eflag): + """ + Checking existing Cluster nodes and returning cluster nodes + """ + cluster_nodes=None + self.log_info_message("Checking existing CRS nodes and returning cluster nodes",self.file_name) + if self.check_key("EXISTING_CLS_NODE",self.ora_env_dict): + return self.ora_env_dict["EXISTING_CLS_NODE"] + else: + if eflag: + self.log_error_message('''Existing CLS nodes are not set. 
Exiting..''',self.file_name) + self.prog_exit("127") + else: + self.log_warn_message('''Existing CLS nodes are not set.''',self.file_name) + return cluster_nodes + + +######### Return the existing Cluster Nodes using oldnodes ############## + def get_existing_cls_nodes(self,hostname,sshnode): + """ + Checking existing Cluster nodes using clsnodes + """ + giuser,gihome,gibase,oinv=self.get_gi_params() + cluster_nodes=None + cmd='''su - {0} -c "ssh {2} '{1}/bin/olsnodes'"'''.format(giuser,gihome,sshnode) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + crs_nodes="" + if not hostname: + hostname="" + + crs_node_list=output.split("\n") + for node in crs_node_list: + if hostname != node: + crs_nodes= crs_nodes + "," + node + + return crs_nodes.strip(",") + + +######### Get the Cluster Nodes ############## + def get_cluster_nodes(self): + """ + Checking Cluster nodes and returning cluster nodes + """ + cluster_nodes=None + self.log_info_message("Checking CRS nodes and returning cluster nodes",self.file_name) + if self.check_key("CRS_NODES",self.ora_env_dict): + cluster_nodes,vip_nodes,priv_nodes=self.process_cluster_vars("CRS_NODES") + else: + cluster_nodes = self.get_public_hostname() + + return cluster_nodes + +####### Get the nwIfaces and network ####### + def get_nwifaces(self): + """ + This function returns the oracle.install.crs.config.networkInterfaceList for prepare responsefile + """ + nwlist="" + nwname="" + nwflag=None + privnwlist="" + ipcidr="" + netmask="" + netmasklist="" + + if self.detect_k8s_env(): + if self.check_key("NW_CIDR",self.ora_env_dict): + ipcidr=self.get_cidr_info(self.ora_env_dict["NW_CIDR"]) + netmask=self.ora_env_dict["NW_CIDR"].split("/")[1] + if ipcidr: + self.log_info_message("Getting network card name for CIDR: " + ipcidr,self.file_name) + nwname=self.get_nw_name(ipcidr) + else: + pubmask,pubsubnet,nwname=self.get_nwlist("public") + ip_address=pubsubnet.split(".") + ipcidr=ip_address[0] + "." + ip_address[1] + ".0.0" + netmask_address=pubmask.split(".") + netmask=netmask_address[0] + "." 
+ netmask_address[1] + ".0.0" + + privnwlist,privnetmasklist=self.get_priv_nwlist() + if nwname: + self.log_info_message("The network card: " + nwname + " for the ip: " + ipcidr,self.file_name) + nwlist='''{0}:{1}:1,{2}'''.format(nwname,ipcidr,privnwlist) + netmasklist='''{0}:{1},{2}'''.format(nwname,netmask,privnetmasklist) + else: + self.log_error_message("Failed to get network card matching for the subnet:" + ipcidr ,self.file_name) + self.prog_exit("127") + elif self.check_key("SINGLE_NETWORK",self.ora_env_dict): + pubmask,pubsubnet,pubnwname=self.get_nwlist("public") + nwlist='''{0}:{1}:1,{0}:{1}:5'''.format(pubnwname,pubsubnet) + else: + if self.check_key("CRS_GPC",self.ora_env_dict): + pubmask,pubsubnet,pubnwname=self.get_nwlist("public") + nwlist='''{0}:{1}:6'''.format(pubnwname,pubsubnet) + else: + pubmask,pubsubnet,pubnwname=self.get_nwlist("public") + privnwlist,privnetmasklist=self.get_priv_nwlist() + nwlist='''{0}:{1}:1,{2}'''.format(pubnwname,pubsubnet,privnwlist) + + + return nwlist,netmasklist + +###### Get the Private nwlist ####################### + def get_priv_nwlist(self): + """ + This function get the private nwlist + """ + privnwlist="" + netmasklist="" + if self.check_key("PRIVATE_HOSTS",self.ora_env_dict): + privmask,privsubnet,privnwname=self.get_nwlist("privatehost") + privnwlist='''{0}:{1}:5'''.format(privnwname,privsubnet) + netmasklist='''{0}:{1}'''.format(privnwname,privmask) + else: + if self.check_key("CRS_PRIVATE_IP1",self.ora_env_dict): + privmask,privsubnet,privnwname=self.get_nwlist("privateip1") + privnwlist='''{0}:{1}:5'''.format(privnwname,privsubnet) + netmasklist='''{0}:{1}'''.format(privnwname,privmask) + if self.check_key("CRS_PRIVATE_IP2",self.ora_env_dict): + privmask,privsubnet,privnwname=self.get_nwlist("privateip2") + privnwlist='''{0},{1}:{2}:5'''.format(privnwlist,privnwname,privsubnet) + netmasklist='''{0},{1}:{2}'''.format(netmasklist,privnwname,privmask) + + return privnwlist,netmasklist + +####### Detect K8s Env ################################ + def detect_k8s_env(self): + """ + This function detect the K8s env and return the True or False + """ + k8s_flag=None + f = open("/proc/self/cgroup", "r") + if "/kubepods" in f.read(): + k8s_flag=True + else: + if self.check_file("/run/secrets/kubernetes.io/serviceaccount/token","local",None,None): + k8s_flag=True + + return k8s_flag +######## Process the nwlist and return netmask,net subnet and ne card name ####### + def get_nwlist(self,checktype): + """ + This function returns the nwlist for prepare responsefile + """ + nwlist=None + nwflag=None + nwname=None + nmask=None + nwsubnet=None + domain=None + ipaddr="" + + if self.check_key("CRS_NODES",self.ora_env_dict): + pub_nodes,vip_nodes,priv_nodes=self.process_cluster_vars("CRS_NODES") + if checktype=="privatehost": + crs_nodes=priv_nodes.replace(" ",",") + nodelist=priv_nodes.split(" ") + domain=self.ora_env_dict["PRIVATE_HOSTS_DOMAIN"] if self.check_key("PRIVATE_HOSTS_DOMAIN",self.ora_env_dict) else self.get_host_domain() + elif checktype=="privateip1": + nodelist=self.ora_env_dict["CRS_PRIVATE_IP1"].split(",") + elif checktype=="privateip2": + nodelist=self.ora_env_dict["CRS_PRIVATE_IP2"].split(",") + else: + crs_nodes=pub_nodes.replace(" ",",") + nodelist=pub_nodes.split(" ") + domain=self.ora_env_dict["PUBLIC_HOSTS_DOMAIN"] if self.check_key("PUBLIC_HOSTS_DOMAIN",self.ora_env_dict) else self.get_host_domain() + print(nodelist) + for pubnode in nodelist: + self.log_info_message("Getting IP for the hostname: " + pubnode,self.file_name) + 
if checktype=="privateip1": + ipaddr=pubnode + elif checktype=="privateip2": + ipaddr=pubnode + else: + ipaddr=self.get_ip(pubnode,domain) + + if ipaddr: + self.log_info_message("Getting network name for the IP: " + ipaddr,self.file_name) + nwname=self.get_nw_name(ipaddr) + if nwname: + self.log_info_message("The network card: " + nwname + " for the ip: " + ipaddr,self.file_name) + nmask=self.get_netmask_info(nwname) + nwsubnet=self.get_subnet_info(ipaddr,nmask) + nwflag=True + break + else: + self.log_error_message("Failed to get the IP addr for public hostname: " + pubnode + ".Exiting..",self.file_name) + self.prog_exit("127") + + if nmask and nwsubnet and nwname and nwflag: + return nmask,nwsubnet,nwname + else: + self.log_error_message("Failed to get the required details. Exiting...",self.file_name) + self.prog_exit("127") + +######## Get the CRS Nodes ################## + def get_crsnodes(self): + """ + This function returns the oracle.install.crs.config.clusterNodes for prepare responsefile + """ + cluster_nodes="" + pub_nodes,vip_nodes,priv_nodes=self.process_cluster_vars("CRS_NODES") + if not self.check_key("CRS_GPC",self.ora_env_dict): + for (pubnode,vipnode) in zip(pub_nodes.split(" "),vip_nodes.split(" ")): + cluster_nodes += pubnode + ":" + vipnode + ":HUB" + "," + else: + cluster_nodes=self.get_public_hostname() + + return cluster_nodes.strip(',') + +######## Process host variables ############## + def process_cluster_vars(self,key): + """ + This function process CRS_NODES and return public hosts, or VIP hosts or Priv Hosts or cluser string + """ + pubhost=" " + viphost=" " + privhost=" " + self.log_info_message("Inside process_cluster_vars()",self.file_name) + if self.check_key("CRS_GPC",self.ora_env_dict): + return self.get_public_hostname(),None,None + else: + cvar_str=self.ora_env_dict[key] + for item in cvar_str.split(";"): + self.log_info_message("Cluster Node Desc: " + item ,self.file_name) + cvar_dict=dict(item1.split(":") for item1 in item.split(",")) + for ckey in cvar_dict.keys(): + # self.log_info_message("key:" + ckey ,self.file_name) + # self.log_info_message("Value:" + cvar_dict[ckey] ,self.file_name) + if ckey.replace('"','') == 'pubhost': + pubhost += cvar_dict[ckey].replace('"','') + " " + if ckey.replace('"','') == 'viphost': + viphost += cvar_dict[ckey].replace('"','') + " " + if ckey.replace('"','') == 'privhost': + privhost += cvar_dict[ckey].replace('"','') + " " + self.log_info_message("Pubhosts:" + pubhost.strip() + " Pubhost count:" + str(len(pubhost.strip().split(" "))),self.file_name) + self.log_info_message("Viphosts:" + viphost.strip() + "Viphost count:" + str(len(viphost.strip().split(" "))),self.file_name) + if len(pubhost.strip().split(" ")) == len(viphost.strip().split(" ")): + return pubhost.strip(),viphost.strip(),privhost.strip() + else: + self.log_error_message("Public hostname count is not matching:/Public hostname count is not matching with virtual hostname count.Exiting...",self.file_name) + self.prog_exit("127") + + +######### Get the Public Hostname############## + def get_public_hostname(self): + """ + Return Public Hostname + """ + return socket.gethostname() + + ######### Get the DOMAIN############## + def get_host_domain(self): + """ + Return Public Hostname + """ + domain=None + domain=self.extract_domain() + return domain + ######### extract domain ################# + def extract_domain(self): + domain=None + fqdn = subprocess.check_output(['hostname', '-f']).decode().strip() + self.log_info_message('''Fully Qualified Domain 
Name (FQDN): {0} '''.format(fqdn),self.file_name) + + parts = fqdn.split('.', 1) + if len(parts) < 2: + self.log_error_message("Error: FQDN does not contain a domain name.",self.file_name) + else: + domain = parts[1] + self.log_info_message('''Extracted Domain: {0} '''.format(domain),self.file_name) + return domain + ######### get the public IP ############## + def get_ip(self,hostname,domain): + """ + Return the Ip based on hostname + """ + if not domain: + domain=self.get_host_domain() + return socket.gethostbyname(hostname + '.' + domain) + +######### Get network card ############## + def get_nw_name(self,ip): + """ + Get the network card name based on IP + """ + self.log_info_message('''Getting network card name based on IP: {0} '''.format(ip),self.file_name) + cmd='''ifconfig | awk '/{0}/ {{ print $1 }}' RS="\n\n" | awk -F ":" '{{ print $1 }}' | head -1'''.format(ip) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + return output.strip() + +######### Get the netmask info ################ + def get_netmask_info(self,nwcard): + """ + Get the network mask + """ + self.log_info_message('''Getting netmask'''.format(nwcard),self.file_name) + cmd="""ifconfig {0} | awk '/netmask/ {{print $4}}'""".format(nwcard) + output,error,retcode=self.execute_cmd(cmd,None,None) + return output.strip() + +######### Get network subnet info ############## + def get_subnet_info(self,ip,netmask): + """ + Get the network card name based on IP + """ + self.log_info_message('''Getting network subnet info name based on IP {0} and netmask {1}'''.format(ip,netmask),self.file_name) + cmd="""ipcalc -np {0} {1} | grep NETWORK | awk -F '=' '{{ print $2 }}'""".format(ip,netmask) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + return output.strip() + +######### Get CIDR portion info ############## + def get_cidr_info(self,cidr): + """ + Get the non zero portion of the CIDR + """ + self.log_info_message('''Checking if network card exist with matching network details {0}'''.format(cidr),self.file_name) + iplist=cidr.split(".") + ipcidr="" + for ipo in iplist: + if ipo.startswith('0'): + break + else: + ipcidr += ipo + "." 
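+ # Example (assumed input, for illustration only): with NW_CIDR set to
+ # "192.168.17.0/24", iplist is ['192','168','17','0/24']; the loop above stops at
+ # the first octet starting with '0', leaving ipcidr as "192.168.17." which the
+ # strip() calls below reduce to "192.168.17".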
+ + str1=ipcidr.strip() + ipcidr=str1.strip(".") + return ipcidr + +######## Build the ASM device list ######### + def build_asm_device(self,key,reduntype): + """ + Build the ASM device list + """ + self.log_info_message('''Building ASM device list''',self.file_name) + ASM_DISKGROUP_FG_DISKS="" + ASM_DISKGROUP_DISKS="" + asmdevlist=self.ora_env_dict[key].split(",") + for disk1 in asmdevlist: + disk=disk1.strip('"') + if self.check_key("ASM_DISK_CLEANUP_FLAG",self.ora_env_dict): + if self.ora_env_dict["ASM_DISK_CLEANUP_FLAG"] == "TRUE": + self.asm_disk_cleanup(disk) + if reduntype == 'NORMAL': + ASM_DISKGROUP_FG_DISKS+=disk + ",," + ASM_DISKGROUP_DISKS+=disk + "," + elif reduntype == 'HIGH': + ASM_DISKGROUP_FG_DISKS+=disk + ",," + ASM_DISKGROUP_DISKS+=disk + "," + else: + ASM_DISKGROUP_FG_DISKS+=disk + "," + ASM_DISKGROUP_DISKS+=disk + "," + + if reduntype != 'NORMAL' and reduntype != 'HIGH': + fdata=ASM_DISKGROUP_DISKS[:-1] + ASM_DISKGROUP_DISKS=fdata + + return ASM_DISKGROUP_FG_DISKS,ASM_DISKGROUP_DISKS + +######## Build the ASM device list ######### + def build_asm_discovery_str(self,key): + """ + Build the ASM device list + """ + asm_disk=None + asmdisk=self.ora_env_dict[key].split(",")[0] + asm_disk_dir=asmdisk.rsplit("/",1)[0] + asm_disk1=asmdisk.rsplit("/",1)[1] + if len(asm_disk1) <= 3: + asm_disk=asmdisk.rsplit("/",1)[1][:(len(asm_disk1)-1)] + else: + asm_disk=asmdisk.rsplit("/",1)[1][:(len(asm_disk1)-2)] + + disc_str=asm_disk_dir + '/' + asm_disk + '*' + return disc_str + +######## set the ASM device permission ############### + def set_asmdisk_perm(self,key,eflag): + """ + This function set the correct permissions for ASM Disks + """ + if self.check_key(key,self.ora_env_dict): + self.log_info_message (key + " variable is set",self.file_name) + for device1 in self.ora_env_dict[key].split(','): + device=device1.strip('"') + if self.disk_exists(device): + msg='''Changing device permission {0}'''.format(device) + self.log_info_message(msg,self.file_name) + oraversion=self.get_rsp_version("INSTALL",None) + version = oraversion.split(".", 1)[0].strip() + self.log_info_message("disk" + version, self.file_name) + + if int(version) == 19 or int(version) == 21: + cmd = '''chmod 660 {0};chown grid:asmadmin {0}'''.format(device) + else: + cmd = '''chmod 660 {0};chown grid:asmdba {0}'''.format(device) + + self.log_info_message("Executing command:" + cmd , self.file_name) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + else: + self.log_error_message('''ASM device {0} is passed but disk doesn't exist. Exiting..'''.format(device),self.file_name) + self.prog_exit("None") + else: + if eflag: + self.log_error_message(key + " is not passed. 
Exiting....",self.file_name) + self.prog_exit("None") + +######## CLeanup the disks ############### + def asm_disk_cleanup(self,disk): + """ + This function cleanup the ASM Disks + """ + cmd='''dd if=/dev/zero of={0} bs=8k count=10000 '''.format(disk) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + + +######## Get the GI Image ############### + def get_gi_params(self): + """ + This function return the GI home + """ + gihome=self.ora_env_dict["GRID_HOME"] + gibase=self.ora_env_dict["GRID_BASE"] + giuser=self.ora_env_dict["GRID_USER"] + oinv=self.ora_env_dict["INVENTORY"] + + return giuser,gihome,gibase,oinv + +######## Get the TMPDIR ################ + def get_tmpdir(self): + """ + This function returns the TMPDIR + Returns: + tmpdir: return tmpdir + """ + return self.ora_env_dict["TMPDIR"] if self.check_key("TMPDIR",self.ora_env_dict) else "/var/tmp" + +######## Get the DB Image ############### + def get_db_params(self): + """ + This function return the DB home + """ + dbhome=self.ora_env_dict["DB_HOME"] + dbbase=self.ora_env_dict["DB_BASE"] + dbuser=self.ora_env_dict["DB_USER"] + oinv=self.ora_env_dict["INVENTORY"] + + return dbuser,dbhome,dbbase,oinv + +######## Get the cmd ############### + def get_sw_cmd(self,key,rspfile,node,netmasklist): + """ + This function return the installation cmd + """ + cmd="" + copyflag="" + pwdparam='''oracle.install.asm.SYSASMPassword={0} oracle.install.asm.monitorPassword={0}'''.format("HIDDEN_STRING") + + if self.check_key("COPY_GRID_SOFTWARE",self.ora_env_dict): + copyflag=" -noCopy " + + prereq=" " + if self.check_key("IGNORE_CRS_PREREQS",self.ora_env_dict): + prereq=" -ignorePreReq " + + giuser,gihome,gbase,oinv=self.get_gi_params() + snic="-J-Doracle.install.crs.allowSingleNIC=true" if self.check_key("SINGLENIC",self.ora_env_dict) else "" + runCmd="" + if key == "INSTALL": + if self.check_key("APPLY_RU_LOCATION",self.ora_env_dict): + self.opatch_apply() + ruLoc=self.ora_env_dict["APPLY_RU_LOCATION"] + runCmd='''gridSetup.sh -applyRU "{0}"'''.format(self.ora_env_dict["APPLY_RU_LOCATION"]) + else: + runCmd='''gridSetup.sh ''' + + + if self.check_key("DEBUG_MODE",self.ora_env_dict): + dbgCmd='''{0} -debug '''.format(runCmd) + runCmd=dbgCmd + + self.log_info_message("runCmd set to : {0}".format(runCmd),self.file_name) + if self.detect_k8s_env(): + #param1="-skipPrereqs -J-Doracle.install.grid.validate.all=false oracle.install.crs.config.netmaskList=eth0:255.255.0.0,eth0:255.255.0.0" + if netmasklist is not None: + param1='''oracle.install.crs.config.netmaskList={0}'''.format(netmasklist) + else: + param1='''oracle.install.crs.config.netmaskList=eth0:255.255.0.0,eth1:255.255.255.0,eth2:255.255.255.0'''.format(netmasklist) + + cmd='''su - {0} -c "{1}/{6} -waitforcompletion {4} -silent {3} -responseFile {2} {5} {7}"'''.format(giuser,gihome,rspfile,snic,copyflag,param1,runCmd,pwdparam) + else: + if self.check_key("APPLY_RU_LOCATION",self.ora_env_dict): + cmd='''su - {0} -c "{1}/{5} -waitforcompletion {4} -silent {6} {3} -responseFile {2} {7}"'''.format(giuser,gihome,rspfile,snic,copyflag,runCmd,prereq,pwdparam) + else: + cmd='''su - {0} -c "{1}/{5} -waitforcompletion {4} -silent {6} {3} -responseFile {2} {7}"'''.format(giuser,gihome,rspfile,snic,copyflag,runCmd,prereq,pwdparam) + elif key == 'ADDNODE': + status=self.check_home_inv(None,gihome,giuser) + if status: + copyflag=" -noCopy " + cmd='''su - {0} -c "ssh {1} '{2}/gridSetup.sh -silent -waitForCompletion {3} {5} -responseFile 
{4}'"'''.format(giuser,node,gihome,copyflag,rspfile,prereq) + else: + copyflag=" " + cmd='''su - {0} -c "ssh {1} '{2}/gridSetup.sh -silent -waitForCompletion {3} {5} -responseFile {4} '"'''.format(giuser,node,gihome,copyflag,rspfile,prereq) + else: + pass + return cmd + +########## Installing Grid Software on Individual nodes + def crs_sw_install_on_node(self,giuser,copyflag,crs_nodes,oinv,gihome,gibase,osdba,osoper,osasm,version,node): + """ + This function install crs sw on every node and register with oraInventory + """ + cmd=None + prereq=" " + if self.check_key("IGNORE_CRS_PREREQS",self.ora_env_dict): + prereq=" -ignorePreReq " + if int(version) < 23: + rspdata='''su - {0} -c "ssh {10} {1}/gridSetup.sh {11} -waitforcompletion {2} -silent + oracle.install.option=CRS_SWONLY + INVENTORY_LOCATION={4} + ORACLE_HOME={5} + ORACLE_BASE={6} + oracle.install.asm.OSDBA={7} + oracle.install.asm.OSOPER={8} + oracle.install.asm.OSASM={9}"'''.format(giuser,gihome,copyflag,crs_nodes,oinv,gihome,gibase,osdba,osoper,osasm,node,prereq) + + cmd=rspdata.replace('\n'," ") + else: + cmd='''su - {0} -c "ssh {10} '{1}/gridSetup.sh -silent -setupHome -OSDBA {7} -OSOPER {8} -OSASM {9} -ORACLE_BASE {6} -INVENTORY_LOCATION {4} -clusterNodes {10} {2}\'"'''.format(giuser,gihome,copyflag,crs_nodes,oinv,gihome,gibase,osdba,osoper,osasm,node) + + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + self.check_crs_sw_install(output) + + def opatch_apply(self): + """This function apply opatch before apply RU + """ + giuser,gihome,gbase,oinv=self.get_gi_params() + today=datetime.date.today() + if self.check_key("OPATCH_ZIP_FILE",self.ora_env_dict): + cmd1='''su - {2} -c "mv {0}/OPatch {0}/OPatch_{1}_old"'''.format(gihome,today,giuser) + cmd2='''su - {2} -c "unzip {0} -d {1}/"'''.format(self.ora_env_dict["OPATCH_ZIP_FILE"],gihome,giuser) + for cmd in cmd1,cmd2: + output,error,retcode=self.execute_cmd(cmd,None,True) + self.check_os_err(output,error,retcode,True) + + def check_crs_sw_install(self,swdata): + """ + This function check the if the sw install went fine + """ + if not self.check_substr_match(swdata,"root.sh"): + self.log_error_message("Grid software install failed. 
Exiting...",self.file_name) + self.prog_exit("127") + + def run_orainstsh_local(self,giuser,node,oinv): + """ + This function run the orainst after grid setup + """ + cmd='''su - {0} -c "sudo {2}/orainstRoot.sh"'''.format(giuser,node,oinv) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + + def run_rootsh_local(self,gihome,giuser,node): + """ + This function run the root.sh after grid setup + """ + self.log_info_message("Running root.sh on node " + node,self.file_name) + cmd='''su - {0} -c "sudo {2}/root.sh"'''.format(giuser,node,gihome) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + +######## Get the oraversion ############### + def get_rsp_version(self,key,node): + """ + This function return the oraVersion + """ + cmd="" + giuser,gihome,gbase,oinv=self.get_gi_params() + if key == "INSTALL": + cmd='''su - {0} -c "{1}/bin/oraversion -majorVersion"'''.format(giuser,gihome) + elif key == 'ADDNODE': + cmd='''su - {0} -c "ssh {2} {1}/bin/oraversion -majorVersion"'''.format(giuser,gihome,node) + else: + pass + + vdata="" + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + if output.strip() == "12.2": + vdata="12.2.0" + elif output.strip() == "21": + vdata = "21.0.0" + elif output.strip() == "23": + vdata = "23.0.0" + elif output.strip() == "26": + vdata = "26.0.0" + elif output.strip() == "19": + vdata = "19.0.0" + elif output.strip() == "18": + vdata = "18.0.0" + else: + self.log_error_message("The SW major version is not matching {12.2|18.3|19.3|21.3|23|26}. Exiting....",self.file_name) + self.prog_exit("None") + + return vdata + +######### Check if GI is already installed on this machine ########### + def check_gi_installed(self,retcode1,gihome,giuser,node,oinv): + """ + Check if the Gi is installed on this machine + """ + if retcode1 == 0: + if os.path.isdir("/etc/oracle"): + bstr="Grid is already installed on this machine and /etc/oracle also exist. Skipping Grid setup.." + self.log_info_message(self.print_banner(bstr),self.file_name) + return True + else: + dir = os.listdir(gihome) + if len(dir) != 0: + status=self.check_home_inv(None,gihome,giuser) + if status: + status=self.restore_gi_files(gihome,giuser) + if not status: + return False + else: + self.run_orainstsh_local(giuser,node,oinv) + status=self.start_crs(gihome,giuser) + if status: + return True + else: + return False + else: + bstr="Grid is not configured on this machine and /etc/oracle does not exist." + self.log_info_message(self.print_banner(bstr),self.file_name) + return False + else: + self.log_info_message("Grid is not installed on this machine. 
Proceeding further...",self.file_name) + return False + +######## Restoring GI FIles ####################### + def restore_gi_files(self,gihome,giuser): + """ + Restoring GI Files + """ + cmd='''{1}/crs/install/rootcrs.sh -updateosfiles'''.format(giuser,gihome) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + if retcode == 0: + return True + else: + return False + +###### Starting Crs ############### + def start_crs(self,gihome,giuser): + """ + starting CRS + """ + cmd='''{1}/bin/crsctl start crs'''.format(giuser,gihome) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + if retcode == 0: + return True + else: + return False + +######### Check if GI is already installed on this machine ########### + def check_rac_installed(self,retcode1): + """ + Check if the RAC is installed on this machine + """ + if retcode1 == 0: + bstr="RAC HOME is already installed on this machine!" + self.log_info_message(self.print_banner(bstr),self.file_name) + return True + else: + self.log_info_message("Oracle RAC home is not installed on this machine. Proceeding further...",self.file_name) + return False + + +######## Print the banner ############### + def print_banner(self,btext): + """ + print the banner + """ + strlen=len(btext) + sep='=' + sepchar=sep * strlen + banner_text=''' + {0} + {1} + {0} + '''.format(sepchar,btext) + return banner_text + +######### Sqlplus connect string ########### + def get_sqlplus_str(self,home,osid,osuser,dbuser,password,hostname,port,svc,osep,role,wallet): + """ + return the sqlplus connect string + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(home) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(home) + export_cmd='''export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2};export ORACLE_SID={3}'''.format(home,path,ldpath,osid) + if dbuser == 'sys' and password and hostname and port and svc: + return '''su - {7} -c "{5};{6}/bin/sqlplus -S {0}/{1}@//{2}:{3}/{4} as sysdba"'''.format(dbuser,password,hostname,port,svc,export_cmd,home,osuser) + elif dbuser != 'sys' and password and hostname and svc: + return '''su - {7} -c "{5};{6}/bin/sqlplus -S {0}/{1}@//{2}:{3}/{4}"'''.format(dbuser,password,hostname,"1521",svc,export_cmd,home,osuser) + elif dbuser and osep: + return dbuser + elif dbuser == 'sys' and not password: + return '''su - {2} -c "{1};{0}/bin/sqlplus -S '/ as sysdba'"'''.format(home,export_cmd,osuser) + elif dbuser == 'sys' and password: + return '''su - {4} -c "{1};{0}/bin/sqlplus -S {2}/{3} as sysdba"'''.format(home,export_cmd,dbuser,password,osuser) + elif dbuser != 'sys' and password: + return '''su - {4} -c "{1};{0}/bin/sqlplus -S {2}/{3}"'''.format(home,export_cmd,dbuser,password,osuser) + else: + self.log_info_message("Atleast specify db user and password for db connectivity. 
Exiting...",self.file_name)
+ self.prog_exit("127")
+
+######### RMAN connect string ###########
+ def get_rman_str(self,home,osid,osuser,dbuser,password,hostname,port,svc,osep,role,wallet):
+ """
+ Return the RMAN connect string
+ """
+ path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(home)
+ ldpath='''{0}/lib:/lib:/usr/lib'''.format(home)
+ export_cmd='''export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2};export ORACLE_SID={3}'''.format(home,path,ldpath,osid)
+ if dbuser == 'sys' and password and hostname and port and svc:
+ return '''su - {7} -c "{5};{6}/bin/rman {0}/{1}@//{2}:{3}/{4}"'''.format(dbuser,password,hostname,port,svc,export_cmd,home,osuser)
+ elif dbuser != 'sys' and password and hostname and svc:
+ return '''su - {7} -c "{5};{6}/bin/rman {0}/{1}@//{2}:{3}/{4}"'''.format(dbuser,password,hostname,"1521",svc,export_cmd,home,osuser)
+ elif dbuser == 'sys' and not password:
+ return '''su - {2} -c "{1};{0}/bin/rman target /"'''.format(home,export_cmd,osuser)
+ elif dbuser == 'sys' and password:
+ return '''su - {4} -c "{1};{0}/bin/rman target {2}/{3}"'''.format(home,export_cmd,dbuser,password,osuser)
+ elif dbuser != 'sys' and password:
+ return '''su - {4} -c "{1};{0}/bin/rman target {2}/{3}"'''.format(home,export_cmd,dbuser,password,osuser)
+ else:
+ self.log_info_message("At least specify db user and password for db connectivity. Exiting...",self.file_name)
+ self.prog_exit("127")
+
+######### dgmgrl connect string ###########
+ def get_dgmgr_str(self,home,osid,osuser,dbuser,password,hostname,port,svc,osep,role,wallet):
+ """
+ Return the dgmgrl connect string
+ """
+ path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(home)
+ ldpath='''{0}/lib:/lib:/usr/lib'''.format(home)
+ if role is None:
+ role='sysdg'
+
+ export_cmd='''export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2};export ORACLE_SID={3}'''.format(home,path,ldpath,osid)
+ if dbuser == 'sys' and password and hostname and port and svc:
+ return '''su - {7} -c "{5};{6}/bin/dgmgrl {0}/{1}@//{2}:{3}/{4} as {8}"'''.format(dbuser,password,hostname,port,svc,export_cmd,home,osuser,role)
+ elif dbuser != 'sys' and password and hostname and svc:
+ return '''su - {7} -c "{5};{6}/bin/dgmgrl {0}/{1}@//{2}:{3}/{4} as {8}"'''.format(dbuser,password,hostname,"1521",svc,export_cmd,home,osuser,role)
+ elif dbuser and osep:
+ return dbuser
+ elif dbuser == 'sys' and not password:
+ return '''su - {2} -c "{1};{0}/bin/dgmgrl /"'''.format(home,export_cmd,osuser)
+ elif dbuser == 'sys' and password:
+ return '''su - {4} -c "{1};{0}/bin/dgmgrl {2}/{3} as {5}"'''.format(home,export_cmd,dbuser,password,osuser,role)
+ elif dbuser != 'sys' and password:
+ return '''su - {4} -c "{1};{0}/bin/dgmgrl {2}/{3}"'''.format(home,export_cmd,dbuser,password,osuser)
+ else:
+ self.log_info_message("At least specify db user and password for db connectivity. 
Exiting...",self.file_name) + self.prog_exit("127") + +######## function to get tnssvc str ###### + def get_tnssvc_str(self,dbsvc,dbport,dbscan): + """ + return tnssvc + """ + tnssvc='''(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = {0})(PORT = {1})) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = {2})))'''.format(dbscan,dbport,dbsvc) + return tnssvc + +######### Sqlplus ########### + def get_inst_sid(self,dbuser,dbhome,osid,hostname): + """ + return the sid + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + cmd='''su - {5} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl status database -d {3} | grep {4}"'''.format(dbhome,path,ldpath,osid,hostname,dbuser) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + if len(output.split(" ")) > 1: + inst_sid=output.split(" ")[1] + return inst_sid + else: + return None + +######### Stop RAC DB ######## + def stop_rac_db(self,dbuser,dbhome,osid,hostname): + """ + stop the Database + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + cmd='''su - {5} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl stop database -d {3}"'''.format(dbhome,path,ldpath,osid,hostname,dbuser) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + +######### Stop RAC DB ######## + def get_host_dbsid(self,hname,connect_str): + """ + get the host sid based on hostname + """ + if hname is None: + cmd='''select instance_name from gv$instance;''' + else: + cmd="""select instance_name from gv$instance where HOST_NAME='{0}';""".format(hname) + sqlcmd=''' + set heading off; + set pagesize 0; + {0} + exit; + '''.format(cmd) + self.set_mask_str(self.get_sys_passwd()) + output,error,retcode=self.run_sqlplus(connect_str,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,None) + self.unset_mask_str() + return output.strip() + + +######### Get SVC Domain ######## + def get_svc_domain(self,hname): + """ + get the host domain baded on service name + """ + svc_dom=None + cmd='''nslookup {0}'''.format(hname) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + for line in output.split('\n'): + if "Name:" in line: + svc_dom=line.split(':')[1].strip() + return svc_dom + +######### Stop RAC DB ######## + def start_rac_db(self,dbuser,dbhome,osid,node=None,startoption=None): + """ + Start the Database + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + + if node is None: + nodename="" + else: + nodename=node + + if startoption is None: + startflag="" + else: + startflag=''' -o {0}'''.format(startoption) + + cmd='''su - {5} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl start database -d {3} {6}"'''.format(dbhome,path,ldpath,osid,nodename,dbuser,startflag) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + +######### DB-Status ########### + def get_db_status(self,dbuser,dbhome,osid): + """ + return the status of the database + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + 
ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + + cmd='''su - {4} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl status database -d {3}"'''.format(dbhome,path,ldpath,osid,dbuser) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + + def get_dbinst_status(self,dbuser,dbhome,inst_sid,sqlpluslogincmd): + """ + return the status of the local dbinstance + """ + sqlcmd=''' + set heading off; + set pagesize 0; + select status from v$instance; + exit; + ''' + self.set_mask_str(self.get_sys_passwd()) + output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,None) + self.unset_mask_str() + return output + +##### DB-Config ###### + def get_db_config(self,dbuser,dbhome,osid): + """ + return the db-config + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + + cmd='''su - {4} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl config database -d {3}"'''.format(dbhome,path,ldpath,osid,dbuser) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + +##### Get service name ##### + def get_service_name(self): + """ + This function get the service_name. + """ + self.log_info_message("Inside get_service_name()",self.file_name) + service_name=None + osid=None + opdb=None + sparams=None + + reg_exp= self.service_regex() + for key in self.ora_env_dict.keys(): + if(reg_exp.match(key)): + rac_service_exist=None + service_name,osid,opdb,uniformflag,sparams=self.process_service_vars(key,None) + + return service_name,osid,opdb,sparams + +##### Setup DB Service ###### + def setup_db_service(self,type): + """ + This function setup the Oracle RAC database service. 
+ """ + self.log_info_message("Inside setup_db_service()",self.file_name) + status=False + service_name=None + reg_exp= self.service_regex() + for key in self.ora_env_dict.keys(): + if(reg_exp.match(key)): + rac_service_exist=None + service_name,osid,opdb,uniformflag,sparams=self.process_service_vars(key,type) + rac_service_exist=self.check_db_service_exist(service_name,osid) + if not rac_service_exist: + if type.lower() == "create": + self.create_db_service(service_name,osid,opdb,sparams) + else: + if type.lower() == "modify" and uniformflag is not True: + self.modify_db_service(service_name,osid,opdb,sparams) + else: + pass + rac_service_exist=self.check_db_service_exist(service_name,osid) + if rac_service_exist: + msg='''RAC db service exist''' + else: + msg='''RAC db service does not exist or creation failed''' + +##### Process DB Service ###### + def process_service_vars(self,key,type): + """ + This function process the service parameters for RAC service creation + """ + service=None + preferred=None + available=None + cardinality=None + tafpolicy=None + role=None + policy=None + resetstate=None + failovertype=None + failoverdelay=None + failoverretry=None + failover_restore=None + failback=None + pdb=None + clbgoal=None + rlbgoal=None + dtp=None + notification=None + commit_outcome=None + commit_outcome_fastpath=None + replay_init_time=None + session_state=None + drain_timeout=None + db=None + sparam="" + uniformflag=None + + if type is None: + type="create" + + self.log_info_message("Inside process_service_vars()",self.file_name) + cvar_str=self.ora_env_dict[key] + cvar_dict=dict(item.split(":") for item in cvar_str.split(";")) + for ckey in cvar_dict.keys(): + if type.lower() == 'modify': + if ckey == 'service': + service = cvar_dict[ckey] + sparam=sparam + " -service " + service + if ckey == 'preferred': + preferred = cvar_dict[ckey] + sparam=sparam +" -modifyconfig -preferred " + preferred + if ckey == 'available': + available = cvar_dict[ckey] + sparam=sparam +" -available " + available + else: + if ckey == 'service': + service = cvar_dict[ckey] + sparam=sparam + " -service " + service + if ckey == 'role': + role = cvar_dict[ckey] + sparam=sparam +" -role " + role + if ckey == 'preferred': + preferred = cvar_dict[ckey] + sparam=sparam +" -preferred " + preferred + if ckey == 'available': + available = cvar_dict[ckey] + sparam=sparam +" -available " + available + if ckey == 'cardinality': + cardinality = cvar_dict[ckey] + sparam=sparam +" -cardinality " + cardinality + uniformflag=True + if ckey == 'policy': + policy = cvar_dict[ckey] + sparam=sparam +" -policy " + policy + if ckey == 'tafpolicy': + tafpolicy = cvar_dict[ckey] + sparam=sparam +" -tafpolicy " + tafpolicy + if ckey == 'resetstate': + resetstate = cvar_dict[ckey] + sparam=sparam +" -resetstate " + resetstate + if ckey == 'failovertype': + failovertype = cvar_dict[ckey] + sparam=sparam +" -failovertype " + failovertype + if ckey == 'failoverdelay': + failoverdelay = cvar_dict[ckey] + sparam=sparam +" -failoverdelay " + failoverdelay + if ckey == 'failoverretry': + failoverretry = cvar_dict[ckey] + sparam=sparam +" -failoverretry " + failoverretry + if ckey == 'failback': + failback = cvar_dict[ckey] + sparam=sparam +" -failback " + failback + if ckey == 'failover_restore': + failover_restore = cvar_dict[ckey] + sparam=sparam +" -failover_restore " + failover_restore + if ckey == 'pdb': + pdb = cvar_dict[ckey] + if ckey == 'clbgoal': + clbgoal = cvar_dict[ckey] + sparam=sparam +" -clbgoal " + clbgoal + if ckey == 
'rlbgoal': + rlbgoal = cvar_dict[ckey] + sparam=sparam +" -rlbgoal " + rlbgoal + if ckey == 'dtp': + dtp = cvar_dict[ckey] + sparam=sparam +" -dtp " + dtp + if ckey == 'notification': + notification = cvar_dict[ckey] + sparam=sparam +" -notification " + notification + if ckey == 'commit_outcome': + commit_outcome = cvar_dict[ckey] + sparam=sparam +" -commit_outcome " +commit_outcome + if ckey == 'commit_outcome_fastpath': + commit_outcome_fastpath = cvar_dict[ckey] + sparam=sparam +" -commit_outcome_fastpath " + commit_outcome_fastpath + if ckey == 'replay_init_time': + replay_init_time = cvar_dict[ckey] + sparam=sparam +" -replay_init_time " + replay_init_time + if ckey == 'session_state': + session_state = cvar_dict[ckey] + sparam=sparam +" -session_state " + session_state + if ckey == 'drain_timeout': + drain_timeout = cvar_dict[ckey] + sparam=sparam +" -drain_timeout " + drain_timeout + if ckey == 'db': + db = cvar_dict[ckey] + sparam=sparam +" -db " + db + + ### Check values must be set + if uniformflag is not True: + if pdb is None: + pdb = self.ora_env_dict["ORACLE_PDB_NAME"] if self.check_key("ORACLE_PDB_NAME",self.ora_env_dict) else "ORCLPDB" + sparam=sparam +" -pdb " + pdb + else: + sparam=sparam +" -pdb " + pdb + else: + pdb = self.ora_env_dict["ORACLE_PDB_NAME"] if self.check_key("ORACLE_PDB_NAME",self.ora_env_dict) else "ORCLPDB" + + if preferred is None: + osuser,dbhome,dbbase,oinv=self.get_db_params() + dbname,osid,dbuname=self.getdbnameinfo() + hostname = self.get_public_hostname() + inst_sid=self.get_inst_sid(osuser,dbhome,osid,hostname) + connect_str=self.get_sqlplus_str(dbhome,inst_sid,osuser,"sys",None,None,None,None,None,None,None) + dbsid=self.get_host_dbsid(None,connect_str) + preferred=",".join(dbsid.splitlines()) + if type.lower() == 'modify': + sparam=sparam +" -modifyconfig -preferred " + preferred + else: + sparam=sparam +" -preferred " + preferred + + if db is None: + db=self.ora_env_dict["DB_NAME"] if self.check_key("DB_NAME",self.ora_env_dict) else "ORCLCDB" + sparam=sparam +" -db " + db + + if service and db and pdb: + return service,db,pdb,uniformflag,sparam + else: + msg1='''service={0},pdb={1},db={2}'''.format((service or "Missing Value"),(pdb or "Missing Value"),(db or "Missing Value")) + msg='''RAC service params {0} is not set correctly. 
One or more value is missing {1}'''.format(key,msg1) + self.log_error_message(msg,self.file_name) + self.prog_exit("Error occurred") + +#### Process Service Regex #### + def service_regex(self): + """ + This function return the rgex to search the DB_SERVICE + """ + self.log_info_message("Inside service_regex()",self.file_name) + return re.compile('DB_SERVICE') + +##### craete DB service ###### + def create_db_service(self,service_name,osid,opdb,sparams): + """ + create database service + """ + dbuser,dbhome,dbase,oinv=self.get_db_params() + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + cmd='''su - {4} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl add service {5}"'''.format(dbhome,path,ldpath,osid,dbuser,sparams,opdb,service_name) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + +##### craete DB service ###### + def modify_db_service(self,service_name,osid,opdb,sparams): + """ + modify database service + """ + dbuser,dbhome,dbase,oinv=self.get_db_params() + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + cmd='''su - {4} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl modify service {5}"'''.format(dbhome,path,ldpath,osid,dbuser,sparams,opdb,service_name) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + +##### check Db service ###### + def check_db_service_exist(self,service_name,osid): + """ + check if db service exist + """ + dbuser,dbhome,dbase,oinv=self.get_db_params() + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + cmd='''su - {4} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl status service -db {3} -s {5}"'''.format(dbhome,path,ldpath,osid,dbuser,service_name) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + msg='''PRKO-2017'''.format(service_name,osid) + if self.check_substr_match(output.lower(),msg.lower()): + return False + else: + return True + +##### check service ###### + def check_db_service_status(self,service_name,osid): + """ + check if db service is running + """ + dbuser,dbhome,dbase,oinv=self.get_db_params() + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + cmd='''su - {4} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl status service -db {3} -s {5}"'''.format(dbhome,path,ldpath,osid,dbuser,service_name) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + msg='''Service {0} is running on'''.format(service_name) + if self.check_substr_match(output.lower(),msg.lower()): + return True,output.lower() + else: + return False,output.lower() + +##### check service ###### + def start_db_service(self,service_name,osid): + """ + start the DB service + """ + dbuser,dbhome,dbase,oinv=self.get_db_params() + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + cmd='''su - {4} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl start service -db {3} -s {5}"'''.format(dbhome,path,ldpath,osid,dbuser,service_name) + 
output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + +######### Add RAC DB ######## + def add_rac_db(self,dbuser,dbhome,osid,spfile): + """ + add the Database + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + cmd='''su - {5} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl add database -d {3} -oraclehome {0} -dbtype RAC -spfile '{4}'"'''.format(dbhome,path,ldpath,osid,spfile,dbuser) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + +######### Add RAC DB ######## + def add_rac_db_lsnr(self,dbuser,dbhome,osid,endpoints,lsnrname): + """ + add the Database + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + cmd='''su - {3} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl add listener -listener {4} -endpoints {5}; {0}/bin/srvctl start listener -listener {4}"'''.format(dbhome,path,ldpath,dbuser,lsnrname,endpoints) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + +######### Add RAC DB ######## + def modify_rac_db_lsnr(self,dbuser,dbhome,osid,endpoints,lsnrname): + """ + add the Database + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + cmd='''su - {3} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl modify listener -listener {4} -endpoints {5}"'''.format(dbhome,path,ldpath,dbuser,lsnrname,endpoints) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + +######### Add RAC DB ######## + def check_rac_db_lsnr(self,dbuser,dbhome,osid,endpoints,lsnrname): + """ + add the Database + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + cmd='''su - {3} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl status listener -listener {6}"'''.format(dbhome,path,ldpath,dbuser,lsnrname,endpoints,lsnrname) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + msg='''Listener {0} is enabled'''.format(lsnrname) + if self.check_substr_match(output.lower(),msg.lower()): + return True + else: + return False + +######### Add RAC DB ######## + def update_scan(self,user,home,endpoints,node): + """ + Update Scan + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(home) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(home) + scanname=self.ora_env_dict["SCAN_NAME"] + cmd='''su - {3} -c "ssh {6} 'export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; sudo {0}/bin/srvctl modify scan -scanname {4}'"'''.format(home,path,ldpath,user,scanname,endpoints,node) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + +######### Add RAC DB ######## + def start_scan(self,user,home,node): + """ + Update Scan + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(home) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(home) + scanname=self.ora_env_dict["SCAN_NAME"] + cmd='''su - {3} -c "ssh {5} 'export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2};sudo {0}/bin/srvctl start scan'"'''.format(home,path,ldpath,user,scanname,node) + 
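
`update_scan()` and `start_scan()` layer one more hop onto the same idea: the command runs on a specific cluster node over ssh, with `sudo` for the privileged srvctl operations. A hedged sketch of that `su -> ssh -> srvctl` chain; the user, node name, and Grid home below are illustrative, and error handling is left to the caller, as `check_os_err()` does above:

```python
import subprocess

def run_srvctl_on_node(os_user, node, grid_home, srvctl_args, use_sudo=True):
    """Run an srvctl command on a remote cluster node, following the
    su - user -c "ssh node '<export ...; srvctl ...>'" pattern used above."""
    remote = ("export ORACLE_HOME={0}; "
              "export PATH=/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin; "
              "{1}{0}/bin/srvctl {2}").format(grid_home,
                                              "sudo " if use_sudo else "",
                                              srvctl_args)
    cmd = '''su - {0} -c "ssh {1} '{2}'"'''.format(os_user, node, remote)
    return subprocess.run(cmd, shell=True, capture_output=True, text=True)

# Hypothetical usage:
# run_srvctl_on_node("grid", "racnodep1", "/u01/app/21c/grid",
#                    "modify scan -scanname racnodepc1-scan")
```
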
output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + +######### Add RAC DB ######## + def update_scan_lsnr(self,user,home,node): + """ + Update Scan + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(home) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(home) + scanname=self.ora_env_dict["SCAN_NAME"] + cmd='''su - {3} -c "ssh {4} 'export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2};{0}/bin/srvctl modify scan_listener -update'"'''.format(home,path,ldpath,user,node) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) +######### Add RAC DB ######## + def start_scan_lsnr(self,user,home,node): + """ + start Scan listener + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(home) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(home) + scanname=self.ora_env_dict["SCAN_NAME"] + cmd='''su - {3} -c "ssh {4} 'export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2};{0}/bin/srvctl start scan_listener'"'''.format(home,path,ldpath,user,node) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + +######### Set DB Lsnr ######## + def setup_db_lsnr(self): + """ + Create and Setup DB lsnr + """ + giuser,gihome,gibase,oinv =self.get_gi_params() + status,osid,host,mode=self.check_dbinst() + endpoints=self.ora_env_dict["DB_LISTENER_ENDPOINTS"] if self.check_key("DB_LISTENER_ENDPOINTS",self.ora_env_dict) else None + lsnrname=self.ora_env_dict["DB_LISTENER_NAME"] if self.check_key("DB_LISTENER_NAME",self.ora_env_dict) else "dblsnr" + + if status: + if endpoints is not None and lsnrname is not None: + status1=self.check_rac_db_lsnr(giuser,gihome,osid,endpoints,lsnrname) + if not status1: + self.add_rac_db_lsnr(giuser,gihome,osid,endpoints,lsnrname) + else: + self.modify_rac_db_lsnr(giuser,gihome,osid,endpoints,lsnrname) + else: + self.log_info_message("DB Instance is not up",self.file_name) + +######### Add RACDB Instance ######## + def add_rac_instance(self,dbuser,dbhome,osid,instance_number,nodename): + """ + add the RAC Database Instance + """ + path='''/usr/bin:/bin:/sbin:/usr/local/sbin:{0}/bin'''.format(dbhome) + ldpath='''{0}/lib:/lib:/usr/lib'''.format(dbhome) + cmd='''su - {5} -c "export ORACLE_HOME={0};export PATH={1};export LD_LIBRARY_PATH={2}; {0}/bin/srvctl add instance -d {3} -i {4} -node {6}"'''.format(dbhome,path,ldpath,osid,osid+instance_number,dbuser,nodename) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + +######### get DB Role ######## + def get_db_role(self,dbuser,dbhome,inst_sid,sqlpluslogincmd): + """ + return the + """ + sqlcmd=''' + set heading off; + set pagesize 0; + select database_role from v$database; + exit; + ''' + self.set_mask_str(self.get_sys_passwd()) + output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,None) + self.unset_mask_str() + return output + +######### Sqlplus ########### + def check_setup_status(self,dbuser,dbhome,inst_sid,sqlpluslogincmd): + """ + return the RAC setup status. It check a status in the table. 
+ """ + fname='''/tmp/{0}'''.format("rac_setup.txt") + self.remove_file(fname) + self.set_mask_str(self.get_sys_passwd()) + msg='''Checking racsetup table in CDB''' + self.log_info_message(msg,self.file_name) + sqlcmd=''' + set heading off + set feedback off + set term off + SET NEWPAGE NONE + spool {0} + select * from system.racsetup WHERE ROWNUM = 1; + spool off + exit; + '''.format(fname) + output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,None) + + if os.path.isfile(fname): + fdata=self.read_file(fname) + else: + fdata='nosetup' + + ### Unsetting the encrypt value to None + self.unset_mask_str() + + if re.search('completed',fdata): + #status = self.catalog_pdb_setup_check(host,ccdb,svc,port) + #if status == 'completed': + return 'completed' + #else: + # return 'notcompleted' + else: + return 'notcompleted' + +#### Get DB Parameters ####### + def get_init_params(self,paramname,sqlpluslogincmd): + """ + return the + """ + sqlcmd=''' + set heading off; + set pagesize 0; + set feedback off + select value from v$parameter where upper(name)=upper('{0}'); + exit; + '''.format(paramname) + + self.set_mask_str(self.get_sys_passwd()) + output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,None) + self.unset_mask_str() + return output.strip() + +#### set DB Params ####### + def run_sql_cmd(self,sqlcmd,sqlpluslogincmd): + """ + return the + """ + self.set_mask_str(self.get_sys_passwd()) + output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,None) + self.unset_mask_str() + return output + +#### Set sqlcmd ######## + def get_sqlsetcmd(self): + """ + return the sql set commands + """ + sqlsetcmd=''' + set heading off + set pagesize 0 + set feedback off + ''' + return sqlsetcmd + +#### Check DB Inst ############# + def check_dbinst(self): + """ + This function the db inst + """ + osuser,dbhome,dbbase,oinv=self.get_db_params() + dbname,osid,dbuname=self.getdbnameinfo() + hostname = self.get_public_hostname() + inst_sid=self.get_inst_sid(osuser,dbhome,osid,hostname) + connect_str=self.get_sqlplus_str(dbhome,inst_sid,osuser,"sys",None,None,None,None,None,None,None) + if inst_sid: + status=self.get_dbinst_status(osuser,dbhome,inst_sid,connect_str) + if not self.check_substr_match(status,"OPEN"): + return False,inst_sid,hostname,status + else: + return True,inst_sid,hostname,status + else: + return False,inst_sid,hostname,"" + +######## Set Remote Listener ###### + def set_remote_listener(self): + """ + This function set the remote listener + """ + if self.check_key("CMAN_HOST",self.ora_env_dict): + cmanhost=self.ora_env_dict["CMAN_HOST"] + osuser,dbhome,dbbase,oinv=self.get_db_params() + dbname,osid,dbuname=self.getdbnameinfo() + scanname=self.ora_env_dict["SCAN_NAME"] if self.check_key("SCAN_NAME",self.ora_env_dict) else self.prog_exit("127") + scanport=self.ora_env_dict["SCAN_PORT"] if self.check_key("SCAN_PORT",self.ora_env_dict) else "1521" + cmanport=self.ora_env_dict["CMAN_PORT"] if self.check_key("CMAN_PORT",self.ora_env_dict) else "1521" + hostname = self.get_public_hostname() + 
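
`check_setup_status()` decides whether the RAC marker table has been written by spooling the single row of `system.racsetup` to a temporary file and searching it, while `get_init_params()` and `run_sql_cmd()` reuse the same `run_sqlplus()`/`check_sql_err()` plumbing. The file-side half of that check, reduced to a standalone sketch (the spool path is the one used above, but any writable location would do):

```python
import os
import re

def read_setup_status(spool_file="/tmp/rac_setup.txt"):
    """Return 'completed' when the spooled system.racsetup row says so,
    'notcompleted' otherwise; a missing spool file is treated as no setup."""
    if os.path.isfile(spool_file):
        with open(spool_file) as f:
            data = f.read()
    else:
        data = "nosetup"
    return "completed" if re.search("completed", data) else "notcompleted"
```
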
inst_sid=self.get_inst_sid(osuser,dbhome,osid,hostname) + connect_str=self.get_sqlplus_str(dbhome,inst_sid,osuser,"sys",None,None,None,None,None,None,None) + sqlcmd=''' + set heading off; + set pagesize 0; + alter system set remote_listener='{0}:{1},(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST={2})(PORT={3}))))' scope=both; + alter system register; + alter system register; + exit; + '''.format(scanname,scanport,cmanhost,cmanport) + self.set_mask_str(self.get_sys_passwd()) + output,error,retcode=self.run_sqlplus(connect_str,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,None) + self.unset_mask_str() + +######## Set Remote Listener ###### + def set_local_listener(self): + """ + This function set the remote listener + """ + if self.check_key("LOCAL_LISTENER",self.ora_env_dict): + lsnrstr=self.ora_env_dict["LOCAL_LISTENER"].split(";") + for str1 in lsnrstr: + if len(str1.split(":")) == 2: + hname=(str1.split(":")[0]).strip() + lport=(str1.split(":")[1]).strip() + osuser,dbhome,dbbase,oinv=self.get_db_params() + dbname,osid,dbuname=self.getdbnameinfo() + hostname = self.get_public_hostname() + inst_sid=self.get_inst_sid(osuser,dbhome,osid,hostname) + connect_str=self.get_sqlplus_str(dbhome,inst_sid,osuser,"sys",None,None,None,None,None,None,None) + dbsid=self.get_host_dbsid(hname,connect_str) + svcdom=self.get_svc_domain(hname) + hname1=svcdom + lstr='''(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST={0})(PORT={1}))))'''.format(hname1,lport) + dbsid1 = re.sub(r"[\n\t\s]*", "", dbsid) + self.log_info_message("the local_listener string set to : " + lstr, self.file_name) + sqlcmd=''' + set heading off; + set pagesize 0; + alter system set local_listener='{0}' scope=both sid='{1}'; + alter system register; + alter system register; + exit; + '''.format(lstr,dbsid1) + self.set_mask_str(self.get_sys_passwd()) + output,error,retcode=self.run_sqlplus(connect_str,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,None) + self.unset_mask_str() + +####### Perform DB Check + def perform_db_check(self,type): + """ + This function check the DB and print the message" + """ + status,osid,host,mode=self.check_dbinst() + if status: + dbuser,dbhome,dbase,oinv=self.get_db_params() + if type == "INSTALL": + self.rac_setup_complete() + self.set_remote_listener() + self.run_custom_scripts("CUSTOM_DB_SCRIPT_DIR","CUSTOM_DB_SCRIPT_FILE",dbuser) + msg='''Oracle Database {0} is up and running on {1}.'''.format(osid,host) + self.log_info_message(self.print_banner(msg),self.file_name) + os.system("echo ORACLE RAC DATABASE IS READY TO USE > /dev/pts/0") + msg='''ORACLE RAC DATABASE IS READY TO USE''' + self.log_info_message(self.print_banner(msg),self.file_name) + else: + msg='''Oracle Database {0} is not up and running on {1}.'''.format(osid,host) + self.log_info_message(self.print_banner(msg),self.file_name) + self.prog_exit("127") + +######## Complete RAC Setup + def rac_setup_complete(self): + """ + This function complete the RAC setup by creating a table inside the DB + """ + osuser,dbhome,dbbase,oinv=self.get_db_params() + dbname,osid,dbuname=self.getdbnameinfo() + hostname = self.get_public_hostname() + inst_sid=self.get_inst_sid(osuser,dbhome,osid,hostname) + connect_str=self.get_sqlplus_str(dbhome,inst_sid,osuser,"sys",None,None,None,None,None,None,None) + sqlcmd=''' + set 
heading off + set feedback off + create table system.racsetup (status varchar2(10)); + insert into system.racsetup values('completed'); + commit; + exit; + ''' + self.set_mask_str(self.get_sys_passwd()) + output,error,retcode=self.run_sqlplus(connect_str,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,None) + self.unset_mask_str() + +######## Complete RAC Setup + def get_dbversion(self): + """ + This function returns the DB version + """ + osuser,dbhome,dbbase,oinv=self.get_db_params() + dbname,osid,dbuname=self.getdbnameinfo() + hostname = self.get_public_hostname() + inst_sid=self.get_inst_sid(osuser,dbhome,osid,hostname) + connect_str=self.get_sqlplus_str(dbhome,inst_sid,osuser,"sys",None,None,None,None,None,None,None) + sqlcmd=''' + set heading off + set feedback off + SELECT version_full FROM v$instance; + exit; + ''' + self.set_mask_str(self.get_sys_passwd()) + output,error,retcode=self.run_sqlplus(connect_str,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,None) + self.unset_mask_str() + if not error: + return output.strip("\r\n") + else: + return "NOTAVAILABLE" + +######## Complete RAC Setup + def reset_dbuser_passwd(self,user,pdb,type): + """ + This function reset the password + """ + passwdcmd=None + osuser,dbhome,dbbase,oinv=self.get_db_params() + dbname,osid,dbuname=self.getdbnameinfo() + hostname = self.get_public_hostname() + inst_sid=self.get_inst_sid(osuser,dbhome,osid,hostname) + connect_str=self.get_sqlplus_str(dbhome,inst_sid,osuser,"sys",None,None,None,None,None,None,None) + if pdb: + passwdcmd='''alter session set container={0};alter user {1} identified by HIDDEN_STRING;'''.format(pdb,user) + else: + if type == 'all': + passwdcmd='''alter user {0} identified by HIDDEN_STRING container=all;'''.format(user) + else: + passwdcmd='''alter user {0} identified by HIDDEN_STRING;'''.format(user) + sqlcmd=''' + set heading off + set feedback off + {0} + exit; + '''.format(passwdcmd) + self.set_mask_str(self.get_sys_passwd()) + output,error,retcode=self.run_sqlplus(connect_str,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,None) + self.unset_mask_str() + if not error: + return output.strip("\r\n") + + +####### Setup Primary for standby + def set_primary_for_standby(self): + """ + Perform the task on primary for standby + """ + dgname=self.ora_env_dict["CRS_ASM_DISKGROUP"] if self.check_key("CRS_ASM_DISKGROUP",self.ora_env_dict) else "+DATA" + dbrdest=self.ora_env_dict["DB_RECOVERY_FILE_DEST"] if self.check_key("DB_RECOVERY_FILE_DEST",self.ora_env_dict) else dgname + dbrdestsize=self.ora_env_dict["DB_RECOVERY_FILE_DEST_SIZE"] if self.check_key("DB_RECOVERY_FILE_DEST_SIZE",self.ora_env_dict) else "10G" + dbname,osid,dbuname=self.getdbnameinfo() + + osuser,dbhome,dbbase,oinv=self.get_db_params() + dbname,osid,dbuname=self.getdbnameinfo() + hostname = self.get_public_hostname() + inst_sid=self.get_inst_sid(osuser,dbhome,osid,hostname) + connect_str=self.get_dgmgr_str(dbhome,inst_sid,osuser,"sys",None,None,None,None,None,None,None) + dgcmd=''' + PREPARE DATABASE FOR DATA GUARD + WITH DB_UNIQUE_NAME IS {0} + DB_RECOVERY_FILE_DEST IS "{1}" + DB_RECOVERY_FILE_DEST_SIZE is {2} + BROKER_CONFIG_FILE_1 IS "{3}" + BROKER_CONFIG_FILE_2 IS "{3}"; + exit; 
+ '''.format(dbuname,dbrdest,dbrdestsize,dbrdest) + output,error,retcode=self.run_sqlplus(connect_str,dgcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_dgmgrl_err(output,error,retcode,True) + +######## Check INV Home ######## + def check_home_inv(self,node,dbhome,dbuser): + """ + This function the db home with inventory + """ + if not node: + cmd='''su - {0} -c "{1}/OPatch/opatch lsinventory"'''.format(dbuser,dbhome) + else: + cmd='''su - {0} -c "ssh {2} '{1}/OPatch/opatch lsinventory'"'''.format(dbuser,dbhome,node) + + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + if self.check_substr_match(output,"OPatch succeeded"): + return True + else: + return False + +######## Process delete node param variables ############## + def del_node_params(self,key): + """ + This function process DEL_PARAMS and set the keys + """ + cvar_str=self.ora_env_dict[key] + cvar_dict=dict(item.split("=") for item in cvar_str.split(";")) + for ckey in cvar_dict.keys(): + if ckey == 'del_rachome': + if self.check_key("DEL_RACHOME",self.ora_env_dict): + self.ora_env_dict["DEL_RACHOME"]="true" + else: + self.ora_env_dict=self.add_key("DEL_RACHOME","true",self.ora_env_dict) + if ckey == 'del_gridnode': + if self.check_key("DEL_GRIDNODE",self.ora_env_dict): + self.ora_env_dict["DEL_GRIDNODE"]="true" + else: + self.ora_env_dict=self.add_key("DEL_GRIDNODE","true",self.ora_env_dict) + +######## Process delete node param variables ############## + def populate_existing_cls_nodes(self): + """ + This function populate the nodes witht he existing cls nodes + """ + hostname=self.get_public_hostname() + crs_node_list=self.get_existing_cls_nodes(hostname,hostname) + if self.check_key("EXISTING_CLS_NODE",self.ora_env_dict): + self.ora_env_dict["EXISTING_CLS_NODE"]=crs_node_list + else: + self.ora_env_dict=self.add_key("EXISTING_CLS_NODE",crs_node_list,self.ora_env_dict) + +######## Run the custom scripts ############## + def run_custom_scripts(self,dirkey,filekey,user): + """ + This function run the custom scripts after Grid or DB setup based on env variables + """ +# self.log_info_message("Inside run_custom_scripts()",self.file_name) + if self.check_key(dirkey,self.ora_env_dict): + scrdir=self.ora_env_dict[dirkey] + if self.check_key(filekey,self.ora_env_dict): + scrfile=self.ora_env_dict[filekey] + script_file = '''{0}/{1}'''.format(scrdir,scrfile) + if os.path.isfile(script_file): + msg='''Custom script exist {0}'''.format(script_file) + self.log_info_message(msg,self.file_name) + cmd='''su - {0} -c "sh {0}"'''.format(user,script_file) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) +# else: +# self.log_info_message("Custom script dir is specified " + self.ora_env_dict[dirkey] + " but no user script file is specified. Not executing any user specified script.",self.file_name) +# else: +# self.log_info_message("No custom script dir specified to execute user specified scripts. 
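
`del_node_params()` (like the service-parameter handling earlier in this class) relies on a small convention: an environment variable carries a semicolon-separated list of key=value pairs that is folded into `ora_env_dict`. A minimal, self-contained version of that parsing step; the sample values in the usage comment are hypothetical:

```python
def parse_param_string(param_str):
    """Split the 'key=value;key=value' convention used by DEL_PARAMS and the
    service variables into a plain dict (the real code then copies selected
    keys into ora_env_dict)."""
    return dict(item.split("=", 1) for item in param_str.split(";") if "=" in item)

# parse_param_string("del_rachome=true;del_gridnode=true")
# -> {'del_rachome': 'true', 'del_gridnode': 'true'}
```
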
Not executing any user specified script.",self.file_name) + +######### Synching Oracle Home + def sync_gi_home(self,node,ohome,user): + """ + This home sync GI home during addnode from source machine to remote machine + """ + install_node,pubhost=self.get_installnode() + cmd='''su - {0} -c "ssh {1} 'rsync -Pav -e ssh --exclude \'{1}*\' {3}/* {0}@{2}:{3}'"'''.format(user,node,install_node,ohome) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,False) + +######## Set the User profiles + def set_user_profile(self,ouser,key,val,type): + """ + This function run the custom scripts after Grid or DB setup based on env variables + """ + match=None + bashrc='''/home/{0}/.bashrc'''.format(ouser) + fdata=self.read_file(bashrc) + + match=re.search(key,fdata,re.MULTILINE) + #if not match: + if type=="export": + cmd='''echo "export {0}={1}" >> {2}'''.format(key,val,bashrc) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + if type=="alias": + cmd='''echo "alias {0}='{1}'" >> {2}'''.format(key,val,bashrc) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,True) + +###### Reading grid Resonsefile + + # Update the env variables dictionary from the values in the grid response file ( if provided ) + + def update_gi_env_vars_from_rspfile(self): + """ + Update GI env vars as key value pair from the responsefile ( if provided ) + """ + gridrsp=None + privHost=None + privIP=None + privDomain=None + cls_nodes=None + + if self.check_key("GRID_RESPONSE_FILE",self.ora_env_dict): + gridrsp=self.ora_env_dict["GRID_RESPONSE_FILE"] + self.log_info_message("GRID_RESPONSE_FILE parameter is set and file location is:" + gridrsp ,self.file_name) + + if os.path.isfile(gridrsp): + with open(gridrsp) as fp: + for line in fp: + if len(line.split("=")) == 2: + key=(line.split("=")[0]).strip() + value=(line.split("=")[1]).strip() + self.log_info_message("KEY and Value pair set to: " + key + ":" + value ,self.file_name) + if (key == "INVENTORY_LOCATION"): + if self.check_key("INVENTORY",self.ora_env_dict): + self.ora_env_dict=self.update_key("INVENTORY",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("INVENTORY",value,self.ora_env_dict) + elif (key == "ORACLE_BASE"): + if self.check_key("GRID_BASE",self.ora_env_dict): + self.ora_env_dict=self.update_key("GRID_BASE",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("GRID_BASE",value,self.ora_env_dict) + elif (key == "scanName"): + if self.check_key("SCAN_NAME",self.ora_env_dict): + self.ora_env_dict=self.update_key("SCAN_NAME",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("SCAN_NAME",value,self.ora_env_dict) + elif (key == "diskString"): + if self.check_key("CRS_ASM_DISCOVERY_STRING",self.ora_env_dict): + self.ora_env_dict=self.update_key("CRS_ASM_DISCOVERY_STRING",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("CRS_ASM_DISCOVERY_STRING",value,self.ora_env_dict) + elif (key == "diskList"): + if self.check_key("CRS_ASM_DEVICE_LIST",self.ora_env_dict): + self.ora_env_dict=self.update_key("CRS_ASM_DEVICE_LIST",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("CRS_ASM_DEVICE_LIST",value,self.ora_env_dict) + elif (key == "diskGroupName"): + if self.check_key("CRS_ASM_DISKGROUP",self.ora_env_dict): + self.ora_env_dict=self.update_key("CRS_ASM_DISKGROUP",value,self.ora_env_dict) + else: + 
self.ora_env_dict=self.add_key("CRS_ASM_DISKGROUP",value,self.ora_env_dict) + elif (key == "clusterNodes"): + install_node_flag=False + for crs_node in value.split(","): + installNode=(crs_node.split(":"))[0].strip() + installVIPNode=(crs_node.split(":"))[1].strip() + cls_node='''pubhost:{0},viphost:{1}'''.format(installNode,installVIPNode) + self.log_info_message("cls_node set to : " + cls_node,self.file_name) + if cls_nodes is None: + cls_nodes=cls_node + ';' + else: + cls_nodes= cls_nodes + cls_node + ';' + self.log_info_message("cls_nodes set to : " + cls_nodes,self.file_name) + if not install_node_flag: + if self.check_key("INSTALL_NODE",self.ora_env_dict): + self.ora_env_dict=self.update_key("INSTALL_NODE",installNode,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("INSTALL_NODE",installNode,self.ora_env_dict) + install_node_flag=True + self.log_info_message("Install node set to :" + self.ora_env_dict["INSTALL_NODE"], self.file_name) + elif (key == "redundancy"): + if self.check_key("CRS_ASMDG_REDUNDANCY ",self.ora_env_dict): + self.ora_env_dict=self.update_key("CRS_ASMDG_REDUNDANCY ",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("CRS_ASMDG_REDUNDANCY ",value,self.ora_env_dict) + else: + pass + + #crsNodes=cls_nodes[:-1] if cls_nodes[:-1]==';' else cls_nodes + self.log_info_message("cls_nodes set to : " + cls_nodes,self.file_name) + crsNodes=cls_nodes.rstrip(cls_nodes[-1]) + if self.check_key("CRS_NODES",self.ora_env_dict): + self.ora_env_dict=self.update_key("CRS_NODES",crsNodes,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("CRS_NODES",crsNodes,self.ora_env_dict) + + else: + self.log_error_message("Grid response file does not exist at its location: " + gridrsp + ".Exiting..",self.file_name) + self.prog_exit("127") + + + + def update_pre_23c_gi_env_vars_from_rspfile(self): + """ + Update GI env vars as key value pair from the responsefile ( if provided ) + """ + gridrsp=None + privHost=None + privIP=None + privDomain=None + cls_nodes=None + + if self.check_key("GRID_RESPONSE_FILE",self.ora_env_dict): + gridrsp=self.ora_env_dict["GRID_RESPONSE_FILE"] + self.log_info_message("GRID_RESPONSE_FILE parameter is set and file location is:" + gridrsp ,self.file_name) + + if os.path.isfile(gridrsp): + with open(gridrsp) as fp: + for line in fp: + if len(line.split("=")) == 2: + key=(line.split("=")[0]).strip() + value=(line.split("=")[1]).strip() + self.log_info_message("KEY and Value pair set to: " + key + ":" + value ,self.file_name) + if (key == "INVENTORY_LOCATION"): + if self.check_key("INVENTORY",self.ora_env_dict): + self.ora_env_dict=self.update_key("INVENTORY",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("INVENTORY",value,self.ora_env_dict) + elif (key == "ORACLE_BASE"): + if self.check_key("GRID_BASE",self.ora_env_dict): + self.ora_env_dict=self.update_key("GRID_BASE",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("GRID_BASE",value,self.ora_env_dict) + elif (key == "oracle.install.crs.config.gpnp.scanName"): + if self.check_key("SCAN_NAME",self.ora_env_dict): + self.ora_env_dict=self.update_key("SCAN_NAME",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("SCAN_NAME",value,self.ora_env_dict) + elif (key == "oracle.install.asm.diskGroup.diskDiscoveryString"): + if self.check_key("CRS_ASM_DISCOVERY_STRING",self.ora_env_dict): + self.ora_env_dict=self.update_key("CRS_ASM_DISCOVERY_STRING",value,self.ora_env_dict) + else: + 
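
Both response-file readers turn the installer's `clusterNodes` value (`node1:node1-vip,node2:node2-vip,...`) into the `CRS_NODES` convention used elsewhere in these scripts (`pubhost:<node>,viphost:<vip>;...`) and remember the first public host as `INSTALL_NODE`. A compact sketch of just that transformation, with made-up node names in the usage comment:

```python
def cluster_nodes_to_crs_nodes(cluster_nodes):
    """Convert 'node1:node1-vip,node2:node2-vip' into
    ('pubhost:node1,viphost:node1-vip;pubhost:node2,viphost:node2-vip', 'node1')."""
    entries = []
    install_node = None
    for item in cluster_nodes.split(","):
        parts = [p.strip() for p in item.split(":")]
        pub, vip = parts[0], parts[1]
        if install_node is None:
            install_node = pub
        entries.append("pubhost:{0},viphost:{1}".format(pub, vip))
    return ";".join(entries), install_node

# cluster_nodes_to_crs_nodes("racnodep1:racnodep1-vip,racnodep2:racnodep2-vip")
```
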
self.ora_env_dict=self.add_key("CRS_ASM_DISCOVERY_STRING",value,self.ora_env_dict) + elif (key == "oracle.install.asm.diskGroup.disks"): + if self.check_key("CRS_ASM_DEVICE_LIST",self.ora_env_dict): + self.ora_env_dict=self.update_key("CRS_ASM_DEVICE_LIST",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("CRS_ASM_DEVICE_LIST",value,self.ora_env_dict) + elif (key == "oracle.install.crs.config.clusterNodes"): + install_node_flag=False + for crs_node in value.split(","): + installNode=(crs_node.split(":"))[0].strip() + installVIPNode=(crs_node.split(":"))[1].strip() + cls_node='''pubhost:{0},viphost:{1}'''.format(installNode,installVIPNode) + self.log_info_message("cls_node set to : " + cls_node,self.file_name) + if cls_nodes is None: + cls_nodes=cls_node + ';' + else: + cls_nodes= cls_nodes + cls_node + ';' + self.log_info_message("cls_nodes set to : " + cls_nodes,self.file_name) + if not install_node_flag: + if self.check_key("INSTALL_NODE",self.ora_env_dict): + self.ora_env_dict=self.update_key("INSTALL_NODE",installNode,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("INSTALL_NODE",installNode,self.ora_env_dict) + install_node_flag=True + self.log_info_message("Install node set to :" + self.ora_env_dict["INSTALL_NODE"], self.file_name) + elif (key == "oracle.install.asm.diskGroup.redundancy"): + if self.check_key("CRS_ASMDG_REDUNDANCY ",self.ora_env_dict): + self.ora_env_dict=self.update_key("CRS_ASMDG_REDUNDANCY ",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("CRS_ASMDG_REDUNDANCY ",value,self.ora_env_dict) + elif (key == "oracle.install.asm.diskGroup.AUSize"): + if self.check_key("CRS_ASMDG_AU_SIZE ",self.ora_env_dict): + self.ora_env_dict=self.update_key("CRS_ASMDG_AU_SIZE ",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("CRS_ASMDG_AU_SIZE ",value,self.ora_env_dict) + else: + pass + + #crsNodes=cls_nodes[:-1] if cls_nodes[:-1]==';' else cls_nodes + self.log_info_message("cls_nodes set to : " + cls_nodes,self.file_name) + crsNodes=cls_nodes.rstrip(cls_nodes[-1]) + if self.check_key("CRS_NODES",self.ora_env_dict): + self.ora_env_dict=self.update_key("CRS_NODES",crsNodes,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("CRS_NODES",crsNodes,self.ora_env_dict) + + else: + self.log_error_message("Grid response file does not exist at its location: " + gridrsp + ".Exiting..",self.file_name) + self.prog_exit("127") + + + def update_rac_env_vars_from_rspfile(self,dbcarsp): + """ + Update RAC env vars as key value pair from the responsefile ( if provided ) + """ + if os.path.isfile(dbcarsp): + with open(dbcarsp) as fp: + for line in fp: + msg="Read from dbca.rsp: line=" + line + self.log_info_message(msg,self.file_name) + if len(line.split("=",1)) == 2: + key=(line.split("=")[0]).strip() + value=(line.split("=")[1]).strip() + msg="key=" + key + ".. 
value=" + value + self.log_info_message(msg,self.file_name) + if (key == "gdbName"): + if self.check_key("DB_NAME",self.ora_env_dict): + self.ora_env_dict=self.update_key("DB_NAME",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("DB_NAME",value,self.ora_env_dict) + elif (key == "datafileDestination"): + if value != "": + dg = (re.search("\+(.+?)/.*",value)).group(1) + if self.check_key("DB_DATA_FILE_DEST",self.ora_env_dict): + self.ora_env_dict=self.update_key("DB_DATA_FILE_DEST",dg,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("DB_DATA_FILE_DEST",dg,self.ora_env_dict) + elif (key == "recoveryAreaDestination"): + if value != "" : + dg = (re.search("\+(.+?)/.*",value)).group(1) + if self.check_key("DB_RECOVERY_FILE_DEST",self.ora_env_dict): + self.ora_env_dict=self.update_key("DB_RECOVERY_FILE_DEST",dg,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("DB_RECOVERY_FILE_DEST",dg,self.ora_env_dict) + + elif (key == "variables"): + variablesvalue=(re.search("variables=(.*)",line)).group(1) + if variablesvalue: + dbUniqueStr=(re.search("(DB_UNIQUE_NAME=.+?),.*",variablesvalue)).group(1) + if dbUniqueStr: + dbUniqueValue=(dbUniqueStr.split("=")[1]).strip() + if self.check_key("DB_UNIQUE_NAME",self.ora_env_dict): + self.ora_env_dict=self.update_key("DB_UNIQUE_NAME",dbUniqueValue,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("DB_UNIQUE_NAME",dbUniqueValue,self.ora_env_dict) + dbHomeStr=(re.search("(ORACLE_HOME=.+?),.*",variablesvalue)).group(1) + if dbHomeStr: + dbHomeValue=(dbHomeStr.split("=")[1]).strip() + if self.check_key("DB_HOME",self.ora_env_dict): + self.ora_env_dict=self.update_key("DB_HOME",dbHomeValue,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("DB_HOME",dbHomeValue,self.ora_env_dict) + dbBaseStr=(re.search("(ORACLE_BASE=.+?),.*",variablesvalue)).group(1) + if dbBaseStr: + dbBaseValue=(dbBaseStr.split("=")[1]).strip() + if self.check_key("DB_BASE",self.ora_env_dict): + self.ora_env_dict=self.update_key("DB_BASE",dbBaseValue,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("DB_BASE",dbBaseValue,self.ora_env_dict) + else: + pass + + else: + self.log_error_message("dbca response file does not exist at its location: " + dbcarsp + ".Exiting..",self.file_name) + self.prog_exit("127") + + + # Update the env variables dictionary from the values in the grid response file ( if provided ) + def update_domainfrom_resolvconf_file(self): + """ + Update domain variables + """ + privDomain=None + pubDomain=None + ## Update DNS_SERVERS from /etc/resolv.conf + if os.path.isfile("/etc/resolv.conf"): + fdata=self.read_file("/etc/resolv.conf") + str=re.search("nameserver\s+(.+?)\s+",fdata) + if str: + dns_server=str.group(1) + if self.check_key("DNS_SERVERS",self.ora_env_dict): + self.ora_env_dict=self.update_key("DNS_SERVERS",dns_server,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("DNS_SERVERS",dns_server,self.ora_env_dict) + + domains=(re.search("search\s+(.*)",fdata)).group(1) + cmd="echo " + domains + " | cut -d' ' -f1" + output,error,retcode=self.execute_cmd(cmd,None,None) + pubDomain=output.strip() + self.log_info_message("Domain set to :" + pubDomain, self.file_name) + self.check_os_err(output,error,retcode,True) + if self.check_key("PUBLIC_HOSTS_DOMAIN",self.ora_env_dict): + self.ora_env_dict=self.update_key("PUBLIC_HOSTS_DOMAIN",pubDomain,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("PUBLIC_HOSTS_DOMAIN",pubDomain,self.ora_env_dict) + +######## set DG Prefix Function + def 
setdgprefix(self,dgname): + """ + add dg prefix + """ + dgflag = dgname.startswith("+") + + if not dgflag: + dgname= "+" + dgname + self.log_info_message("The dgname set to : " + dgname, self.file_name) + + return dgname + +######## rm DG Prefix Function + def rmdgprefix(self,dgname): + """ + rm dg prefix + """ + dgflag = dgname.startswith("+") + + if dgflag: + return dgname[1:] + else: + return dgname + +###### Get SID, dbname,dbuname + def getdbnameinfo(self): + """ + this function returns the sid,dbname,dbuname + """ + dbname=self.ora_env_dict["DB_NAME"] if self.check_key("DB_NAME",self.ora_env_dict) else "ORCLCDB" + osid=dbname + dbuname=self.ora_env_dict["DB_UNIQUE_NAME"] if self.check_key("DB_UNIQUE_NAME",self.ora_env_dict) else dbname + + return dbname,osid,dbuname + +###### function to return DG Name for CRS + def getcrsdgname(self): + """ + return CRS DG NAME + """ + return self.ora_env_dict["CRS_ASM_DISKGROUP"] if self.check_key("CRS_ASM_DISKGROUP",self.ora_env_dict) else "+DATA" + + +###### function to return DG Name for DATAFILE + def getdbdestdgname(self,dgname): + """ + return DB DG NAME + """ + return self.ora_env_dict["DB_DATA_FILE_DEST"] if self.check_key("DB_DATA_FILE_DEST",self.ora_env_dict) else dgname + +###### function to return DG Name for RECOVERY DESTINATION + def getdbrdestdgname(self,dgname): + """ + return RECO DG NAME + """ + return self.ora_env_dict["DB_RECOVERY_FILE_DEST"] if self.check_key("DB_RECOVERY_FILE_DEST",self.ora_env_dict) else dgname + +##### Function to catalog the backup + def catalog_bkp(self): + """ + catalog the backup + """ + osuser,dbhome,dbbase,oinv=self.get_db_params() + osid=self.ora_env_dict["GOLD_SID_NAME"] + rmanlogincmd=self.get_rman_str(dbhome,osid,osuser,"sys",None,None,None,osid,None,None,None) + rmancmd=''' + catalog start with '{0}' noprompt; + '''.format(self.ora_env_dict["GOLD_DB_BACKUP_LOC"]) + self.log_info_message("Running the rman command to catalog the backup: " + rmancmd,self.file_name) + output,error,retcode=self.run_sqlplus(rmanlogincmd,rmancmd,None) + self.log_info_message("Calling check_sql_err() to validate the rman command return status",self.file_name) + self.check_sql_err(output,error,retcode,True) + +#### Function to validate the backup + def check_bkp(self): + """ + Check the backup + """ + pass + +#### Function to validate the backup + def restore_bkp(self,dgname): + """ + restore the backup + """ + osuser,dbhome,dbbase,oinv=self.get_db_params() + osid=self.ora_env_dict["GOLD_SID_NAME"] + dbname=self.ora_env_dict["GOLD_DB_NAME"] + self.log_info_message("In restore_bkp() : dgname=[" + dgname + "]", self.file_name) + rmanlogincmd=self.get_rman_str(dbhome,osid,osuser,"sys",None,None,None,osid,None,None,None) + rmancmd=''' + run {{ + restore controlfile from '{2}'; + alter database mount; + set newname for database to '{0}'; + restore database; + switch datafile all; + alter database open resetlogs; + alter pluggable database {1} open read write; + }} + '''.format(dgname,self.ora_env_dict["GOLD_PDB_NAME"],"/oradata/orclcdb_bkp/spfile" + dbname + ".ora") + self.log_info_message("Running the rman command to restore the controlfile and datafiles from the backup: " + rmancmd,self.file_name) + output,error,retcode=self.run_sqlplus(rmanlogincmd,rmancmd,None) + self.log_info_message("Calling check_sql_err() to validate the rman command return status",self.file_name) + self.check_sql_err(output,error,retcode,True) + +#### Function restore the spfile + def restore_spfile(self): + """ + Restore the spfile + """ + 
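
`setdgprefix()` and `rmdgprefix()` exist because ASM disk group names circulate both with and without the leading `+` (`+DATA` for file destinations, `DATA` for srvctl/asmcmd arguments). The pair collapses into one small helper; this is a sketch only, not a drop-in replacement:

```python
def normalize_dg_name(dgname, with_prefix=True):
    """Return the disk group name with or without the leading '+',
    regardless of how it was passed in."""
    name = dgname.lstrip("+")
    return "+" + name if with_prefix else name

# normalize_dg_name("DATA")          -> '+DATA'
# normalize_dg_name("+DATA", False)  -> 'DATA'
```
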
osuser,dbhome,dbbase,oinv=self.get_db_params() + osid=self.ora_env_dict["GOLD_SID_NAME"] + dbname=self.ora_env_dict["GOLD_DB_NAME"] + rmanlogincmd=self.get_rman_str(dbhome,osid,osuser,"sys",None,None,None,osid,None,None,None) + rmancmd=''' + restore spfile from '{0}'; + '''.format(self.ora_env_dict["GOLD_DB_BACKUP_LOC"] + "/spfile" + dbname + ".ora") + self.log_info_message("Running the rman command to restore the spfile from the backup: " + rmancmd,self.file_name) + output,error,retcode=self.run_sqlplus(rmanlogincmd,rmancmd,None) + self.log_info_message("Calling check_sql_err() to validate the rman command return status",self.file_name) + self.check_sql_err(output,error,retcode,True) + +#### Set cluster mode to true or false + def set_cluster_mode(self,pfile,cflag): + """ + This function sets the cluster mode to true or false in the pfile + """ + cmd='''sed -i "s/*.cluster_database=.*/*.cluster_database={0}/g" {1}'''.format(cflag,pfile) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,False) + +#### Change the dbname in the parameter file to the new dbname + def change_dbname(self,pfile,newdbname): + """ + This function sets the resets the dbname to newdbname in the pfile + """ + osuser,dbhome,dbbase,oinv=self.get_db_params() + olddbname=self.ora_env_dict["GOLD_DB_NAME"] + osid=self.ora_env_dict["GOLD_SID_NAME"] + cmd='''su - {3} -c "export ORACLE_SID={2};export ORACLE_HOME={1};echo Y | {1}/bin/nid target=/ dbname={0}"'''.format(newdbname,dbhome,osid,osuser) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,False) + + self.set_cluster_mode(pfile,True) + cmd='''sed -i "s/*.db_name=.*/*.db_name={0}/g" {1}'''.format(newdbname,pfile) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,False) + cmd='''sed -i "s/*.db_unique_name=.*/*.db_unique_name={0}/g" {1}'''.format(newdbname,pfile) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,False) + cmd='''sed -i "s/{0}\(.*\).instance_number=\(.*\)/{1}\\1.instance_number=\\2/g" {2}'''.format(olddbname,newdbname,pfile) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,False) + +#### Change the dbname in the parameter file to the new dbname + def rotate_log_files(self): + """ + remove old logfiles + """ + currentfile='''{0}'''.format(self.ologger.filename_) + newfile='''{0}.old'''.format(self.ologger.filename_) + if self.check_file(currentfile,"local",None,None): + os.rename(currentfile,newfile) + + def modify_scan(self,giuser,gihome,scanname): + """ + Modify Scan Details + """ + cmd='''{1}/bin/srvctl modify scan -scanname {2}'''.format(giuser,gihome, scanname) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + if retcode == 0: + return True + else: + return False + + def updateasmcount(self,giuser,gihome,asmcount): + """ + Update ASM disk counts + """ + cmd='''su - {0} -c "{1}/bin/srvctl modify asm -count {2}"'''.format(giuser,gihome, asmcount) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + if retcode == 0: + return True + else: + return False + + def updateasmdevices(self, giuser, gihome, diskname, diskgroup, processtype): + """ + Update ASM devices, handle addition or deletion. 
+ """ + retcode = 1 + if processtype == "addition": + cmd = '''su - {0} -c "{1}/bin/asmca -silent -addDisk -diskGroupName {2} -disk {3}"'''.format(giuser, gihome, diskgroup, diskname) + output, error, retcode = self.execute_cmd(cmd, None, None) + self.check_os_err(output, error, retcode, None) + elif processtype == "deletion": + cmd = '''su - {0} -c "{1}/bin/asmca -silent -removeDisk -diskGroupName {2} -disk {3}"'''.format(giuser, gihome, diskgroup, diskname) + output, error, retcode = self.execute_cmd(cmd, None, None) + self.check_os_err(output, error, retcode, None) + if retcode == 0: + return True + else: + return False + + def updatelistenerendp(self,giuser,gihome,listenername,portlist): + """ + Update ListenerEndpoints + """ + cmd='''su - {0} -c "{1}/bin/srvctl modify listener -listener {2} -endpoints 'TCP:{3}'"'''.format(giuser,gihome,listenername,portlist) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + if retcode == 0: + return True + else: + return False + + def get_asmsid(self,giuser,gihome): + """ + get the asm sid details + """ + sid=None + cmd='''su - {0} -c "{1}/bin/olsnodes -n"'''.format(giuser,gihome) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + if retcode == 0: + pubhost=self.get_public_hostname() + for line in output.splitlines(): + if pubhost in line: + nodeid = line.split() + if len(nodeid) == 2: + sid="+ASM" + nodeid[1] + break + if sid is not None: + self.log_info_message("ASM sid set to :" + sid,self.file_name) + return sid + else: + return None + + def check_asminst(self,giuser,gihome): + """ + check asm instance + """ + sid=self.get_asmsid(giuser,gihome) + if sid is not None: + sqlpluslogincmd=self.get_sqlplus_str(gihome,sid,giuser,"sys",None,None,None,sid,None,None,None) + sqlcmd=""" + set heading off + set feedback off + set term off + SET NEWPAGE NONE + select status from v$instance; + exit; + """ + output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,True) + if "STARTED" in ''.join(output.upper()): + return 0 + else: + return 1 + + def get_asmdg(self,giuser,gihome): + """ + get the asm dg list + """ + sid=self.get_asmsid(giuser,gihome) + if sid is not None: + sqlpluslogincmd=self.get_sqlplus_str(gihome,sid,giuser,"sys",None,None,None,sid,None,None,None) + sqlcmd=""" + set heading off + set feedback off + set term off + SET NEWPAGE NONE + select name from v$asm_diskgroup; + """ + output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,True) + return output.strip().replace('\n',',') + + def get_asmdgrd(self,giuser,gihome,dg): + """ + get the asm disk redudancy + """ + sid=self.get_asmsid(giuser,gihome) + if sid is not None: + sqlpluslogincmd=self.get_sqlplus_str(gihome,sid,giuser,"sys",None,None,None,sid,None,None,None) + sqlcmd=""" + set heading off + set feedback off + set term off + SET NEWPAGE NONE + select type from v$asm_diskgroup where upper(name)=upper('{0}'); + """.format(dg) + output,error,retcode=self.run_sqlplus(sqlpluslogincmd,sqlcmd,None) + self.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name) + self.check_sql_err(output,error,retcode,True) + return output + + def 
get_asmdsk(self,giuser,gihome,dg): + """ + check asm disks based on dg group + """ + sid=self.get_asmsid(giuser,gihome) + cmd='''su - {0} -c "asmcmd lsdsk -G {1} --suppressheader --member"'''.format(giuser,dg) + output,error,retcode=self.execute_cmd(cmd,None,None) + self.check_os_err(output,error,retcode,None) + if retcode == 0: + return output.strip().replace('\n',',') + else: + return "ERROR OCCURRED" diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oracvu.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oracvu.py new file mode 100755 index 0000000000..f287084fda --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oracvu.py @@ -0,0 +1,261 @@ +#!/usr/bin/python + +############################# +# Copyright 2021, Oracle Corporation and/or affiliates. All rights reserved. +# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +############################ + +""" + This file contains to the code call different classes objects based on setup type +""" + +from oralogger import * +from oraenv import * +from oracommon import * +from oramachine import * +from orasetupenv import * +from orasshsetup import * + +import os +import sys + +class OraCvu: + """ + This class performs the CVU checks + """ + def __init__(self,oralogger,orahandler,oraenv,oracommon): + try: + self.ologger = oralogger + self.ohandler = orahandler + self.oenv = oraenv.get_instance() + self.ocommon = oracommon + self.ora_env_dict = oraenv.get_env_vars() + self.file_name = os.path.basename(__file__) + except BaseException as ex: + ex_type, ex_value, ex_traceback = sys.exc_info() + trace_back = sys.tracebacklimit.extract_tb(ex_traceback) + stack_trace = list() + for trace in trace_back: + stack_trace.append("File : %s , Line : %d, Func.Name : %s, Message : %s" % (trace[0], trace[1], trace[2], trace[3])) + self.ocommon.log_info_message(ex_type.__name__,self.file_name) + self.ocommon.log_info_message(ex_value,self.file_name) + self.ocommon.log_info_message(stack_trace,self.file_name) + + def setup(self): + """ + This function setup the grid on this machine + """ + pass + + def node_reachability_checks(self,checktype,user,ctype): + """ + This function performs the cluvfy checks + """ + exiting_cls_node="" + if ctype == 'ADDNODE': + exiting_cls_node=self.ocommon.get_existing_clu_nodes(True) + + if self.ocommon.check_key("CRS_NODES",self.ora_env_dict): + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + if checktype=="private": + crs_nodes=priv_nodes.replace(" ",",") + else: + crs_nodes=pub_nodes.replace(" ",",") + if exiting_cls_node: + crs_nodes = crs_nodes + "," + exiting_cls_node + + nwmask,nwsubnet,nwname=self.ocommon.get_nwlist(checktype) + self.cluvfy_nodereach(crs_nodes,nwname,user) + + + def node_connectivity_checks(self,checktype,user,ctype): + """ + This function performs the cluvfy checks + """ + exiting_cls_node="" + if ctype == 'ADDNODE': + exiting_cls_node=self.ocommon.get_existing_clu_nodes(True) + + if self.ocommon.check_key("CRS_NODES",self.ora_env_dict): + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + if checktype=="private": + crs_nodes=priv_nodes.replace(" ",",") + else: + crs_nodes=pub_nodes.replace(" ",",") + if exiting_cls_node: + crs_nodes = crs_nodes + "," + exiting_cls_node + + nwmask,nwsubnet,nwname=self.ocommon.get_nwlist(checktype) + 
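
`get_asmsid()` derives the local ASM instance name from `olsnodes -n`, whose output is one `<hostname> <node-number>` pair per line, by appending the node number to `+ASM`. The parsing half of that as a pure function over the captured output; the sketch matches the hostname field exactly, which is slightly stricter than the substring test above:

```python
def asm_sid_for_host(olsnodes_output, hostname):
    """Return '+ASM<n>' for the given host based on 'olsnodes -n' output,
    or None if the host is not listed."""
    for line in olsnodes_output.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[0] == hostname:
            return "+ASM" + fields[1]
    return None

# asm_sid_for_host("racnodep1 1\nracnodep2 2", "racnodep2")  -> '+ASM2'
```
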
self.cluvfy_nodereach(crs_nodes,nwname,user) + + def cluvfy_nodereach(self,crs_nodes,nwname,user): + """ + This function performs the cluvfy checks + """ + ohome=self.ora_env_dict["GRID_HOME"] + self.ocommon.log_info_message("Performing cluvfy check to perform node reachability.",self.file_name) + cmd='''su - {2} -c "{1}/runcluvfy.sh comp nodereach -n {0} -verbose"'''.format(crs_nodes,ohome,user) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + + def cluvfy_nodecon(self,crs_nodes,nwname,user): + """ + This function performs the cluvfy checks + """ + ohome=self.ora_env_dict["GRID_HOME"] + self.ocommon.log_info_message("Performing cluvfy check to perform node connectivty.",self.file_name) + cmd='''su - {3} -c "{1}/runcluvfy.sh comp nodecon -n {0} -networks {2} -verbose"'''.format(crs_nodes,ohome,nwname,user) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + + def cluvfy_compsys(self,ctype,user): + """ + This function performs the cluvfy comp sys checks + """ + ohome=self.ora_env_dict["GRID_HOME"] + self.ocommon.log_info_message("Performing cluvfy check to perform node connectivty.",self.file_name) + cmd='''su - {2} -c "{1}/runcluvfy.sh comp sys -n racnode6,racnode8 -p {0} -verbose"'''.format(ctype,ohome,user) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + + def cluvfy_checkrspfile(self,fname,ohome,user): + """ + This function performs the cluvfy check on a responsefile + """ + self.cluvfy_updcvucfg(ohome,user) + self.ocommon.log_info_message("Performing cluvfy check on a responsefile: " + fname,self.file_name) + cmd='''su - {0} -c "{1}/runcluvfy.sh stage -pre crsinst -responseFile {2} | tee -a {3}/cluvfy_check.txt"'''.format(user,ohome,fname,"/tmp") + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + if self.ocommon.check_key("IGNORE_CVU_CHECKS",self.ora_env_dict): + self.ocommon.check_os_err(output,error,retcode,None) + else: + self.ocommon.check_os_err(output,error,retcode,None) + + def cluvfy_updcvucfg(self,ohome,user): + """ + This function update the CVU config file with the correct CV_DESTLOC + """ + match=None + tmpdir=self.ocommon.get_tmpdir() + fname='''{0}/cv/admin/cvu_config'''.format(ohome) + self.ocommon.log_info_message("Updating CVU config file: " + fname,self.file_name) + fdata=self.ocommon.read_file(fname) + match=re.search("CV_DESTLOC=",fdata,re.MULTILINE) + if not match: + cmd='''su - {0} -c "echo CV_DESTLOC=\"{1}\" >> {2}"'''.format(user,tmpdir,fname) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + else: + cmd='''su - {0} -c "echo CV_DESTLOC=\"{1}\" >> {2}"'''.format(user,tmpdir,fname) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + + def check_ohasd(self,node): + """ + This function check if crs is configued properly + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + crs_nodes="" + if not node: + crs_nodes=" -allnodes " + else: + crs_nodes=" -n " + node + + cmd='''su - {0} -c "{1}/bin/cluvfy comp ohasd {2}"'''.format(giuser,gihome,crs_nodes) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + return retcode + + def check_asm(self,node): + """ + This function check if crs is configued properly + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + crs_nodes="" + if not node: + crs_nodes=" -allnodes " + else: + crs_nodes=" -n " 
+ node + + cmd='''su - {0} -c "{1}/bin/cluvfy comp asm {2}"'''.format(giuser,gihome,crs_nodes) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + return retcode + + def check_clu(self,node,sshflag): + """ + This function check if crs is configued properly + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + crs_nodes="" + if not node: + crs_nodes=" -allnodes " + cmd='''su - {0} -c "{1}/bin/cluvfy comp clumgr {2}"'''.format(giuser,gihome,crs_nodes) + else: + crs_nodes=" -n " + node + cmd='''su - {0} -c "{1}/bin/cluvfy comp clumgr {2}"'''.format(giuser,gihome,crs_nodes) + + if sshflag: + crs_nodes=" -n " + node + cmd='''su - {0} -c "ssh {3} '{1}/bin/cluvfy comp clumgr {2}'"'''.format(giuser,gihome,crs_nodes,node) + + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + return retcode + + def check_home(self,node,home,user): + """ + This function check if crs is configued properly + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + if not node: + crs_nodes=" -allnodes " + else: + crs_nodes=" -n " + node + + cvufile='''{0}/bin/cluvfy'''.format(gihome) + if not self.ocommon.check_file(cvufile,True,None,None): + return 1 + + cmd='''su - {0} -c "{1}/bin/cluvfy comp software -d {3} -verbose"'''.format(user,gihome,node,home) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + if not self.ocommon.check_substr_match(output,"FAILED"): + return 0 + else: + return 1 + + def check_db_homecfg(self,node): + """ + This function check if db home is configured properly + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + dbuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + + if not node: + crs_nodes=" -allnodes " + else: + crs_nodes=" -n " + node + + cmd='''su - {0} -c "{1}/bin/cluvfy stage -pre dbcfg {2} -d {3}"'''.format(dbuser,gihome,crs_nodes,dbhome) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + return retcode + + def check_addnode(self): + """ + This function check if the node can be added + """ + exiting_cls_node=self.ocommon.get_existing_clu_nodes(True) + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + node=exiting_cls_node.split(",")[0] + tmpdir=self.ocommon.get_tmpdir() + if self.ocommon.check_key("CRS_NODES",self.ora_env_dict): + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + crs_nodes=pub_nodes.replace(" ",",") + cmd='''su - {0} -c "ssh {1} '{2}/runcluvfy.sh stage -pre nodeadd -n {3}'"'''.format(giuser,node,gihome,crs_nodes,tmpdir) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + if self.ocommon.check_key("IGNORE_CVU_CHECKS",self.ora_env_dict): + self.ocommon.log_info_message("Ignoring CVU checks failure as IGNORE_CVU_CHECKS set to ignore CVU checks.",self.file_name) + self.ocommon.check_os_err(output,error,retcode,None) + else: + self.ocommon.check_os_err(output,error,retcode,None) + diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraenv.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraenv.py new file mode 100755 index 0000000000..50690370c4 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraenv.py @@ -0,0 +1,173 @@ +#!/usr/bin/python + +############################# +# Copyright 2020, Oracle Corporation and/or affiliates. All rights reserved. 
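
`cluvfy_updcvucfg()` is meant to make sure `CV_DESTLOC` points at a scratch directory before the response-file check runs; note that, as written, it appends the line whether or not the key is already present. A guarded variant of what the check appears to intend, shown as a standalone sketch (the config path in the usage comment is illustrative):

```python
import re

def ensure_config_line(config_file, key, value):
    """Append 'KEY=value' to the file only when no line for that key exists yet."""
    with open(config_file) as f:
        data = f.read()
    if not re.search(r"^{0}=".format(re.escape(key)), data, re.MULTILINE):
        with open(config_file, "a") as f:
            f.write("{0}={1}\n".format(key, value))

# ensure_config_line("/u01/app/21c/grid/cv/admin/cvu_config", "CV_DESTLOC", "/tmp")
```
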
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +############################ + +""" + This file read the env variables from a file or using env command and populate them in variable +""" + +import os + +class OraEnv: + __instance = None + __env_var_file = '/etc/rac_env_vars' + __env_var_file_flag = None + __env_var_dict = {} + __ora_asm_diskgroup_name = '+DATA' + __ora_gimr_flag = 'false' + __ora_grid_user = 'grid' + __ora_db_user = 'oracle' + __ora_oinstall_group_name = 'oinstall' + encrypt_str__ = None + original_str__ = None + logdir__ = "/tmp/orod" + + def __init__(self): + """ Virtually private constructor. """ + if OraEnv.__instance != None: + raise Exception("This class is a singleton!") + else: + OraEnv.__instance = self + OraEnv.read_variable() + OraEnv.add_variable() + try: + os.mkdir(OraEnv.logdir__) + except OSError as error: + pass + + @staticmethod + def get_instance(): + """ Static access method. """ + if OraEnv.__instance == None: + OraEnv() + return OraEnv.__instance + + @staticmethod + def read_variable(): + """ Read the variables from a file into dict """ + if OraEnv.__env_var_file_flag: + with open(OraEnv.__env_var_file) as envfile: + for line in envfile: + name, var = line.partition("=")[::2] + OraEnv.__env_var_dict[name.strip()] = var + else: + OraEnv.__env_var_dict = os.environ + + @staticmethod + def add_variable(): + """ Add more variable ased on enviornment with default values in __env_var_dict""" + if "ORA_ASM_DISKGROUP_NAME" not in OraEnv.__env_var_dict: + OraEnv.__env_var_dict["ORA_ASM_DISKGROUP_NAME"] = "+DATA" + + if "ORA_GRID_USER" not in OraEnv.__env_var_dict: + OraEnv.__env_var_dict["GRID_USER"] = "grid" + + if "ORA_DB_USER" not in OraEnv.__env_var_dict: + OraEnv.__env_var_dict["DB_USER"] = "oracle" + + if "ORA_OINSTALL_GROUP_NAME" not in OraEnv.__env_var_dict: + OraEnv.__env_var_dict["OINSTALL"] = "oinstall" + + @staticmethod + def add_custom_variable(key,val): + """ Addcustom more variable passed from main.py values in __env_var_dict""" + if key not in OraEnv.__env_var_dict: + OraEnv.__env_var_dict[key] = val + + @staticmethod + def update_key(key,val): + """ Updating key variable passed from main.py values in __env_var_dict""" + OraEnv.__env_var_dict[key] = val + + @staticmethod + def get_env_vars(): + """ Static access method to get the env vars. """ + return OraEnv.__env_var_dict + + @staticmethod + def update_env_vars(env_dict): + """ Static access method to get the env vars. """ + OraEnv.__env_var_dict = env_dict + + @staticmethod + def get_env_dict(): + """ Static access method t return the dict. """ + return OraEnv.__env_var_dict + + @staticmethod + def get_log_dir(): + """ Static access method to return the logdir. """ + return OraEnv.logdir__ + + @staticmethod + def statelogfile_name(): + """ Static access method to return the state logfile name. """ + if "STATE_LOGFILE_NAME" not in OraEnv.__env_var_dict: + return OraEnv.logdir__ + "/.statefile" + else: + return OraEnv.__env_var_dict["STATE_LOGFILE_NAME"] + + @staticmethod + def logfile_name(file_type): + """ Static access method to return the logfile name. 
""" + if file_type == "NONE": + if "LOGFILE_NAME" not in OraEnv.__env_var_dict: + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_rac_setup.log" + elif file_type == "DEL_PARAMS": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_rac_del.log" + elif file_type == "RESET_PASSWORD": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_rac_reset_passwd.log" + elif file_type == "ADD_TNS": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_rac_populate_tns_file.log" + elif file_type == "CHECK_RAC_INST": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_check_rac_inst_file.log" + elif file_type == "CHECK_GI_LOCAL": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_check_gi_local_file.log" + elif file_type == "CHECK_RAC_DB": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_check_rac_db_file.log" + elif file_type == "CHECK_DB_ROLE": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_check_db_role.log" + elif file_type == "CHECK_CONNECT_STR": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_check_conn_str_file.log" + elif file_type == "CHECK_PDB_CONNECT_STR": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_check_pdb_conn_str_file.log" + elif file_type == "SETUP_DB_LSNR": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/setup_db_lsnr.log" + elif file_type == "SETUP_LOCAL_LSNR": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/setup_local_lsnr.log" + elif file_type == "CHECK_DB_VERSION": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/check_db_version.log" + elif file_type == "CHECK_DB_SVC": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/check_db_svc.log" + elif file_type == "MODIFY_DB_SVC": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/modify_db_svc.log" + elif file_type == "CHECK_RAC_STATUS": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/check_racdb_status.log" + elif file_type == "MODIFY_SCAN": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_modify_scan_status.log" + elif file_type == "UPDATE_ASMCOUNT": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_update_asmcount_status.log" + elif file_type == "UPDATE_ASMDEVICES": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_update_asmdevices_status.log" + elif file_type == "UPDATE_LISTENERENDP": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_update_listenerendp_status.log" + elif file_type == "LIST_ASMDG": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_list_asmdg_status.log" + elif file_type == "LIST_ASMDISKS": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_list_asmdisks_status.log" + elif file_type == "LIST_ASMDGREDUNDANCY": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_list_asmdgredudancy_status.log" + elif file_type == "LIST_ASMINSTNAME": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_list_asminstname_status.log" + elif file_type == "LIST_ASMINSTSTATUS": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_list_amsinst_status.log" + elif file_type == "UPDATE_LISTENERENDP": + OraEnv.__env_var_dict["LOG_FILE_NAME"] = OraEnv.logdir__ + "/oracle_update_listenerendp_status.log" + else: + pass + + return OraEnv.__env_var_dict["LOG_FILE_NAME"] diff --git 
a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/orafactory.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/orafactory.py new file mode 100755 index 0000000000..17e1905937 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/orafactory.py @@ -0,0 +1,209 @@ +#!/usr/bin/python + +############################# +# Copyright 2020, Oracle Corporation and/or affiliates. All rights reserved. +# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +############################ + +""" + This file contains to the code call different classes objects based on setup type +""" + +import os +import sys +import re +sys.path.insert(0, "/opt/scripts/startup/scripts") + + +from oralogger import * +from oraenv import * +from oracommon import * +from oramachine import * +from oragiprov import * +from oragiadd import * +from orasshsetup import * +from oraracadd import * +from oraracprov import * +from oraracdel import * +from oramiscops import * + +class OraFactory: + """ + This is a class for calling child objects to setup RAC/DG/GRID/DB/Sharding based on OP_TYPE env variable. + + Attributes: + oralogger (object): object of OraLogger Class. + ohandler (object): object of Handler class. + oenv (object): object of singleton OraEnv class. + ocommon(object): object of OraCommon class. + ora_env_dict(dict): Dict of env variable populated based on env variable for the setup. + file_name(string): Filename from where logging message is populated. + """ + def __init__(self,oralogger,orahandler,oraenv,oracommon): + """ + This is a class for calling child objects to setup RAC/DG/GRID/DB based on OP_TYPE env variable. + + Attributes: + oralogger (object): object of OraLogger Class. + ohandler (object): object of Handler class. + oenv (object): object of singleton OraEnv class. + ocommon(object): object of OraCommon class. + ora_env_dict(dict): Dict of env variable populated based on env variable for the setup. + file_name(string): Filename from where logging message is populated. + """ + self.ologger = oralogger + self.ohandler = orahandler + self.oenv = oraenv.get_instance() + self.ocommon = oracommon + self.ocvu = OraCvu(self.ologger,self.ohandler,self.oenv,self.ocommon) + self.osetupssh = OraSetupSSH(self.ologger,self.ohandler,self.oenv,self.ocommon) + self.ora_env_dict = oraenv.get_env_vars() + self.file_name = os.path.basename(__file__) + def get_ora_objs(self): + ''' + Return the instance of a classes which will setup the enviornment. 
+ + Returns: + ofactory_obj: List of objects + ''' + ofactory_obj = [] + + msg='''ora_env_dict set to : {0}'''.format(self.ora_env_dict) + self.ocommon.log_info_message(msg,self.file_name) + + msg='''Adding machine setup object in orafactory''' + self.ocommon.log_info_message(msg,self.file_name) + omachine=OraMachine(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(omachine) + + msg="Checking the OP_TYPE and Version to begin the installation" + self.ocommon.log_info_message(msg,self.file_name) + + # Checking the OP_TYPE + op_type=None + if self.ocommon.check_key("CUSTOM_RUN_FLAG",self.ora_env_dict): + if self.ocommon.check_key("OP_TYPE",self.ora_env_dict): + op_type=self.ora_env_dict["OP_TYPE"] + + self.ocommon.populate_rac_env_vars() + if self.ocommon.check_key("OP_TYPE",self.ora_env_dict): + if op_type is not None: + self.ocommon.update_key("OP_TYPE",op_type,self.ora_env_dict) + msg='''OP_TYPE variable is set to {0}.'''.format(self.ora_env_dict["OP_TYPE"]) + self.ocommon.log_info_message(msg,self.file_name) + else: + self.ora_env_dict=self.ocommon.add_key("OP_TYPE","nosetup",self.ora_env_dict) + msg="OP_TYPE variable is set to default nosetup. No value passed as an enviornment variable." + self.ocommon.log_info_message(msg,self.file_name) + #default version as 0 integer, will read from rsp file + version=0 + if self.ocommon.check_key("GRID_RESPONSE_FILE",self.ora_env_dict): + gridrsp=self.ora_env_dict["GRID_RESPONSE_FILE"] + self.ocommon.log_info_message("GRID_RESPONSE_FILE parameter is set and file location is:" + gridrsp ,self.file_name) + + if os.path.isfile(gridrsp): + with open(gridrsp) as fp: + for line in fp: + if len(line.split("=")) == 2: + key=(line.split("=")[0]).strip() + value=(line.split("=")[1]).strip() + self.ocommon.log_info_message("KEY and Value pair set to: " + key + ":" + value ,self.file_name) + if key == "oracle.install.responseFileVersion": + match = re.search(r'v(\d{2})', value) + if match: + version=int(match.group(1)) + else: + # Default to version 23 if no match is found + version=23 + #print version in logs + msg="Version detected in response file is {0}".format(version) + self.ocommon.log_info_message(msg,self.file_name) + ## Calling this function from here to make sure INSTALL_NODE is set + if version == int(19) or version == int(21): + self.ocommon.update_pre_23c_gi_env_vars_from_rspfile() + else: + # default to read when its either set as 23 in response file or if response file is not present + self.ocommon.update_gi_env_vars_from_rspfile() + # Check the OP_TYPE value and call objects based on it value + install_node,pubhost=self.ocommon.get_installnode() + if install_node.lower() == pubhost.lower(): + if self.ora_env_dict["OP_TYPE"] == 'setupgrid': + msg="Creating and calling instance to provGrid" + ogiprov = OraGIProv(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(ogiprov) + elif self.ora_env_dict["OP_TYPE"] == 'setuprac': + msg="Creating and calling instance to prov RAC DB" + oracdb = OraRacProv(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(oracdb) + elif self.ora_env_dict["OP_TYPE"] in ['setuprac,catalog','catalog,setuprac']: + msg="Creating and calling instance to prov RAC DB for catalog setup" + oracdb = 
OraRacProv(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(oracdb) + elif self.ora_env_dict["OP_TYPE"] in ['setuprac,primaryshard','primaryshard,setuprac']: + msg="Creating and calling instance to prov RAC DB for primary shard" + oracdb = OraRacProv(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(oracdb) + elif self.ora_env_dict["OP_TYPE"] in ['setuprac,standbyshard','standbyshard,setuprac']: + msg="Creating and calling instance to prov RAC DB for standby shard setup" + oracdb = OraRacProv(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(oracdb) + elif self.ora_env_dict["OP_TYPE"] == 'setupssh': + msg="Creating and calling instance to setup ssh between computes" + ossh = self.osetupssh + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(ossh) + elif self.ora_env_dict["OP_TYPE"] == 'setupracstandby': + msg="Creating and calling instance to setup RAC standby database" + oracstdby = OraRacStdby(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(oracstdby) + elif self.ora_env_dict["OP_TYPE"] == 'gridaddnode': + msg="Creating and calling instance to add grid" + oaddgi = OraGIAdd(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(oaddgi) + elif self.ora_env_dict["OP_TYPE"] == 'racaddnode': + msg="Creating and calling instance to add RAC node" + oaddrac = OraRacAdd(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(oaddrac) + elif self.ora_env_dict["OP_TYPE"] == 'setupenv': + msg="Creating and calling instance to setup the racenv" + osetupenv = OraSetupEnv(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(osetupenv) + elif self.ora_env_dict["OP_TYPE"] == 'racdelnode': + msg="Creating and calling instance to delete the rac node" + oracdel = OraRacDel(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(oracdel) + elif self.ora_env_dict["OP_TYPE"] == 'miscops': + msg="Creating and calling instance to perform the miscellenous operations" + oramops = OraMiscOps(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.rotate_log_files() + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(oramops) + else: + msg="OP_TYPE must be set to {setupgrid|setuprac|setupssh|setupracstandby|gridaddnode|racaddnode}" + self.ocommon.log_info_message(msg,self.file_name) + elif install_node.lower() != pubhost.lower() and self.ocommon.check_key("CUSTOM_RUN_FLAG",self.ora_env_dict): + if self.ora_env_dict["OP_TYPE"] == 'miscops': + msg="Creating and calling instance to perform the miscellenous operations" + oramops = OraMiscOps(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.rotate_log_files() + self.ocommon.log_info_message(msg,self.file_name) + ofactory_obj.append(oramops) + else: + msg="INSTALL_NODE {0} is not 
matching with the hostname {1}. Resetting OP_TYPE to nosetup.".format(install_node,pubhost) + self.ocommon.log_info_message(msg,self.file_name) + self.ocommon.update_key("OP_TYPE","nosetup",self.ora_env_dict) + + + return ofactory_obj diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oragiadd.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oragiadd.py new file mode 100755 index 0000000000..c24efe2048 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oragiadd.py @@ -0,0 +1,314 @@ +#!/usr/bin/python + +############################# +# Copyright 2021, Oracle Corporation and/or affiliates. All rights reserved. +# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +############################ + +""" + This file contains to the code call different classes objects based on setup type +""" + +import os +import sys +import traceback + +from oralogger import * +from oraenv import * +from oracommon import * +from oramachine import * +from orasetupenv import * +from orasshsetup import * +from oracvu import * +from oragiprov import * + +class OraGIAdd: + """ + This class performs the CVU checks + """ + def __init__(self,oralogger,orahandler,oraenv,oracommon,oracvu,orasetupssh): + try: + self.ologger = oralogger + self.ohandler = orahandler + self.oenv = oraenv.get_instance() + self.ocommon = oracommon + self.ora_env_dict = oraenv.get_env_vars() + self.file_name = os.path.basename(__file__) + self.ocvu = oracvu + self.osetupssh = orasetupssh + self.ogiprov = OraGIProv(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + + except BaseException as ex: + traceback.print_exc(file = sys.stdout) + + def setup(self): + """ + This function setup the grid on this machine + """ + self.ocommon.log_info_message("Start setup()",self.file_name) + ct = datetime.datetime.now() + bts = ct.timestamp() + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + pubhostname = self.ocommon.get_public_hostname() + retcode1=self.ocvu.check_home(pubhostname,gihome,giuser) + if retcode1 == 0: + bstr="Grid home is already installed on this machine" + self.ocommon.log_info_message(self.ocommon.print_banner(bstr),self.file_name) + if self.ocommon.check_key("GI_HOME_INSTALLED_FLAG",self.ora_env_dict): + bstr="Grid is already configured on this machine" + self.ocommon.log_info_message(self.ocommon.print_banner(bstr),self.file_name) + else: + self.env_param_checks() + self.ocommon.log_info_message("Start perform_ssh_setup()",self.file_name) + self.perform_ssh_setup() + self.ocommon.log_info_message("End perform_ssh_setup()",self.file_name) + if self.ocommon.check_key("COPY_GRID_SOFTWARE",self.ora_env_dict): + self.ocommon.log_info_message("Start crs_sw_install()",self.file_name) + self.ogiprov.crs_sw_install() + self.ocommon.log_info_message("End crs_sw_install()",self.file_name) + self.ogiprov.run_orainstsh() + self.ocommon.log_info_message("Start ogiprov.run_rootsh()",self.file_name) + self.ogiprov.run_rootsh() + self.ocommon.log_info_message("End ogiprov.run_rootsh()",self.file_name) + self.ocvu.check_addnode() + self.ocommon.log_info_message("Start crs_sw_configure()",self.file_name) + gridrsp=self.crs_sw_configure() + self.ocommon.log_info_message("End crs_sw_configure()",self.file_name) + self.run_orainstsh() + self.ocommon.log_info_message("Start run_rootsh()",self.file_name) + self.run_rootsh() + 
self.ocommon.log_info_message("End run_rootsh()",self.file_name) + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + crs_nodes=pub_nodes.replace(" ",",") + for node in crs_nodes.split(","): + self.clu_checks(node) + if self.ocommon.detect_k8s_env(): + self.ocommon.run_custom_scripts("CUSTOM_GRID_SCRIPT_DIR","CUSTOM_GRID_SCRIPT_FILE",giuser) + self.ocommon.update_scan(giuser,gihome,None,pubhostname) + self.ocommon.start_scan(giuser,gihome,pubhostname) + self.ocommon.update_scan_lsnr(giuser,gihome,pubhostname) + self.ocommon.start_scan_lsnr(giuser,gihome,pubhostname) + ct = datetime.datetime.now() + ets = ct.timestamp() + totaltime=ets - bts + self.ocommon.log_info_message("Total time for setup() = [ " + str(round(totaltime,3)) + " ] seconds",self.file_name) + + def env_param_checks(self): + """ + Perform the env setup checks + """ + self.scan_check() + self.ocommon.check_env_variable("GRID_HOME",True) + self.ocommon.check_env_variable("GRID_BASE",True) + self.ocommon.check_env_variable("INVENTORY",True) +# self.ocommon.check_env_variable("ASM_DISCOVERY_DIR",None) + + def scan_check(self): + """ + Check if scan is set + """ + if self.ocommon.check_key("GRID_RESPONSE_FILE",self.ora_env_dict): + self.ocommon.log_info_message("GRID_RESPONSE_FILE is set. Ignoring checking SCAN_NAME as CVU will validate responsefile",self.file_name) + else: + if self.ocommon.check_key("SCAN_NAME",self.ora_env_dict): + self.ocommon.log_info_message("SCAN_NAME variable is set: " + self.ora_env_dict["SCAN_NAME"],self.file_name) + # ipaddr=self.ocommon.get_ip(self.ora_env_dict["SCAN_NAME"]) + # status=self.ocommon.validate_ip(ipaddr) + # if status: + # self.ocommon.log_info_message("SCAN_NAME is a valid IP. Check passed...",self.file_name) + # else: + # self.ocommon.log_error_message("SCAN_NAME is not a valid IP. Check failed. Exiting...",self.file_name) + # self.ocommon.prog_exit("127") + # else: + # self.ocommon.log_error_message("SCAN_NAME is not set. Exiting...",self.file_name) + # self.ocommon.prog_exit("127") + + def clu_checks(self,hostname): + """ + Performing clu checks + """ + self.ocommon.log_info_message("Performing CVU checks before DB home installation to make sure clusterware is up and running",self.file_name) + retcode1=self.ocvu.check_ohasd(hostname) + retcode2=self.ocvu.check_asm(hostname) + retcode3=self.ocvu.check_clu(hostname,None) + + if retcode1 == 0: + msg="Cluvfy ohasd check passed!" + self.ocommon.log_info_message(msg,self.file_name) + else: + msg="Cluvfy ohasd check faild. Exiting.." + self.ocommon.log_error_message(msg,self.file_name) + self.ocommon.prog_exit("127") + + if retcode2 == 0: + msg="Cluvfy asm check passed!" + self.ocommon.log_info_message(msg,self.file_name) + else: + msg="Cluvfy asm check faild. Exiting.." + self.ocommon.log_error_message(msg,self.file_name) + self.ocommon.prog_exit("127") + + if retcode3 == 0: + msg="Cluvfy clumgr check passed!" + self.ocommon.log_info_message(msg,self.file_name) + else: + msg="Cluvfy clumgr check faild. Exiting.." 
+ self.ocommon.log_error_message(msg,self.file_name) + self.ocommon.prog_exit("127") + + def perform_ssh_setup(self): + """ + Perform ssh setup + """ + if not self.ocommon.detect_k8s_env(): + user=self.ora_env_dict["GRID_USER"] + ohome=self.ora_env_dict["GRID_HOME"] + self.osetupssh.setupssh(user,ohome,'ADDNODE') + #if self.ocommon.check_key("VERIFY_SSH",self.ora_env_dict): + #self.osetupssh.verifyssh(user,'ADDNODE') + else: + self.ocommon.log_info_message("SSH setup must be already completed during env setup as this this k8s env.",self.file_name) + + def crs_sw_configure(self): + """ + This function performs the crs software install on all the nodes + """ + ohome=self.ora_env_dict["GRID_HOME"] + gridrsp="" + if self.ocommon.check_key("GRID_RESPONSE_FILE",self.ora_env_dict): + gridrsp=self.check_responsefile() + else: + gridrsp=self.prepare_responsefile() + + node="" + nodeflag=False + existing_crs_nodes=self.ocommon.get_existing_clu_nodes(True) + for cnode in existing_crs_nodes.split(","): + retcode3=self.ocvu.check_clu(cnode,True) + if retcode3 == 0: + node=cnode + nodeflag=True + break + + #self.ocvu.cluvfy_addnode(gridrsp,self.ora_env_dict["GRID_HOME"],self.ora_env_dict["GRID_USER"]) + if node: + user=self.ora_env_dict["GRID_USER"] + self.ocommon.scpfile(node,gridrsp,gridrsp,user) + status=self.ocommon.check_home_inv(None,ohome,user) + if status: + self.ocommon.sync_gi_home(node,ohome,user) + cmd=self.ocommon.get_sw_cmd("ADDNODE",gridrsp,node,None) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + self.ocommon.check_crs_sw_install(output) + else: + self.ocommon.log_error_message("Clusterware is not up on any node : " + existing_crs_nodes + ".Exiting...",self.file_name) + self.ocommon.prog_exit("127") + + return gridrsp + + def check_responsefile(self): + """ + This function returns the valid response file + """ + gridrsp=None + if self.ocommon.check_key("GRID_RESPONSE_FILE",self.ora_env_dict): + gridrsp=self.ora_env_dict["GRID_RESPONSE_FILE"] + self.ocommon.log_info_message("GRID_RESPONSE_FILE parameter is set and file location is:" + gridrsp ,self.file_name) + + if os.path.isfile(gridrsp): + return gridrsp + else: + self.ocommon.log_error_message("Grid response file does not exist at its location: " + gridrsp + ".Exiting..",self.file_name) + self.ocommon.prog_exit("127") + + def prepare_responsefile(self): + """ + This function prepare the response file if no response file passed + """ + self.ocommon.log_info_message("Preparing Grid responsefile.",self.file_name) + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + ## Variable Assignments + #asmstr="/dev/asm*" + x = datetime.datetime.now() + rspdata="" + gridrsp='''{1}/grid_addnode_{0}.rsp'''.format(x.strftime("%f"),"/tmp") + clunodes=self.ocommon.get_crsnodes() + node="" + nodeflag=False + existing_crs_nodes=self.ocommon.get_existing_clu_nodes(True) + for cnode in existing_crs_nodes.split(","): + retcode3=self.ocvu.check_clu(cnode,True) + if retcode3 == 0: + node=cnode + nodeflag=True + break + + if not nodeflag: + self.ocommon.log_error_message("Unable to find any existing healthy cluster node to verify the cluster status. This can be a ssh problem or cluster is not healthy. 
Error occurred!") + self.ocommon.prog_exit("127") + + oraversion=self.ocommon.get_rsp_version("ADDNODE",node) + + version=oraversion.split(".",1)[0].strip() + if int(version) < 23: + rspdata=''' + oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v{3} + oracle.install.option=CRS_ADDNODE + ORACLE_BASE={0} + INVENTORY_LOCATION={1} + oracle.install.asm.OSDBA=asmdba + oracle.install.asm.OSOPER=asmoper + oracle.install.asm.OSASM=asmadmin + oracle.install.crs.config.clusterNodes={2} + oracle.install.crs.rootconfig.configMethod=ROOT + oracle.install.asm.configureAFD=false + oracle.install.crs.rootconfig.executeRootScript=false + oracle.install.crs.configureRHPS=false + '''.format(obase,invloc,clunodes,oraversion,"false") +# fdata="\n".join([s for s in rspdata.split("\n") if s]) + else: + rspdata=''' + oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v{3} + oracle.install.option=CRS_ADDNODE + ORACLE_BASE={0} + INVENTORY_LOCATION={1} + OSDBA=asmdba + OSOPER=asmoper + OSASM=asmadmin + clusterNodes={2} + configMethod=ROOT + configureAFD=false + executeRootScript=false + '''.format(obase,invloc,clunodes,oraversion,"false") + + self.ocommon.write_file(gridrsp,rspdata) + if os.path.isfile(gridrsp): + return gridrsp + else: + self.ocommon.log_error_message("Grid response file does not exist at its location: " + gridrsp + ".Exiting..",self.file_name) + self.ocommon.prog_exit("127") + + + def run_orainstsh(self): + """ + This function run the orainst after grid setup + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + for node in pub_nodes.split(" "): + cmd='''su - {0} -c "ssh {1} sudo {2}/orainstRoot.sh"'''.format(giuser,node,oinv) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + + def run_rootsh(self): + """ + This function run the root.sh after grid setup + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + for node in pub_nodes.split(" "): + cmd='''su - {0} -c "ssh {1} sudo {2}/root.sh"'''.format(giuser,node,gihome) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oragiprov.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oragiprov.py new file mode 100755 index 0000000000..75c36efad6 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oragiprov.py @@ -0,0 +1,611 @@ +#!/usr/bin/python + +############################# +# Copyright 2021-2024, Oracle Corporation and/or affiliates. All rights reserved. 
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl
+# Author: paramdeep.saini@oracle.com
+# Contributor: saurabh.ahuja@oracle.com
+############################
+
+"""
+ This file contains the code to call different class objects based on the setup type
+"""
+
+from oralogger import *
+from oraenv import *
+from oracommon import *
+from oramachine import *
+from orasetupenv import *
+from orasshsetup import *
+from oracvu import *
+import time
+
+import os
+import sys
+import subprocess
+import traceback
+import datetime
+
+class OraGIProv:
+      """
+      This class provisions the Oracle Grid Infrastructure (CRS) on the cluster nodes
+      """
+      def __init__(self,oralogger,orahandler,oraenv,oracommon,oracvu,orasetupssh):
+        try:
+          self.ologger = oralogger
+          self.ohandler = orahandler
+          self.oenv = oraenv.get_instance()
+          self.ocommon = oracommon
+          self.ora_env_dict = oraenv.get_env_vars()
+          self.file_name = os.path.basename(__file__)
+          self.osetupssh = orasetupssh
+          self.ocvu = oracvu
+          self.stopThreaFlag = False
+          self.mythread = {}
+          self.myproc = {}
+        except BaseException as ex:
+          ex_type, ex_value, ex_traceback = sys.exc_info()
+          trace_back = traceback.extract_tb(ex_traceback)
+          stack_trace = list()
+          for trace in trace_back:
+              stack_trace.append("File : %s , Line : %d, Func.Name : %s, Message : %s" % (trace[0], trace[1], trace[2], trace[3]))
+          self.ocommon.log_info_message(ex_type.__name__,self.file_name)
+          self.ocommon.log_info_message(ex_value,self.file_name)
+          self.ocommon.log_info_message(stack_trace,self.file_name)
+
+      def setup(self):
+          """
+          This function sets up the grid on this machine
+          """
+          self.ocommon.log_info_message("Start setup()",self.file_name)
+          ct = datetime.datetime.now()
+          bts = ct.timestamp()
+          giuser,gihome,obase,invloc=self.ocommon.get_gi_params()
+          pubhostname = self.ocommon.get_public_hostname()
+          retcode1=1
+          if not self.ocommon.check_key("GI_SW_UNZIPPED_FLAG",self.ora_env_dict):
+             retcode1=self.ocvu.check_home(pubhostname,gihome,giuser)
+          if retcode1 == 0:
+             bstr="Grid home is already installed on this machine"
+             self.ocommon.log_info_message(self.ocommon.print_banner(bstr),self.file_name)
+          if self.ocommon.check_key("GI_HOME_CONFIGURED_FLAG",self.ora_env_dict):
+             bstr="Grid is already configured on this machine"
+             self.ocommon.log_info_message(self.ocommon.print_banner(bstr),self.file_name)
+          else:
+             self.env_param_checks()
+             self.ocommon.reset_os_password(giuser)
+             self.ocommon.log_info_message("Start perform_ssh_setup()",self.file_name)
+             self.perform_ssh_setup()
+             self.ocommon.log_info_message("End perform_ssh_setup()",self.file_name)
+             if self.ocommon.check_key("RESET_FAILED_SYSTEMD",self.ora_env_dict):
+                self.ocommon.log_info_message("Start reset_failed_units()",self.file_name)
+                self.reset_failed_units_on_all_nodes()
+             if self.ocommon.check_key("PERFORM_CVU_CHECKS",self.ora_env_dict):
+                self.ocommon.log_info_message("Start ocvu.node_reachability_checks()",self.file_name)
+                self.ocvu.node_reachability_checks("public",self.ora_env_dict["GRID_USER"],"INSTALL")
+                self.ocommon.log_info_message("End ocvu.node_reachability_checks()",self.file_name)
+                self.ocommon.log_info_message("Start ocvu.node_connectivity_checks()",self.file_name)
+                self.ocvu.node_connectivity_checks("public",self.ora_env_dict["GRID_USER"],"INSTALL")
+                self.ocommon.log_info_message("End ocvu.node_connectivity_checks()",self.file_name)
+             if retcode1 != 0 and self.ocommon.check_key("COPY_GRID_SOFTWARE",self.ora_env_dict):
+                self.ocommon.log_info_message("Start crs_sw_instal()",self.file_name)
+                
self.crs_sw_install() + self.ocommon.log_info_message("End crs_sw_instal()",self.file_name) + self.ocommon.log_info_message("Start run_rootsh() and run_orainstsh()",self.file_name) + self.run_orainstsh() + self.run_rootsh() + self.ocommon.log_info_message("End run_rootsh() and run_orainstsh()",self.file_name) + self.ocommon.log_info_message("Start install_cvuqdisk_on_all_nodes()",self.file_name) + self.install_cvuqdisk_on_all_nodes() + self.ocommon.log_info_message("Start crs_config_install()",self.file_name) + gridrsp=self.crs_config_install() + self.ocommon.log_info_message("End crs_config_install()",self.file_name) + self.ocommon.log_info_message("Start run_rootsh()",self.file_name) + self.run_rootsh() + self.ocommon.log_info_message("End run_rootsh()",self.file_name) + self.ocommon.log_info_message("Start execute_postconfig()",self.file_name) + self.run_postroot(gridrsp) + self.ocommon.log_info_message("End execute_postconfig()",self.file_name) + retcode1=self.ocvu.check_ohasd(None) + retcode3=self.ocvu.check_clu(None,None) + if retcode1 != 0 and retcode3 != 0: + self.ocommon.log_info_message("Cluster state is not healthy. Exiting..",self.file_name) + self.ocommon.prog_exit("127") + else: + self.ora_env_dict=self.ocommon.add_key("CLUSTER_SETUP_FLAG","running",self.ora_env_dict) + + self.ocommon.run_custom_scripts("CUSTOM_GRID_SCRIPT_DIR","CUSTOM_GRID_SCRIPT_FILE",giuser) + + ct = datetime.datetime.now() + ets = ct.timestamp() + totaltime=ets - bts + self.ocommon.log_info_message("Total time for setup() = [ " + str(round(totaltime,3)) + " ] seconds",self.file_name) + + def env_param_checks(self): + """ + Perform the env setup checks + """ + if not self.ocommon.check_key("CRS_GPC",self.ora_env_dict): + self.scan_check() + self.ocommon.check_env_variable("GRID_HOME",True) + self.ocommon.check_env_variable("GRID_BASE",True) + self.ocommon.check_env_variable("INVENTORY",True) + self.ocommon.check_env_variable("ASM_DISCOVERY_DIR",None) + + def scan_check(self): + """ + Check if scan is set + """ + if self.ocommon.check_key("GRID_RESPONSE_FILE",self.ora_env_dict): + self.ocommon.log_info_message("GRID_RESPONSE_FILE is set. Ignoring checking SCAN_NAME as CVU will validate responsefile",self.file_name) + else: + if self.ocommon.check_key("SCAN_NAME",self.ora_env_dict): + self.ocommon.log_info_message("SCAN_NAME variable is set: " + self.ora_env_dict["SCAN_NAME"],self.file_name) + #ipaddr=self.ocommon.get_ip(self.ora_env_dict["SCAN_NAME"]) + #status=self.ocommon.validate_ip(ipaddr) + #if status: + # self.ocommon.log_info_message("SCAN_NAME is a valid IP. Check passed...",self.file_name) + #else: + # self.ocommon.log_error_message("SCAN_NAME is not a valid IP. Check failed. Exiting...",self.file_name) + # self.ocommon.prog_exit("127") + else: + self.ocommon.log_error_message("SCAN_NAME is not set. Exiting...",self.file_name) + self.ocommon.prog_exit("127") + + def perform_ssh_setup(self): + """ + Perform ssh setup + """ + #if not self.ocommon.detect_k8s_env(): + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + crs_nodes=pub_nodes.replace(" ",",") + crs_nodes_list=crs_nodes.split(",") + if len(crs_nodes_list) == 1: + self.ocommon.log_info_message("Cluster size=1. 
Node=" + crs_nodes_list[0],self.file_name) + user=self.ora_env_dict["GRID_USER"] + cmd='''su - {0} -c "/bin/rm -rf ~/.ssh ; sleep 1; /bin/ssh-keygen -t rsa -q -N \'\' -f ~/.ssh/id_rsa ; sleep 1; /bin/ssh-keyscan {1} > ~/.ssh/known_hosts 2>/dev/null ; sleep 1; /bin/cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys"'''.format(user,crs_nodes_list[0]) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + else: + if not self.ocommon.check_key("SSH_PRIVATE_KEY",self.ora_env_dict) and not self.ocommon.check_key("SSH_PUBLIC_KEY",self.ora_env_dict): + user=self.ora_env_dict["GRID_USER"] + ohome=self.ora_env_dict["GRID_HOME"] + self.osetupssh.setupssh(user,ohome,"INSTALL") + #if self.ocommon.check_key("VERIFY_SSH",self.ora_env_dict): + # self.osetupssh.verifyssh(user,"INSTALL") + else: + self.ocommon.log_info_message("SSH setup must be already completed during env setup as this this env variables SSH_PRIVATE_KEY and SSH_PUBLIC_KEY are set.",self.file_name) + + def crs_sw_install(self): + """ + This function performs the crs software install on all the nodes + """ + giuser,gihome,gibase,oinv=self.ocommon.get_gi_params() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + crs_nodes=pub_nodes.replace(" ",",") + osdba=self.ora_env_dict["OSDBA_GROUP"] if self.ocommon.check_key("OSDBA",self.ora_env_dict) else "asmdba" + osoper=self.ora_env_dict["OSPER_GROUP"] if self.ocommon.check_key("OSPER_GROUP",self.ora_env_dict) else "asmoper" + osasm=self.ora_env_dict["OSASM_GROUP"] if self.ocommon.check_key("OSASM_GROUP",self.ora_env_dict) else "asmadmin" + unixgrp="oinstall" + hostname=self.ocommon.get_public_hostname() + lang=self.ora_env_dict["LANGUAGE"] if self.ocommon.check_key("LANGUAGE",self.ora_env_dict) else "en" + + #copyflag=" -noCopy " + copyflag=" -noCopy " + if not self.ocommon.check_key("COPY_GRID_SOFTWARE",self.ora_env_dict): + copyflag=" -noCopy " + + oraversion=self.ocommon.get_rsp_version("INSTALL",None) + version=oraversion.split(".",1)[0].strip() + + ## Clering the dictionary + self.mythread.clear() + mythreads=[] + #self.mythread.clear() + myproc=[] + + for node in pub_nodes.split(" "): + #self.crs_sw_install_on_node(giuser,copyflag,crs_nodes,oinv,gihome,gibase,osdba,osoper,osasm,version,node) + self.ocommon.log_info_message("Running CRS Sw install on node " + node,self.file_name) + #thread=Thread(target=self.ocommon.crs_sw_install_on_node,args=(giuser,copyflag,crs_nodes,oinv,gihome,gibase,osdba,osoper,osasm,version,node)) + ##thread.setDaemon(True) + #mythreads.append(thread) + + thread=Process(target=self.ocommon.crs_sw_install_on_node,args=(giuser,copyflag,crs_nodes,oinv,gihome,gibase,osdba,osoper,osasm,version,node)) + #thread.setDaemon(True) + mythreads.append(thread) + thread.start() + +# for thread in mythreads: +# thread.start() +# sleep(10) +# self.ocommon.log_info_message("Starting thread ",self.file_name) + + for thread in mythreads: # iterates over the threads + thread.join() # waits until the thread has finished work + self.ocommon.log_info_message("Joining the threads ",self.file_name) + + def crs_config_install(self): + """ + This function performs the crs software install on all the nodes + """ + gridrsp="" + netmasklist=None + + if self.ocommon.check_key("GRID_RESPONSE_FILE",self.ora_env_dict): + gridrsp,netmasklist=self.check_responsefile() + else: + gridrsp,netmasklist=self.prepare_responsefile() + + if self.ocommon.check_key("PERFORM_CVU_CHECKS",self.ora_env_dict): + 
self.ocvu.cluvfy_checkrspfile(gridrsp,self.ora_env_dict["GRID_HOME"],self.ora_env_dict["GRID_USER"]) + cmd=self.ocommon.get_sw_cmd("INSTALL",gridrsp,None,netmasklist) + passwd=self.ocommon.get_asm_passwd().replace('\n', ' ').replace('\r', '') + self.ocommon.set_mask_str(passwd) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.unset_mask_str() + self.ocommon.check_os_err(output,error,retcode,None) + self.check_crs_config_install(output) + + return gridrsp + def parse_gridrsp_file(self, filename): + """ + Parses the grid_setup_new_23ai.rsp file and extracts network interface details into a formatted string. + + Args: + filename: The name of the grid_setup_new_23ai.rsp file. + + Returns: + A string containing the formatted network interface list. + """ + netmasklist = "" + with open(filename, 'r') as f: + for line in f: + if line.startswith('networkInterfaceList='): + self.ocommon.log_info_message("networkInterfaceList parameter is found from response file in line:" + line, self.file_name) + # Extract network interface details + interface_data = line.strip().split('=')[1].split(',') + for interface in interface_data: + nwname, _, suffix = interface.split(':') + if interface.endswith(":1"): + subnet_mask = "255.255.0.0" # Hardcoded subnet mask for public interfaces with ":1" + # self.ocommon.log_info_message(f"Subnet mask (hardcoded for :1): {subnet_mask}", self.file_name) + else: + try: + subnet_mask = self.ocommon.get_netmask_info(nwname) + # self.ocommon.log_info_message(f"Subnet mask (from ocommon): {subnet_mask}", self.file_name) + except Exception as e: + self.ocommon.log_warning_message(f"Failed to retrieve subnet mask for {nwname} using ocommon, using default (may be inaccurate)", self.file_name) + subnet_mask = "255.255.255.0" # Default subnet mask if retrieval fails + self.ocommon.log_info_message(f"Default subnet mask used: {subnet_mask}", self.file_name) + + netmasklist += f"{nwname}:{subnet_mask}," + + # Remove the trailing comma + netmasklist = netmasklist[:-1] + self.ocommon.log_info_message("netmasklist parameter is set and returned from parse_gridrsp_file method:" + netmasklist ,self.file_name) + return netmasklist + + def check_responsefile(self): + """ + This function returns the valid response file + """ + gridrsp=None + netmasklist = "" + if self.ocommon.check_key("GRID_RESPONSE_FILE",self.ora_env_dict): + gridrsp=self.ora_env_dict["GRID_RESPONSE_FILE"] + self.ocommon.log_info_message("GRID_RESPONSE_FILE parameter is set and file location is:" + gridrsp ,self.file_name) + netmasklist = self.parse_gridrsp_file(gridrsp) + self.ocommon.log_info_message("netmasklist parameter is set to:" + netmasklist ,self.file_name) + + if os.path.isfile(gridrsp): + return gridrsp, netmasklist + else: + self.ocommon.log_error_message("Grid response file does not exist at its location: " + gridrsp + ".Exiting..",self.file_name) + self.ocommon.prog_exit("127") + + def prepare_responsefile(self): + """ + This function prepare the response file if no response file passed + """ + self.ocommon.log_info_message("Preparing Grid responsefile.",self.file_name) + asmfg_disk="" + asm_disk="" + gimrfg_disk="" + gimr_disk="" + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + dgred=self.ora_env_dict["CRS_ASMDG_REDUNDANCY"] if self.ocommon.check_key("CRS_ASMDG_REDUNDANCY",self.ora_env_dict) else "EXTERNAL" + asmfg_disk,asm_disk=self.ocommon.build_asm_device("CRS_ASM_DEVICE_LIST",dgred) + if self.ocommon.check_key("CLUSTER_TYPE",self.ora_env_dict): + if 
self.ora_env_dict["CLUSTER_TYPE"] == 'DOMAIN': + gimrfg_disk,gimr_disk=self.ocommon.build_asm_device("GIMR_ASM_DEVICE_LIST",dgred) + + ## Variable Assignments + clusterusage="GENERAL_PURPOSE" if self.ocommon.check_key("CRS_GPC",self.ora_env_dict) else "RAC" + crsconfig="HA_CONFIG" if self.ocommon.check_key("CRS_GPC",self.ora_env_dict) else "CRS_CONFIG" + if clusterusage != "GENERAL_PURPOSE": + scanname=self.ora_env_dict["SCAN_NAME"] + scanport=self.ora_env_dict["SCAN_PORT"] if self.ocommon.check_key("SCAN_PORT",self.ora_env_dict) else "1521" + else: + scanname="" + scanport="" + clutype=self.ora_env_dict["CLUSTER_TYPE"] if self.ocommon.check_key("CLUSTER_TYPE",self.ora_env_dict) else "STANDALONE" + cluname=self.ora_env_dict["CLUSTER_NAME"] if self.ocommon.check_key("CLUSTER_NAME",self.ora_env_dict) else "racnode-c" + clunodes=self.ocommon.get_crsnodes() + nwiface,netmasklist=self.ocommon.get_nwifaces() + gimrflag=self.ora_env_dict["GIMR_FLAG"] if self.ocommon.check_key("GIMR",self.ora_env_dict) else "false" + passwd=self.ocommon.get_asm_passwd().replace('\n', ' ').replace('\r', '') + dgname=self.ocommon.rmdgprefix(self.ora_env_dict["CRS_ASM_DISKGROUP"]) if self.ocommon.check_key("CRS_ASM_DISKGROUP",self.ora_env_dict) else "DATA" + fgname=asmfg_disk + asmdisk=asm_disk + discovery_str=self.ocommon.build_asm_discovery_str("CRS_ASM_DEVICE_LIST") + asmstr=self.ora_env_dict["CRS_ASM_DISCOVERY_STRING"] if self.ocommon.check_key("CRS_ASM_DISCOVERY_STRING",self.ora_env_dict) else discovery_str + oraversion=self.ocommon.get_rsp_version("INSTALL",None) + self.ocommon.log_info_message("oraversion" + oraversion, self.file_name) + disksWithFGNames=asmdisk.replace(',',',,') + ',' + self.ocommon.log_info_message("disksWithFGNames" + disksWithFGNames, self.file_name) + gridrsp="/tmp/grid.rsp" + + version=oraversion.split(".",1)[0].strip() + self.ocommon.log_info_message("disk" + version, self.file_name) + if int(version) < 23: + if self.ocommon.check_key("CRS_GPC",self.ora_env_dict): + clsnodes=None + return self.get_responsefile(obase,invloc,scanname,scanport,clutype,cluname,clunodes,nwiface,gimrflag,passwd,dgname,dgred,fgname,asmdisk,asmstr,disksWithFGNames,oraversion,gridrsp,netmasklist,crsconfig) + else: + return self.get_23c_responsefile(obase,invloc,scanname,scanport,clutype,cluname,clunodes,nwiface,gimrflag,passwd,dgname,dgred,fgname,asmdisk,asmstr,disksWithFGNames,oraversion,gridrsp,netmasklist,clusterusage) + + + def get_responsefile(self,obase,invloc,scanname,scanport,clutype,cluname,clunodes,nwiface,gimrflag,passwd,dgname,dgred,fgname,asmdisk,asmstr,disksWithFGNames,oraversion,gridrsp,netmasklist,crsconfig): + """ + This function prepare the response file if no response file passed + """ + self.ocommon.log_info_message("I am in get_responsefile", self.file_name) + rspdata=''' + oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v{15} + oracle.install.option={19} + ORACLE_BASE={0} + INVENTORY_LOCATION={1} + oracle.install.asm.OSDBA=asmdba + oracle.install.asm.OSOPER=asmoper + oracle.install.asm.OSASM=asmadmin + oracle.install.crs.config.gpnp.scanName={2} + oracle.install.crs.config.gpnp.scanPort={3} + oracle.install.crs.config.clusterName={5} + oracle.install.crs.config.clusterNodes={6} + oracle.install.crs.config.networkInterfaceList={7} + oracle.install.crs.configureGIMR={8} + oracle.install.asm.SYSASMPassword={9} + oracle.install.asm.monitorPassword={9} + oracle.install.crs.config.storageOption= + oracle.install.asm.diskGroup.name={10} + 
oracle.install.asm.diskGroup.redundancy={11} + oracle.install.asm.diskGroup.AUSize=4 + oracle.install.asm.diskGroup.disksWithFailureGroupNames={18} + oracle.install.asm.diskGroup.disks={13} + oracle.install.asm.diskGroup.quorumFailureGroupNames= + oracle.install.asm.diskGroup.diskDiscoveryString={14} + oracle.install.crs.rootconfig.configMethod=ROOT + oracle.install.asm.configureAFD=false + oracle.install.crs.rootconfig.executeRootScript=false + oracle.install.crs.config.ignoreDownNodes=false + oracle.install.config.managementOption=NONE + oracle.install.crs.configureRHPS={16} + oracle.install.crs.config.ClusterConfiguration={17} + '''.format(obase,invloc,scanname,scanport,clutype,cluname,clunodes,nwiface,gimrflag,passwd,dgname,dgred,fgname,asmdisk,asmstr,oraversion,"false","STANDALONE",disksWithFGNames,crsconfig) +# fdata="\n".join([s for s in rspdata.split("\n") if s]) + self.ocommon.write_file(gridrsp,rspdata) + if os.path.isfile(gridrsp): + return gridrsp,netmasklist + else: + self.ocommon.log_error_message("Grid response file does not exist at its location: " + gridrsp + ".Exiting..",self.file_name) + self.ocommon.prog_exit("127") + + def get_23c_responsefile(self,obase,invloc,scanname,scanport,clutype,cluname,clunodes,nwiface,gimrflag,passwd,dgname,dgred,fgname,asmdisk,asmstr,disksWithFGNames,oraversion,gridrsp,netmasklist,clusterusage): + """ + This function prepare the response file if no response file passed + """ + self.ocommon.log_info_message("I am in get_23c_responsefile", self.file_name) + rspdata=''' + oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v{15} + installOption=CRS_CONFIG + ORACLE_BASE={0} + INVENTORY_LOCATION={1} + OSDBA=asmdba + OSOPER=asmoper + OSASM=asmadmin + clusterUsage={16} + scanName={2} + scanPort={3} + clusterName={5} + clusterNodes={6} + networkInterfaceList={7} + storageOption= + diskGroupName={10} + redundancy={11} + auSize=4 + disksWithFailureGroupNames={17} + diskList={13} + quorumFailureGroupNames= + diskString={14} + configMethod=ROOT + configureAFD=false + executeRootScript=false + ignoreDownNodes=false + managementOption=NONE + '''.format(obase,invloc,scanname,scanport,clutype,cluname,clunodes,nwiface,gimrflag,passwd,dgname,dgred,fgname,asmdisk,asmstr,oraversion,clusterusage,disksWithFGNames) +# fdata="\n".join([s for s in rspdata.split("\n") if s]) + self.ocommon.write_file(gridrsp,rspdata) + if os.path.isfile(gridrsp): + return gridrsp,netmasklist + else: + self.ocommon.log_error_message("Grid response file does not exist at its location: " + gridrsp + ".Exiting..",self.file_name) + self.ocommon.prog_exit("127") + + def check_crs_config_install(self,swdata): + """ + This function check the if the sw install went fine + """ + #if not self.ocommon.check_substr_match(swdata,"orainstRoot.sh"): + # self.ocommon.log_error_message("Grid software install failed. Exiting...",self.file_name) + # self.ocommon.prog_exit("127") + if not self.ocommon.check_substr_match(swdata,"root.sh"): + self.ocommon.log_error_message("Grid software install failed. Exiting...",self.file_name) + self.ocommon.prog_exit("127") + if not self.ocommon.check_substr_match(swdata,"executeConfigTools -responseFile"): + self.ocommon.log_error_message("Grid software install failed. 
Exiting...",self.file_name) + self.ocommon.prog_exit("127") + + def check_crs_sw_install(self,swdata): + """ + This function check the if the sw install went fine + """ + if not self.ocommon.check_substr_match(swdata,"orainstRoot.sh"): + self.ocommon.log_error_message("Grid software install failed. Exiting...",self.file_name) + self.ocommon.prog_exit("127") + if not self.ocommon.check_substr_match(swdata,"root.sh"): + self.ocommon.log_error_message("Grid software install failed. Exiting...",self.file_name) + self.ocommon.prog_exit("127") + + def run_orainstsh(self): + """ + This function run the orainst after grid setup + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + for node in pub_nodes.split(" "): + cmd='''su - {0} -c "ssh {1} sudo {2}/orainstRoot.sh"'''.format(giuser,node,oinv) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + + def run_rootsh(self): + """ + This function run the root.sh after grid setup + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + # Clear the dict + self.mythread.clear() + mythreads=[] + for node in pub_nodes.split(" "): + oraversion=self.ocommon.get_rsp_version("INSTALL",None) + version = oraversion.split(".", 1)[0].strip() + self.ocommon.log_info_message("oraversion" + version, self.file_name) + if int(version) == 19 or int(version) == 21: + self.run_rootsh_on_node(node,giuser,gihome) + else: + self.ocommon.log_info_message("Running root.sh on node " + node,self.file_name) + thread=Process(target=self.run_rootsh_on_node,args=(node,giuser,gihome)) + mythreads.append(thread) + thread.start() + for thread in mythreads: # iterates over the threads + thread.join() # waits until the thread has finished wor + self.ocommon.log_info_message("Joining the root.sh thread ",self.file_name) + + def run_rootsh_on_node(self,node,giuser,gihome): + """ + This function run root.sh on a node + """ + cmd='''su - {0} -c "ssh {1} sudo {2}/root.sh"'''.format(giuser,node,gihome) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) +# if len(self.mythread) > 0: +# if node in self.mythread.keys(): +# swthread_list=self.mythread[node] +# value=swthread_list[0] +# new_list=[value,'FALSE'] +# new_val={node,tuple(new_list)} +# self.mythread.update(new_val) + + def run_postroot(self,gridrsp): + """ + This function execute the post root steps: + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + cmd='''su - {0} -c "{1}/gridSetup.sh -executeConfigTools -responseFile {2} -silent"'''.format(giuser,gihome,gridrsp) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + + def reset_systemd(self): + """ + This function reset the systemd + This function reset the systemd + """ + pass + while True: + self.ocommon.log_info_message("Root.sh is running. 
Resetting systemd to avoid failure.",self.file_name) + cmd='''systemctl reset-failed'''.format() + cmd='''systemctl reset-failed'''.format() + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + cmd = '''systemctl is-system-running'''.format() + cmd = '''systemctl is-system-running'''.format() + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + sleep(3) + if self.stopThreaFlag: + break + def reset_failed_units_on_all_nodes(self): + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + for node in pub_nodes.split(" "): + self.ocommon.log_info_message("Running reset_failed_units() on node " + node,self.file_name) + self.reset_failed_units(node) + + def reset_failed_units(self,node): + RESET_FAILED_SYSTEMD = 'true' + SERVICE_NAME = "rhnsd" + SCRIPT_DIR = "/opt/scripts/startup/scripts" + RESET_FAILED_UNITS = "resetFailedUnits.sh" + GRID_USER = "grid" + CRON_JOB_FREQUENCY = "* * * * *" + + def error_exit(message): + raise Exception(message) + + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + + if RESET_FAILED_SYSTEMD != 'false': + if subprocess.run(["pgrep", "-x", SERVICE_NAME], stdout=subprocess.DEVNULL).returncode == 0: + self.ocommon.log_info_message(SERVICE_NAME + " is running.",self.file_name) + # Check if the service is responding + if subprocess.run(["systemctl", "is-active", "--quiet", SERVICE_NAME]).returncode != 0: + self.ocommon.log_info_message(SERVICE_NAME + " is not responding. Stopping the service.",self.file_name) + cmd='''su - {0} -c "ssh {1} sudo systemctl stop {2}"'''.format(giuser,node,SERVICE_NAME) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + cmd='''su - {0} -c "ssh {1} sudo systemctl disable {2}"'''.format(giuser,node,SERVICE_NAME) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + self.ocommon.log_info_message(SERVICE_NAME + "stopped.",self.file_name) + else: + self.ocommon.log_info_message(SERVICE_NAME + " is responsive. 
No action needed.",self.file_name) + else: + self.ocommon.log_info_message(SERVICE_NAME + " is not running.",self.file_name) + + self.ocommon.log_info_message("Setting Crontab",self.file_name) + cmd = '''su - {0} -c "ssh {1} 'sudo crontab -l | {{ cat; echo \\"{2} {3}/{4}\\"; }} | sudo crontab -'"'''.format(giuser, node, CRON_JOB_FREQUENCY, SCRIPT_DIR, RESET_FAILED_UNITS) + try: + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + self.ocommon.log_info_message("Successfully installed " + SCRIPT_DIR + "/" + RESET_FAILED_UNITS + " using crontab",self.file_name) + except subprocess.CalledProcessError: + error_exit("Error occurred in crontab setup") + + def install_cvuqdisk_on_all_nodes(self): + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + for node in pub_nodes.split(" "): + self.ocommon.log_info_message("Running install_cvuqdisk() on node " + node,self.file_name) + self.install_cvuqdisk(node) + + def install_cvuqdisk(self,node): + rpm_directory = "/u01/app/23c/grid/cv/rpm" + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + try: + # Construct the rpm command using wildcard for version + cmd = '''su - {0} -c "ssh {1} 'sudo rpm -Uvh {2}/cvuqdisk-*.rpm'"'''.format(giuser, node, rpm_directory) + # Run the rpm command using subprocess + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + self.ocommon.log_info_message("Successfully installed cvuqdisk file.",self.file_name) + + except subprocess.CalledProcessError as e: + self.ocommon.log_error_message("Error installing cvuqdisk. Exiting..." + e,self.file_name) diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oragridadd.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oragridadd.py new file mode 100755 index 0000000000..a4885b3ac9 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oragridadd.py @@ -0,0 +1,53 @@ +#!/usr/bin/python + +############################# +# Copyright 2021, Oracle Corporation and/or affiliates. All rights reserved. 
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl
+# Author: paramdeep.saini@oracle.com
+############################
+
+"""
+ This file contains the code to call different class objects based on the setup type
+"""
+
+from oralogger import *
+from oraenv import *
+from oracommon import *
+from oramachine import *
+from orasetupenv import *
+from oraracstdby import *
+from oraracadd import *
+from oracvu import *
+from orasshsetup import *
+
+import os
+import sys
+import traceback
+
+class OraGridAdd:
+      """
+      This class adds the Grid instances
+      """
+      def __init__(self,oralogger,orahandler,oraenv,oracommon):
+        try:
+          self.ologger = oralogger
+          self.ohandler = orahandler
+          self.oenv = oraenv.get_instance()
+          self.ocommon = oracommon
+          self.ora_env_dict = oraenv.get_env_vars()
+          self.file_name = os.path.basename(__file__)
+          self.osetupssh = OraSetupSSH(self.ologger,self.ohandler,self.oenv,self.ocommon)
+          self.ocvu = OraCvu(self.ologger,self.ohandler,self.oenv,self.ocommon)
+        except BaseException as ex:
+          ex_type, ex_value, ex_traceback = sys.exc_info()
+          trace_back = traceback.extract_tb(ex_traceback)
+          stack_trace = list()
+          for trace in trace_back:
+              stack_trace.append("File : %s , Line : %d, Func.Name : %s, Message : %s" % (trace[0], trace[1], trace[2], trace[3]))
+          self.ocommon.log_info_message(ex_type.__name__,self.file_name)
+          self.ocommon.log_info_message(ex_value,self.file_name)
+          self.ocommon.log_info_message(stack_trace,self.file_name)
+      def setup(self):
+          """
+          This function sets up the grid on this machine
+          """
+          pass
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oralogger.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oralogger.py
new file mode 100755
index 0000000000..552fedc7b2
--- /dev/null
+++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oralogger.py
@@ -0,0 +1,182 @@
+#!/usr/bin/python
+
+#############################
+# Copyright 2020, Oracle Corporation and/or affiliates. All rights reserved.
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl
+# Author: paramdeep.saini@oracle.com
+############################
+
+"""
+ This file provides the functionality to log events to the console and to a file
+"""
+
+import logging
+import os
+
+class LoggingType(object):
+      CONSOLE = 1
+      FILE = 2
+      STDOUT = 3
+
+class OraLogger(object):
+      """
+      This is a class constructor which sets parameters for the logger.
+
+      Attributes:
+         filename_ (string): Filename which we need to set to store logs in a file.
+      """
+      def __init__(self, filename_):
+        """
+        This is a class constructor which sets parameters for the logger.
+
+        Attributes:
+           filename_ (string): Filename which we need to set to store logs in a file.
+ """ + self.filename_ = filename_ + # Set to default values can be changed later from other classes objects + self.console_ = LoggingType.CONSOLE + self.file_ = LoggingType.FILE + self.stdout_ = LoggingType.STDOUT + self.msg_ = None + self.functname_ = None + self.lineno_ = None + self.logtype_ = "INFO" + self.fmtstr_ = "%(asctime)s: %(levelname)s: %(message)s" + self.datestr_ = "%m/%d/%Y %I:%M:%S %p" + self.root = logging.getLogger() + self.root.setLevel(logging.DEBUG) + self.formatter = logging.Formatter('%(asctime)s %(levelname)8s:%(message)s', "%m/%d/%Y %I:%M:%S %p") + self.stdoutfile_ = "/proc/1/fd/1" + #self.stdoutfile_ = "/dev/pts/0" + # self.stdoutfile_ = "/tmp/test.log" + + def getStdOutValue(self): + return self.stdout_ + +class Handler(object): + """ + This is a class which sets the handler for next logger. + """ + def __init__(self): + """ + This is a handler class constructor and nexthandler is set to None. + """ + self.nextHandler = None + + def handle(self, request): + ''' + This is a function which set the next handler. + + Attributes: + request (object): Object of the class oralogger. + ''' + self.nextHandler.handle(request) + + def print_message(self,request,lhandler): + """ + This function set the log type to INFO, WARN, DEBUG and CRITICAL. + + Attribute: + request (object): Object of the class oralogger. + lhandler: This parameter accept the loghandler. + """ + if request.logtype_ == "WARN": + request.root.warning(request.msg_) + elif request.logtype_ == "DEBUG": + request.root.debug(request.msg_) + elif request.logtype_ == "CRITICAL": + request.root.critical(request.msg_) + elif request.logtype_ == "ERROR": + request.root.error(request.msg_) + else: + request.root.info(request.msg_) + + request.root.removeHandler(lhandler) + +class FHandler(Handler): + """ + This is a class which sets the handler for next logger. + """ + def handle(self,request): + """ + This function print the message and call next handler. + + Attribut: + request: Object of OraLogger + """ + if request.file_ == LoggingType.FILE: + fh = logging.FileHandler(request.filename_) + request.root.addHandler(fh) + fh.setFormatter(request.formatter) + self.print_message(request,fh) + super(FHandler, self).handle(request) + else: + super(FHandler, self).handle(request) + + def print_message(self,request,fh): + """ + This function log the message to console/file/stdout. + """ + super(FHandler, self).print_message(request,fh) + +class CHandler(Handler): + """ + This is a class which sets the handler for next logger. + """ + def handle(self,request): + """ + This function print the message and call next handler. + + Attribute: + request: Object of OraLogger + """ + if request.console_ == LoggingType.CONSOLE: + # ch = logging.StreamHandler() + ch = logging.FileHandler("/tmp/test.log") + request.root.addHandler(ch) + ch.setFormatter(request.formatter) + self.print_message(request,ch) + super(CHandler, self).handle(request) + else: + super(CHandler, self).handle(request) + + def print_message(self,request,ch): + """ + This function log the message to console/file/stdout. + """ + super(CHandler, self).print_message(request,ch) + + +class StdHandler(Handler): + """ + This is a class which sets the handler for next logger. + """ + def handle(self,request): + """ + This function print the message and call next handler. 
+ + Attribute: + request: Object of OraLogger + """ + request.stdout_ = request.getStdOutValue() + if request.stdout_ == LoggingType.STDOUT: + st = logging.FileHandler(request.stdoutfile_) + request.root.addHandler(st) + st.setFormatter(request.formatter) + self.print_message(request,st) + super(StdHandler, self).handle(request) + else: + super(StdHandler, self).handle(request) + + def print_message(self,request,st): + """ + This function log the message to console/file/stdout. + """ + super(StdHandler, self).print_message(request,st) + +class PassHandler(Handler): + """ + This is a class which sets the handler for next logger. + """ + def handle(self, request): + pass diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oramachine.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oramachine.py new file mode 100755 index 0000000000..bffbedfd9f --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oramachine.py @@ -0,0 +1,63 @@ +#!/usr/bin/python + +############################# +# Copyright 2020, Oracle Corporation and/or affiliates. All rights reserved. +# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +############################ + +from oralogger import * +from oraenv import * +from oracommon import * +from oramachine import * +from orasetupenv import * + +import os +import sys + +class OraMachine: + """ + This calss setup the compute before starting the installation. + """ + def __init__(self,oralogger,orahandler,oraenv,oracommon,oracvu,orasetupssh): + """ + This constructor of OraMachine class to setup the compute + + Attributes: + oralogger (object): object of OraLogger Class. + ohandler (object): object of Handler class. + oenv (object): object of singleton OraEnv class. + ocommon(object): object of OraCommon class. + ora_env_dict(dict): Dict of env variable populated based on env variable for the setup. + file_name(string): Filename from where logging message is populated. + """ + self.ologger = oralogger + self.ohandler = orahandler + self.oenv = oraenv.get_instance() + self.ocommon = oracommon + self.ora_env_dict = oraenv.get_env_vars() + self.file_name = os.path.basename(__file__) + self.ocvu = oracvu + self.osetupssh = orasetupssh + self.osetupenv = OraSetupEnv(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + def setup(self): + """ + This function setup the compute before starting the installation + """ + self.ocommon.log_info_message("Start setup()",self.file_name) + ct = datetime.datetime.now() + bts = ct.timestamp() + + self.memory_check() + self.osetupenv.setup() + + ct = datetime.datetime.now() + ets = ct.timestamp() + totaltime=ets - bts + self.ocommon.log_info_message("Total time for setup() = [ " + str(round(totaltime,3)) + " ] seconds",self.file_name) + + def memory_check(self): + """ + This function check the memory available inside the container + """ + pass diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oramiscops.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oramiscops.py new file mode 100755 index 0000000000..3824bdceae --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oramiscops.py @@ -0,0 +1,788 @@ +#!/usr/bin/python + +############################# +# Copyright 2021, Oracle Corporation and/or affiliates. All rights reserved. 
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: sanjay.singh@oracle.com,paramdeep.saini@oracle.com +############################ + +""" + This file contains to the code call different classes objects based on setup type +""" + +import os +import sys +import traceback + +from oralogger import * +from oraenv import * +from oracommon import * +from oramachine import * +from orasetupenv import * +from orasshsetup import * +from oracvu import * +from oragiprov import * +from oraasmca import * +from oraracdel import * +from oraracadd import * +from oraracprov import * +from oraracstdby import * + +class OraMiscOps: + """ + This class performs the misc RAC options such as RAC delete + """ + def __init__(self,oralogger,orahandler,oraenv,oracommon,oracvu,orasetupssh): + try: + self.ologger = oralogger + self.ohandler = orahandler + self.oenv = oraenv.get_instance() + self.ocommon = oracommon + self.ora_env_dict = oraenv.get_env_vars() + self.file_name = os.path.basename(__file__) + self.osetupssh = orasetupssh + self.ocvu = oracvu + self.oracstdby = OraRacStdby(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + except BaseException as ex: + traceback.print_exc(file = sys.stdout) + + def setup(self): + """ + This function setup the RAC home on this machine + """ + self.ocommon.log_info_message("Start setup()",self.file_name) + ct = datetime.datetime.now() + bts = ct.timestamp() + self.ocommon.update_gi_env_vars_from_rspfile() + if self.ocommon.check_key("DBCA_RESPONSE_FILE",self.ora_env_dict): + self.ocommon.update_rac_env_vars_from_rspfile(self.ora_env_dict["DBCA_RESPONSE_FILE"]) + if self.ocommon.check_key("DEL_RACHOME",self.ora_env_dict): + self.delracnode() + else: + pass + + if self.ocommon.check_key("TNS_PARAMS",self.ora_env_dict): + self.populate_tnsfile() + else: + pass + + if self.ocommon.check_key("CHECK_RAC_INST",self.ora_env_dict): + self.checkraclocal() + else: + pass + + if self.ocommon.check_key("CHECK_RAC_STATUS",self.ora_env_dict): + mode1=self.checkracinst() + if mode1=='OPEN': + sys.exit(0) + else: + sys.exit(127) + else: + pass + + if self.ocommon.check_key("CHECK_GI_LOCAL",self.ora_env_dict): + self.checkgilocal() + else: + pass + + if self.ocommon.check_key("CHECK_RAC_DB",self.ora_env_dict): + self.checkracdb() + else: + pass + + if self.ocommon.check_key("CHECK_DB_ROLE",self.ora_env_dict): + self.checkdbrole() + else: + pass + + + if self.ocommon.check_key("CHECK_CONNECT_STR",self.ora_env_dict): + self.checkconnstr() + else: + pass + + if self.ocommon.check_key("CHECK_PDB_CONNECT_STR",self.ora_env_dict): + self.checkpdbconnstr() + else: + pass + + if self.ocommon.check_key("NEW_DB_LSNR_ENDPOINTS",self.ora_env_dict): + self.setupdblsnr() + else: + pass + + if self.ocommon.check_key("NEW_LOCAL_LISTENER",self.ora_env_dict): + self.setuplocallsnr() + else: + pass + + if self.ocommon.check_key("CHECK_DB_SVC",self.ora_env_dict): + self.checkdbsvc() + else: + pass + + if self.ocommon.check_key("MODIFY_DB_SVC",self.ora_env_dict): + self.modifydbsvc() + else: + pass + + if self.ocommon.check_key("CHECK_DB_VERSION",self.ora_env_dict): + self.checkdbversion() + else: + pass + + if self.ocommon.check_key("RESET_PASSWORD",self.ora_env_dict): + self.resetpassword() + else: + pass + if self.ocommon.check_key("MODIFY_SCAN",self.ora_env_dict): + self.modifyscan() + else: + pass + if self.ocommon.check_key("UPDATE_ASMCOUNT",self.ora_env_dict): + self.updateasmcount() + else: + pass + if 
self.ocommon.check_key("UPDATE_LISTENERENDP",self.ora_env_dict): + self.updatelistenerendp() + else: + pass + if self.ocommon.check_key("LIST_ASMDG",self.ora_env_dict): + self.listasmdg() + else: + pass + if self.ocommon.check_key("LIST_ASMDISKS",self.ora_env_dict): + self.listasmdisks() + else: + pass + if self.ocommon.check_key("LIST_ASMDGREDUNDANCY",self.ora_env_dict): + self.listasmdgredundancy() + else: + pass + if self.ocommon.check_key("LIST_ASMINSTNAME",self.ora_env_dict): + self.listasminstname() + else: + pass + if self.ocommon.check_key("LIST_ASMINSTSTATUS",self.ora_env_dict): + self.listasminststatus() + else: + pass + if self.ocommon.check_key("UPDATE_ASMDEVICES",self.ora_env_dict): + self.updateasmdevices() + else: + pass + + + ct = datetime.datetime.now() + ets = ct.timestamp() + totaltime=ets - bts + self.ocommon.log_info_message("Total time for setup() = [ " + str(round(totaltime,3)) + " ] seconds",self.file_name) + + def delracnode(self): + """ + This function delete the racnode + """ + self.ocommon.del_node_params("DEL_PARAMS") + msg="Creating and calling instance to delete the rac node" + oracdel = OraRacDel(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + oracdel.setup() + + def populate_tnsfile(self): + """ + This function populate the tns entry + """ + scanname,scanport,dbuname=self.process_tns_params("TNS_PARAMS") + osuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + self.oracstdby.create_local_tns_enteries(dbhome,dbuname,scanname,scanport,osuser,"oinstall") + tnsfile='''{0}/network/admin/tnsnames.ora'''.format(dbhome) + self.ocommon.copy_file_cluster(tnsfile,tnsfile,osuser) + + def process_tns_params(self,key): + """ + Process TNS params + """ + scanname=None + scanport=None + dbuname=None + + self.ocommon.log_info_message("Processing TNS Params",self.file_name) + cvar_str=self.ora_env_dict[key] + cvar_str=cvar_str.replace('"', '') + cvar_dict=dict(item.split("=") for item in cvar_str.split(";")) + for ckey in cvar_dict.keys(): + if ckey == 'scan_name': + scanname = cvar_dict[ckey] + if ckey == 'scan_port': + scanport = cvar_dict[ckey] + if ckey == 'db_unique_name': + dbuname = cvar_dict[ckey] + + if not scanport: + scanport=1521 + + if scanname and scanport and dbuname: + return scanname,scanport,dbuname + else: + msg1='''scan_name={0},scan_port={1}'''.format((scanname or "Missing Value"),(scanport or "Missing Value")) + self.ocommon.log_info_message(msg1,self.file_name) + msg2='''db_unique_name={0}'''.format((dbuname or "Missing Value")) + self.ocommon.log_info_message(msg2,self.file_name) + self.ocommon.prog_exit("Error occurred") + + def checkracdb(self): + """ + This will verify RAC DB + """ + status="" + mode="" + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + retcode1=self.ocvu.check_home(None,dbhome,dbuser) + retcode1=0 + if retcode1 != 0: + status="RAC_NOT_INSTALLED_OR_CONFIGURED" + else: + mode=self.checkracsvc() + status=mode + + msg='''Database state is {0}'''.format(status) + self.ocommon.log_info_message(msg,self.file_name) + print(status) + + def checkconnstr(self): + """ + Check the connect str + """ + status="" + mode="" + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + retcode1=self.ocvu.check_home(None,dbhome,dbuser) + retcode1=0 + if retcode1 != 0: + status="RAC_NOT_INSTALLED_OR_CONFIGURED" + else: + state=self.checkracsvc() + if state == 'OPEN': + mode=self.getconnectstr() + else: + mode="NOTAVAILABLE" + + status=mode + + msg='''Database 
connect str is {0}'''.format(status) + self.ocommon.log_info_message(msg,self.file_name) + print(status) + + def checkpdbconnstr(self): + """ + Check the PDB connect str + """ + status="" + mode="" + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + retcode1=self.ocvu.check_home(None,dbhome,dbuser) + retcode1=0 + if retcode1 != 0: + status="RAC_NOT_INSTALLED_OR_CONFIGURED" + else: + state=self.checkracsvc() + if state == 'OPEN': + mode=self.getpdbconnectstr() + else: + mode="NOTAVAILABLE" + + status=mode + + msg='''PDB connect str is {0}'''.format(status) + self.ocommon.log_info_message(msg,self.file_name) + print(status) + + def checkdbrole(self): + """ + This will verify RAC DB Role + """ + status="" + mode="" + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + #retcode1=self.ocvu.check_home(None,dbhome,dbuser) + retcode1=0 + if retcode1 != 0: + status="RAC_NOT_INSTALLED_OR_CONFIGURED" + else: + mode=self.checkracsvc() + if (mode == "OPEN") or ( mode == "MOUNT"): + osuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + osid=self.ora_env_dict["DB_NAME"] if self.ocommon.check_key("DB_NAME",self.ora_env_dict) else "ORCLCDB" + scanname=self.ora_env_dict["SCAN_NAME"] + scanport=self.ora_env_dict["SCAN_PORT"] if self.ocommon.check_key("SCAN_PORT",self.ora_env_dict) else "1521" + connect_str=self.ocommon.get_sqlplus_str(dbhome,osid,osuser,"sys",'HIDDEN_STRING',scanname,scanport,osid,None,None,None) + status=self.ocommon.get_db_role(osuser,dbhome,osid,connect_str) + else: + status="NOTAVAILABLE" + + msg='''Database role set to {0}'''.format(status) + self.ocommon.log_info_message(msg,self.file_name) + print(status) + + def getconnectstr(self): + """ + get the connect str + """ + osuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + osid=self.ora_env_dict["DB_NAME"] if self.ocommon.check_key("DB_NAME",self.ora_env_dict) else "ORCLCDB" + scanname=self.ora_env_dict["SCAN_NAME"] + scanport=self.ora_env_dict["SCAN_PORT"] if self.ocommon.check_key("SCAN_PORT",self.ora_env_dict) else "1521" + ##connect_str=self.ocommon.get_sqlplus_str(dbhome,osid,osuser,"sys",'HIDDEN_STRING',scanname,scanport,osid,None,None,None) + connect_str='''{0}:{1}/{2}'''.format(scanname,scanport,osid) + + return connect_str + + def getpdbconnectstr(self): + """ + get the PDB connect str + """ + svcname=None + osuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + pdb=self.ora_env_dict["PDB_NAME"] if self.ocommon.check_key("PDB_NAME",self.ora_env_dict) else "ORCLPDB" + osid=self.ora_env_dict["DB_NAME"] if self.ocommon.check_key("DB_NAME",self.ora_env_dict) else "ORCLCDB" + scanname=self.ora_env_dict["SCAN_NAME"] + scanport=self.ora_env_dict["SCAN_PORT"] if self.ocommon.check_key("SCAN_PORT",self.ora_env_dict) else "1521" + sname,osid,opdb,sparams=self.ocommon.get_service_name() + status,msg=self.ocommon.check_db_service_status(sname,osid) + if status: + svcname = sname + else: + svcname = pdb + self.ocommon.log_info_message(msg,self.file_name) + ##connect_str=self.ocommon.get_sqlplus_str(dbhome,osid,osuser,"sys",'HIDDEN_STRING',scanname,scanport,osid,None,None,None) + connect_str='''{0}:{1}/{2}'''.format(scanname,scanport,svcname) + + return connect_str + + def checkracsvc(self): + """ + Check the RAC SVC + """ + mode="" + osuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + osid=self.ora_env_dict["DB_NAME"] if self.ocommon.check_key("DB_NAME",self.ora_env_dict) else "ORCLCDB" + scanname=self.ora_env_dict["SCAN_NAME"] + scanport=self.ora_env_dict["SCAN_PORT"] if 
self.ocommon.check_key("SCAN_PORT",self.ora_env_dict) else "1521" + connect_str=self.ocommon.get_sqlplus_str(dbhome,osid,osuser,"sys",'HIDDEN_STRING',scanname,scanport,osid,None,None,None) + status=self.ocommon.get_dbinst_status(osuser,dbhome,osid,connect_str) + if self.ocommon.check_substr_match(status,"OPEN"): + mode="OPEN" + elif self.ocommon.check_substr_match(status,"MOUNT"): + mode="MOUNT" + elif self.ocommon.check_substr_match(status,"NOMOUNT"): + mode="NOMOUNT" + else: + mode="NOTAVAILABLE" + + return mode + + def checkraclocal(self): + """ + Check the RAC software + """ + status="" + mode="" + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + retcode1=self.ocvu.check_home(None,dbhome,dbuser) + retcode1=0 + if retcode1 != 0: + status="RAC_NOT_INSTALLED_OR_CONFIGURED" + else: + mode=self.checkracinst() + status=mode + + msg='''Database instance state is {0}'''.format(status) + self.ocommon.log_info_message(msg,self.file_name) + print(status) + + def checkracinst(self): + """ + This function check the rac inst is up + """ + mode1="" + msg="Checking RAC instance status" + oracdb = OraRacProv(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.ocommon.log_info_message(msg,self.file_name) + status,osid,host,mode=self.ocommon.check_dbinst() + if self.ocommon.check_substr_match(mode,"OPEN"): + mode1="OPEN" + elif self.ocommon.check_substr_match(mode,"MOUNT"): + mode1="MOUNT" + elif self.ocommon.check_substr_match(mode,"NOMOUNT"): + mode1="NOMOUNT" + else: + mode1="NOTAVAILABLE" + + return mode1 + + def checkgilocal(self): + """ + Check GI + """ + status="" + retcode=self.checkgihome() + if retcode != 0: + status="GI_NOT_INSTALLED_OR_CONFIGURED" + else: + node=self.ocommon.get_public_hostname() + retcode1=self.checkclulocal(node) + if retcode1 != 0: + status="NOT HEALTHY" + else: + status="HEALTHY" + msg='''GI status is {0}'''.format(status) + self.ocommon.log_info_message(msg,self.file_name) + print(status) + + def checkclulocal(self,node): + """ + This function check the cluster health + """ + retcode=self.ocvu.check_clu(node,None) + return retcode + + def checkgihome(self): + """ + Check the GI home + """ + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + pubhostname = self.ocommon.get_public_hostname() + retcode1=self.ocvu.check_home(pubhostname,gihome,giuser) + return retcode1 + + def setupdblsnr(self): + """ + update db lsnr + """ + value=self.ora_env_dict["NEW_DB_LSNR_ENDPOINTS"] + self.ocommon.log_info_message("lsnr new end Points are set to :" + value,self.file_name ) + if self.check_key("DB_LISTENER_ENDPOINTS",self.ora_env_dict): + self.ocommon.log_info_message("lsnr old end points were set to :" + self.ora_env_dict["DB_LISTENER_ENDPOINTS"],self.file_name ) + self.ora_env_dict=self.update_key("DB_LISTENER_ENDPOINTS",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("DB_LISTENER_ENDPOINTS",value,self.ora_env_dict) + self.ocommon.setup_db_lsnr() + + def setuplocallsnr(self): + """ + update db lsnr + """ + value=self.ora_env_dict["NEW_LOCAL_LISTENER"] + self.ocommon.log_info_message("local lsnr new end Points are set to :" + value,self.file_name ) + if self.check_key("LOCAL_LISTENER",self.ora_env_dict): + self.ocommon.log_info_message("lsnr old end points were set to :" + self.ora_env_dict["LOCAL_LISTENER"],self.file_name ) + self.ora_env_dict=self.update_key("LOCAL_LISTENER",value,self.ora_env_dict) + else: + self.ora_env_dict=self.add_key("LOCAL_LISTENER",value,self.ora_env_dict) + self.ocommon.set_local_listener() + + 
def checkdbversion(self): + """ + This function check the db version + """ + output=self.ocommon.get_dbversion() + print(output) + + def checkdbsvc(self): + """ + This function check the db service + """ + svcname,osid,preferred,available=self.process_dbsvc_params("CHECK_DB_SVC") + #osuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + if svcname and osid: + status,msg=self.ocommon.check_db_service_status(svcname,osid) + self.ocommon.log_info_message(msg,self.file_name) + print(msg) + else: + print("NOTAVAILABLE") + + def modifydbsvc(self): + """ + This function check the db service + """ + svcname,osid,preferred,available=self.process_dbsvc_params("CHECK_DB_SVC") + #osuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + if svcname and osid and preferred: + status,msg=self.ocommon.check_db_service_status(svcname,osid) + self.ocommon.log_info_message(msg,self.file_name) + print(msg.strip("\r\n")) + else: + print("NOTAVAILABLE") + + def process_dbsvc_params(self,key): + """ + check svc params + """ + svcname=None + preferred=None + available=None + dbsid=None + + self.ocommon.log_info_message("processing service params",self.file_name) + cvar_str=self.ora_env_dict[key] + cvar_str=cvar_str.replace('"', '') + cvar_dict=dict(item.split("=") for item in cvar_str.split(";")) + for ckey in cvar_dict.keys(): + if ckey == 'service': + svcname = cvar_dict[ckey] + if ckey == 'preferred': + preferred = cvar_dict[ckey] + if ckey == 'available': + available = cvar_dict[ckey] + if ckey == 'dbname': + dbsid = cvar_dict[ckey] + + + return svcname,dbsid,preferred,available + + def resetpassword(self): + """ + resetting password + """ + user,pdb,type,containerall=self.process_dbsvc_params("CHECK_DB_SVC") + if type.lower() != 'os': + self.ocommon.reset_dbuser_passwd(user,pdb,containerall) + + def process_resetpasswd_params(self,key): + """ + process reset DB password params + """ + user=None + pdb=None + type=None + containerall=None + + self.ocommon.log_info_message("processing reset password params",self.file_name) + cvar_str=self.ora_env_dict[key] + cvar_str=cvar_str.replace('"', '') + cvar_dict=dict(item.split("=") for item in cvar_str.split(";")) + for ckey in cvar_dict.keys(): + if ckey == 'user': + user = cvar_dict[ckey] + if ckey == 'pdb': + pdb = cvar_dict[ckey] + if ckey == 'type': + type = cvar_dict[ckey] + if ckey == 'container': + containerall = "all" + + + return user,pdb,type,containerall + + def modifyscan(self): + """ + modify scan details + """ + status="" + msg="" + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + self.ocommon.log_info_message("modifing scan details params",self.file_name) + scanname=self.ora_env_dict["MODIFY_SCAN"] + retvalue=self.ocommon.modify_scan(giuser,gihome,scanname) + if not retvalue: + status="MODIFY_SCAN_NOT_UPDATED" + msg='''Scan Details not modified to {0}'''.format(scanname) + self.ocommon.log_info_message(msg,self.file_name) + print(status) + self.ocommon.prog_exit("Error occurred") + else: + msg='''Scan Details is now modified to {0}'''.format(scanname) + status="MODIFY_SCAN_UPDATED_SUCCESSFULLY" + self.ocommon.log_info_message(msg,self.file_name) + print(status) + + def updateasmcount(self): + """ + update asm count details + """ + status="" + msg="" + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + self.ocommon.log_info_message("updating asm count details params",self.file_name) + asmcount=self.ora_env_dict["UPDATE_ASMCOUNT"] + retvalue=self.ocommon.updateasmcount(giuser,gihome,asmcount) + if not retvalue: + 
status="UPDATE_ASMCOUNT_NOT_UPDATED" + msg='''ASM Counts Details is not updated to {0}'''.format(asmcount) + self.ocommon.log_info_message(msg,self.file_name) + print(status) + self.ocommon.prog_exit("Error occurred") + else: + msg='''ASM Counts Details is now updated to {0}'''.format(asmcount) + status="UPDATE_ASMCOUNT_UPDATED_SUCCESSFULLY" + self.ocommon.log_info_message(msg,self.file_name) + print(status) + + def process_listenerendpoint_params(self,key): + """ + check listenerendpoint params + """ + status="" + msg="" + listenername=None + portlist=None + + self.ocommon.log_info_message("processing listenerendpoint params {0}".format(key),self.file_name) + cvar_str=self.ora_env_dict[key] + self.ocommon.log_info_message("processing listenerendpoint params {0}".format(cvar_str),self.file_name) + cvar_str=cvar_str.replace('"', '') + try: + cvar_dict = dict(item.split("=") for item in cvar_str.split(";") if "=" in item) + except ValueError as e: + self.ocommon.prog_exit("Error occurred") + for ckey in cvar_dict.keys(): + if ckey == 'lsnrname': + listenername = cvar_dict[ckey] + if ckey == 'portlist': + portlist = cvar_dict[ckey] + return listenername,portlist + + def updatelistenerendp(self): + """ + update listener end points details + """ + status="" + msg="" + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + self.ocommon.log_info_message("updating listener end points details params",self.file_name) + listenername,portlist=self.process_listenerendpoint_params("UPDATE_LISTENERENDP") + retvalue=self.ocommon.updatelistenerendp(giuser,gihome,listenername,portlist) + if not retvalue: + status="UPDATE_LISTENERENDPOINT_NOT_UPDATED" + msg='''Listener {0} End Point Details is not updated to portlist {1}'''.format(listenername,portlist) + self.ocommon.log_info_message(msg,self.file_name) + print(status) + self.ocommon.prog_exit("Error occurred") + else: + msg='''Listener End Point Details is now updated to listenername-> {0} portlist-> {1}'''.format(listenername,portlist) + status="UPDATE_LISTENERENDPOINT_UPDATED_SUCCESSFULLY" + self.ocommon.log_info_message(msg,self.file_name) + print(status) + + def process_asmdevices_params(self,key): + """ + check asmdevices params + """ + status="" + msg="" + diskname=None + diskgroup=None + processtype=None + + self.ocommon.log_info_message("processing asmdevices params {0}".format(key),self.file_name) + cvar_str=self.ora_env_dict[key] + self.ocommon.log_info_message("processing asmdevices params {0}".format(cvar_str),self.file_name) + cvar_str=cvar_str.replace('"', '') + try: + cvar_dict = dict(item.split("=") for item in cvar_str.split(";") if "=" in item) + except ValueError as e: + self.ocommon.prog_exit("Error occurred") + for ckey in cvar_dict.keys(): + if ckey == 'diskname': + diskname = cvar_dict[ckey] + if ckey == 'diskgroup': + diskgroup = cvar_dict[ckey] + if ckey == 'processtype': + processtype = cvar_dict[ckey] + return diskname,diskgroup,processtype + + def updateasmdevices(self): + """ + update asm devices details + """ + status="" + msg="" + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + self.ocommon.log_info_message("updating asm devices details params",self.file_name) + diskname,diskgroup,processtype=self.process_asmdevices_params("UPDATE_ASMDEVICES") + retvalue=self.ocommon.updateasmdevices(giuser,gihome,diskname,diskgroup,processtype) + if not retvalue: + status="UPDATE_ASMDEVICES_NOT_UPDATED" + msg='''ASM Devices Details is not processed {0} to disk {1} for disk group {2}'''.format(processtype,diskname,diskgroup) + 
self.ocommon.log_info_message(msg,self.file_name) + print(status) + self.ocommon.prog_exit("Error occurred") + else: + msg='''ASM Devices Details is now processed {0} to disk {1} for disk group {2}'''.format(processtype,diskname,diskgroup) + status="UPDATE_ASMDEVICES_UPDATED_SUCCESSFULLY" + self.ocommon.log_info_message(msg,self.file_name) + print(status) + + def listasmdg(self): + """ + getting the ams details + """ + status="" + msg="" + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + self.ocommon.log_info_message("getting the asm diskgroup list",self.file_name) + retvalue=self.ocommon.check_asminst(giuser,gihome) + if retvalue == 0: + dglist=self.ocommon.get_asmdg(giuser,gihome) + print(dglist) + else: + print("NOT READY") + + def listasmdisks(self): + """ + getting the ams details + """ + status="" + msg="" + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + self.ocommon.log_info_message("getting the asm diskgroup list",self.file_name) + dg=self.ora_env_dict["LIST_ASMDISKS"] + retvalue=self.ocommon.check_asminst(giuser,gihome) + if retvalue == 0: + dsklist=self.ocommon.get_asmdsk(giuser,gihome,dg) + print(dsklist) + else: + print("NOT READY") + + def listasmdgredundancy(self): + """ + getting the asm disk redundancy + """ + status="" + msg="" + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + self.ocommon.log_info_message("getting the asm diskgroup list",self.file_name) + dg=self.ora_env_dict["LIST_ASMDGREDUNDANCY"] + retvalue=self.ocommon.check_asminst(giuser,gihome) + if retvalue == 0: + asmdgrd=self.ocommon.get_asmdgrd(giuser,gihome,dg) + print(asmdgrd) + else: + print("NOT READY") + + + def listasminststatus(self): + """ + getting the asm instance status + """ + status="" + msg="" + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + retvalue=self.ocommon.check_asminst(giuser,gihome) + if retvalue == 0: + print("STARTED") + else: + print("NOT_STARTED") + + def listasminstname(self): + """ + getting the asm disk redundancy + """ + status="" + msg="" + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + sid=self.ocommon.get_asmsid(giuser,gihome) + if sid is not None: + print(sid) + else: + print("NOT READY") diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraracadd.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraracadd.py new file mode 100755 index 0000000000..6db43726fb --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraracadd.py @@ -0,0 +1,227 @@ +#!/usr/bin/python + +############################# +# Copyright 2021, Oracle Corporation and/or affiliates. All rights reserved. 
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +############################ + +""" + This file contains to the code call different classes objects based on setup type +""" + +import os +import sys +import traceback + +from oralogger import * +from oraenv import * +from oracommon import * +from oramachine import * +from orasetupenv import * +from orasshsetup import * +from oraracstdby import * +from oraracadd import * +from oracvu import * +from oragiadd import * + +class OraRacAdd: + """ + This class Add the RAC home and RAC instances + """ + def __init__(self,oralogger,orahandler,oraenv,oracommon,oracvu,orasetupssh): + try: + self.ologger = oralogger + self.ohandler = orahandler + self.oenv = oraenv.get_instance() + self.ocommon = oracommon + self.ora_env_dict = oraenv.get_env_vars() + self.file_name = os.path.basename(__file__) + self.osetupssh = orasetupssh + self.ocvu = oracvu + self.ogiadd = OraGIAdd(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + except BaseException as ex: + traceback.print_exc(file = sys.stdout) + def setup(self): + """ + This function setup the grid on this machine + """ + self.ocommon.log_info_message("Start setup()",self.file_name) + ct = datetime.datetime.now() + bts = ct.timestamp() + sshFlag=False + self.ocommon.log_info_message("Start ogiadd.setup()",self.file_name) + self.ogiadd.setup() + self.ocommon.log_info_message("End ogiadd.setup()",self.file_name) + self.env_param_checks() + self.clu_checks() + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + retcode1=self.ocvu.check_home(None,dbhome,dbuser) + status=self.ocommon.check_rac_installed(retcode1) + if not status: + sshFlag=True + self.ocommon.log_info_message("Start perform_ssh_setup()",self.file_name) + self.perform_ssh_setup() + self.ocommon.log_info_message("End perform_ssh_setup()",self.file_name) + self.ocommon.log_info_message("Start db_sw_install()",self.file_name) + self.db_sw_install() + self.ocommon.log_info_message("End db_sw_install()",self.file_name) + self.ocommon.log_info_message("Start run_rootsh()",self.file_name) + self.run_rootsh() + self.ocommon.log_info_message("End run_rootsh()",self.file_name) + if not self.ocommon.check_key("SKIP_DBCA",self.ora_env_dict): + status,osid,host,mode=self.ocommon.check_dbinst() + hostname=self.ocommon.get_public_hostname() + if status: + msg='''Database instance {0} already exist on this machine {1}.'''.format(osid,hostname) + self.ocommon.update_statefile("completed") + self.ocommon.log_info_message(self.ocommon.print_banner(msg),self.file_name) + else: + if not sshFlag: + self.perform_ssh_setup() + self.ocommon.log_info_message("Start add_dbinst()",self.file_name) + self.add_dbinst() + self.ocommon.log_info_message("End add_dbinst()",self.file_name) + self.ocommon.log_info_message("Setting db listener",self.file_name) + self.ocommon.setup_db_lsnr() + self.ocommon.log_info_message("Setting local listener",self.file_name) + self.ocommon.set_local_listener() + self.ocommon.setup_db_service("modify") + sname,osid,opdb,sparams=self.ocommon.get_service_name() + if sname is not None: + self.ocommon.start_db_service(sname,osid) + self.ocommon.check_db_service_status(sname,osid) + self.ocommon.log_info_message("End create_db()",self.file_name) + self.ocommon.perform_db_check("ADDNODE") + self.ocommon.update_statefile("completed") + ct = datetime.datetime.now() + ets = ct.timestamp() + totaltime=ets - bts + 
self.ocommon.log_info_message("Total time for setup() = [ " + str(round(totaltime,3)) + " ] seconds",self.file_name) + + def env_param_checks(self): + """ + Perform the env setup checks + """ + self.ocommon.check_env_variable("DB_HOME",True) + self.ocommon.check_env_variable("DB_BASE",True) + self.ocommon.check_env_variable("INVENTORY",True) + + def clu_checks(self): + """ + Performing clu checks + """ + self.ocommon.log_info_message("Performing CVU checks on new nodes before DB home installation to make sure clusterware is up and running",self.file_name) + hostname=self.ocommon.get_public_hostname() + retcode1=self.ocvu.check_ohasd(hostname) + retcode2=self.ocvu.check_asm(hostname) + retcode3=self.ocvu.check_clu(hostname,None) + if retcode1 == 0: + msg="Cluvfy ohasd check passed!" + self.ocommon.log_info_message(msg,self.file_name) + else: + msg="Cluvfy ohasd check faild. Exiting.." + self.ocommon.log_error_message(msg,self.file_name) + self.ocommon.prog_exit("127") + + if retcode2 == 0: + msg="Cluvfy asm check passed!" + self.ocommon.log_info_message(msg,self.file_name) + else: + msg="Cluvfy asm check faild. Exiting.." + self.ocommon.log_error_message(msg,self.file_name) + self.ocommon.prog_exit("127") + + if retcode3 == 0: + msg="Cluvfy clumgr check passed!" + self.ocommon.log_info_message(msg,self.file_name) + else: + msg="Cluvfy clumgr check faild. Exiting.." + self.ocommon.log_error_message(msg,self.file_name) + self.ocommon.prog_exit("127") + + def perform_ssh_setup(self): + """ + Perform ssh setup + """ + if not self.ocommon.detect_k8s_env(): + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + self.osetupssh.setupssh(dbuser,dbhome,'ADDNODE') + #if self.ocommon.check_key("VERIFY_SSH",self.ora_env_dict): + #self.osetupssh.verifyssh(dbuser,'ADDNODE') + else: + self.ocommon.log_info_message("SSH setup must be already completed during env setup as this this k8s env.",self.file_name) + + def db_sw_install(self): + """ + Perform the db_install + """ + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + crs_nodes=pub_nodes.replace(" ",",") + hostname=self.ocommon.get_public_hostname() + existing_crs_nodes=self.ocommon.get_existing_clu_nodes(True) + oraversion=self.ocommon.get_rsp_version("ADDNODE",hostname) + version=oraversion.split(".",1)[0].strip() + node="" + nodeflag=False + cmd=None + for cnode in existing_crs_nodes.split(","): + retcode3=self.ocvu.check_clu(cnode,True) + if retcode3 == 0: + node=cnode + nodeflag=True + break + + copyflag="" + if not self.ocommon.check_key("COPY_GRID_SOFTWARE",self.ora_env_dict): + copyflag=" -noCopy " + + if nodeflag: + #cmd='''su - {0} -c "ssh -vvv {4} 'sh {1}/addnode/addnode.sh \\"CLUSTER_NEW_NODES={{{2}}}\\" -skipPrereqs -waitForCompletion -ignoreSysPrereqs {3} -silent'"'''.format(dbuser,dbhome,crs_nodes,copyflag,node) + if int(version) < 23: + cmd='''su - {0} -c "ssh -vvv {4} 'sh {1}/addnode/addnode.sh \\"CLUSTER_NEW_NODES={{{2}}}\\" -waitForCompletion {3} -silent'"'''.format(dbuser,dbhome,crs_nodes,copyflag,node) + else: + cmd='''su - {0} -c "ssh -vvv {4} 'sh {1}/addnode/addnode.sh \\"CLUSTER_NEW_NODES={{{2}}}\\" -waitForCompletion {3} -silent'"'''.format(dbuser,dbhome,crs_nodes,copyflag,node) + #cmd='''su - {0} -c "ssh -vvv {4} 'sh {1}/runInstaller -setupDBHome -OSDBA -OSBACKUPDBA -OSDGDBA -OSKMDBA -OSRACDBA -ORACLE_BASE -clusterNodes '"''' + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + 
self.ocommon.check_os_err(output,error,retcode,None) + else: + self.ocommon.log_error_message("Clusterware is not up on any node : " + existing_crs_nodes + ".Exiting...",self.file_name) + self.prog_exit("127") + + def run_rootsh(self): + """ + This function run the root.sh after DB home install + """ + dbuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + for node in pub_nodes.split(" "): + cmd='''su - {0} -c "ssh {1} sudo {2}/root.sh"'''.format(dbuser,node,dbhome) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + + def add_dbinst(self): + """ + This function add the DB inst + """ + dbuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + existing_crs_nodes=self.ocommon.get_existing_clu_nodes(True) + node="" + nodeflag=False + for cnode in existing_crs_nodes.split(","): + retcode3=self.ocvu.check_clu(cnode,True) + if retcode3 == 0: + node=cnode + nodeflag=True + break + if nodeflag: + dbname,osid,dbuname=self.ocommon.getdbnameinfo() + for new_node in pub_nodes.split(" "): + cmd='''su - {0} -c "ssh {2} '{1}/bin/dbca -addInstance -silent -nodeName {3} -gdbName {4}'"'''.format(dbuser,dbhome,node,new_node,osid) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + else: + self.ocommon.log_error_message("Clusterware is not up on any node : " + existing_crs_nodes + ".Exiting...",self.file_name) + self.prog_exit("127") diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraracdel.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraracdel.py new file mode 100755 index 0000000000..03f73f56de --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraracdel.py @@ -0,0 +1,274 @@ +#!/usr/bin/python + +############################# +# Copyright 2021, Oracle Corporation and/or affiliates. All rights reserved. 
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: sanjay.singh@oracle.com,paramdeep.saini@oracle.com +############################ + +""" + This file contains to the code call different classes objects based on setup type +""" + +import os +import sys +import traceback + +from oralogger import * +from oraenv import * +from oracommon import * +from oramachine import * +from orasetupenv import * +from orasshsetup import * +from oracvu import * +from oragiprov import * +from oraasmca import * + +class OraRacDel: + """ + This class delete the RAC database + """ + def __init__(self,oralogger,orahandler,oraenv,oracommon,oracvu,orasetupssh): + try: + self.ologger = oralogger + self.ohandler = orahandler + self.oenv = oraenv.get_instance() + self.ocommon = oracommon + self.ora_env_dict = oraenv.get_env_vars() + self.file_name = os.path.basename(__file__) + self.osetupssh = orasetupssh + self.ocvu = oracvu + except BaseException as ex: + traceback.print_exc(file = sys.stdout) + + def setup(self): + """ + This function setup the RAC home on this machine + """ + self.ocommon.log_info_message("Start setup()",self.file_name) + ct = datetime.datetime.now() + bts = ct.timestamp() + self.env_param_checks() + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + self.ocommon.populate_existing_cls_nodes() + #self.clu_checks() + hostname=self.ocommon.get_public_hostname() + if self.ocommon.check_key("EXISTING_CLS_NODE",self.ora_env_dict): + if len(self.ora_env_dict["EXISTING_CLS_NODE"].split(",")) == 0: + self.ora_env_dict=self.add_key("LAST_CRS_NODE","true",self.ora_env_dict) + + self.del_dbinst_main(hostname) + self.del_dbhome_main(hostname) + self.del_gihome_main(hostname) + self.del_ginode(hostname) + if self.ocommon.detect_k8s_env(): + if self.ocommon.check_key("EXISTING_CLS_NODE",self.ora_env_dict): + node=self.ora_env_dict["EXISTING_CLS_NODE"].split(",")[0] + self.ocommon.update_scan(giuser,gihome,None,node) + self.ocommon.start_scan(giuser,gihome,node) + self.ocommon.update_scan_lsnr(giuser,gihome,node) + + ct = datetime.datetime.now() + ets = ct.timestamp() + totaltime=ets - bts + self.ocommon.log_info_message("Total time for setup() = [ " + str(round(totaltime,3)) + " ] seconds",self.file_name) + +##### Check env vars ######## + + def env_param_checks(self): + """ + Perform the env setup checks + """ + self.ocommon.check_env_variable("DB_HOME",True) + self.ocommon.check_env_variable("DB_BASE",True) + self.ocommon.check_env_variable("INVENTORY",True) + + def clu_checks(self): + """ + Performing clu checks + """ + self.ocommon.log_info_message("Performing CVU checks before DB home installation to make sure clusterware is up and running",self.file_name) + hostname=self.ocommon.get_public_hostname() + retcode1=self.ocvu.check_ohasd(hostname) + retcode2=self.ocvu.check_asm(hostname) + retcode3=self.ocvu.check_clu(hostname,None) + + if retcode1 == 0: + msg="Cluvfy ohasd check passed!" + self.ocommon.log_info_message(msg,self.file_name) + else: + msg="Cluvfy ohasd check faild. Exiting.." + self.ocommon.log_error_message(msg,self.file_name) + self.ocommon.prog_exit("127") + + if retcode2 == 0: + msg="Cluvfy asm check passed!" + self.ocommon.log_info_message(msg,self.file_name) + else: + msg="Cluvfy asm check faild. Exiting.." + self.ocommon.log_error_message(msg,self.file_name) + self.ocommon.prog_exit("127") + + if retcode3 == 0: + msg="Cluvfy clumgr check passed!" 
+ self.ocommon.log_info_message(msg,self.file_name) + else: + msg="Cluvfy clumgr check faild. Exiting.." + self.ocommon.log_error_message(msg,self.file_name) + self.ocommon.prog_exit("127") + + +######### Deleting DB Instnce ####### + def del_dbinst_main(self,hostname): + """ + This function call the del_dbinst to perform the db instance deletion + """ + if self.ocommon.check_key("LAST_CRS_NODE",self.ora_env_dict): + msg='''This is a last node {0} in the cluster.'''.format(hostname) + self.ocommon.log_info_message(msg,self.file_name) + else: + status,osid,host,mode=self.ocommon.check_dbinst() + msg='''Database instance {0} exist on this machine {1}.'''.format(osid,hostname) + self.ocommon.log_info_message(msg,self.file_name) + self.del_dbinst() + status,osid,host,mode=self.ocommon.check_dbinst() + if status: + msg='''Oracle Database {0} is stil up and running on {1}.'''.format(osid,host) + self.ocommon.log_info_message(self.ocommon.print_banner(msg),self.file_name) + self.ocommon.prog_exit("127") + else: + msg='''Oracle Database {0} is not up and running on {1}.'''.format(osid,host) + self.ocommon.log_info_message(self.ocommon.print_banner(msg),self.file_name) + + def del_dbinst(self): + """ + Perform the db instance deletion + """ + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + dbname,osid,dbuname=self.ocommon.getdbnameinfo() + hostname=self.ocommon.get_public_hostname() + inst_sid=self.ocommon.get_inst_sid(dbuser,dbhome,dbname,hostname) + existing_crs_nodes=self.ocommon.get_existing_clu_nodes(True) + node="" + nodeflag=False + for cnode in existing_crs_nodes.split(","): + retcode3=self.ocvu.check_clu(cnode,True) + if retcode3 == 0: + node=cnode + nodeflag=True + break + + if inst_sid: + if nodeflag: + cmd='''su - {0} -c "ssh {4} '{1}/bin/dbca -silent -ignorePrereqFailure -deleteInstance -gdbName {2} -nodeName {5} -instanceName {3}'"'''.format(dbuser,dbhome,dbname,inst_sid,node,hostname) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + else: + self.ocommon.log_error_message("Clusterware is not up on any node : " + existing_crs_nodes + ".Exiting...",self.file_name) + self.ocommon.prog_exit("127") + else: + self.ocommon.log_info_message("No database instance is up and running on this machine!",self.file_name) + +####### DEL RAC DB HOME ######## + def del_dbhome_main(self,hostname): + """ + This function call the del_dbhome to perform the db home deletion + """ + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + if self.ocommon.check_key("DEL_RACHOME",self.ora_env_dict): + retcode1=self.ocvu.check_home(hostname,dbhome,dbuser) + status=self.ocommon.check_rac_installed(retcode1) + if status: + self.del_dbhome() + else: + self.ocommon.log_info_message("No configured RAC home exist on this machine",self.file_name) + + def del_dbhome(self): + """ + Perform the db home deletion + """ + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + tmpdir=self.ocommon.get_tmpdir() + dbrspdir="/{1}/dbdeinstall_{0}".format(time.strftime("%T"),tmpdir) + self.ocommon.create_dir(dbrspdir,"local",None,"oracle","oinstall") + self.generate_delrspfile(dbrspdir,dbuser,dbhome) + dbrspfile=self.ocommon.latest_file(dbrspdir) + if os.path.isfile(dbrspfile): + cmd='''su - {0} -c "{1}/deinstall/deinstall -silent -local -paramfile {2} "'''.format(dbuser,dbhome,dbrspfile) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,False) + else: + self.ocommon.log_error_message("No 
responsefile exist under " + dbrspdir,self.file_name) + self.ocommon.prog_exit("127") + + def generate_delrspfile(self,rspdir,user,home): + """ + Generate the responsefile to perform home deletion + """ + cmd='''su - {0} -c "{1}/deinstall/deinstall -silent -checkonly -local -o {2}"'''.format(user,home,rspdir) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + +####### DEL GI HOME ######## + def del_gihome_main(self,hostname): + """ + This function call the del_gihome to perform the gi home deletion + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + self.ocommon.log_info_message("gi params " + gihome ,self.file_name) + hostname=self.ocommon.get_public_hostname() + node=hostname + if self.ocommon.check_key("DEL_GIHOME",self.ora_env_dict): + retcode1=self.ocvu.check_home(hostname,gihome,giuser) + status=self.ocommon.check_gi_installed(retcode1,gihome,giuser,node,oinv) + if status: + self.del_gihome() + else: + self.ocommon.log_info_message("No configured GI home exist on this machine",self.file_name) + + def del_gihome(self): + """ + Perform the GI home deletion + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + tmpdir=self.ocommon.get_tmpdir() + girspdir="/{1}/gideinstall_{0}".format(time.strftime("%T"),tmpdir) + self.ocommon.create_dir(girspdir,"local",None,"grid","oinstall") + self.generate_delrspfile(girspdir,giuser,gihome) + girspfile=self.ocommon.latest_file(girspdir) + if os.path.isfile(girspfile): + cmd='''su - {0} -c "export TEMP={3};{1}/deinstall/deinstall -silent -local -paramfile {2} "'''.format(giuser,gihome,girspfile,"/var/tmp") + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + deinstallDir=self.ocommon.latest_dir(tmpdir,'deins*/') + cmd='''{0}/rootdeinstall.sh'''.format(deinstallDir) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,False) + else: + self.ocommon.log_error_message("No responsefile exist under " + girspdir,self.file_name) + self.ocommon.prog_exit("127") + + def del_ginode(self,hostname): + """ + Perform the GI Node deletion + """ + giuser,gihome,gbase,oinv=self.ocommon.get_gi_params() + + existing_crs_nodes=self.ocommon.get_existing_clu_nodes(True) + node="" + nodeflag=False + for cnode in existing_crs_nodes.split(","): + retcode3=self.ocvu.check_clu(cnode,True) + if retcode3 == 0: + node=cnode + nodeflag=True + break + + if nodeflag: + cmd='''su - {0} -c "ssh {2} '/bin/sudo {1}/bin/crsctl delete node -n {3}'"'''.format(giuser,gihome,node,hostname) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + else: + self.ocommon.log_error_message("Clusterware is not up on any node : " + existing_crs_nodes + ".Exiting...",self.file_name) + self.ocommon.prog_exit("127") + diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraracprov.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraracprov.py new file mode 100755 index 0000000000..bacd1321b9 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraracprov.py @@ -0,0 +1,541 @@ +#!/usr/bin/python + +############################# +# Copyright 2021, Oracle Corporation and/or affiliates. All rights reserved. 
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: sanjay.singh@oracle.com,paramdeep.saini@oracle.com +############################ + +""" + This file contains to the code call different classes objects based on setup type +""" + +from distutils.log import debug +import os +import sys +import traceback +import datetime + +from oralogger import * +from oraenv import * +from oracommon import * +from oramachine import * +from orasetupenv import * +from orasshsetup import * +from oracvu import * +from oragiprov import * +from oraasmca import * + +dgname="" +dbfiledest="" +dbrdest="" + +class OraRacProv: + """ + This class provision the RAC database + """ + def __init__(self,oralogger,orahandler,oraenv,oracommon,oracvu,orasetupssh): + try: + self.ologger = oralogger + self.ohandler = orahandler + self.oenv = oraenv.get_instance() + self.ocommon = oracommon + self.ora_env_dict = oraenv.get_env_vars() + self.file_name = os.path.basename(__file__) + self.osetupssh = orasetupssh + self.ocvu = oracvu + self.mythread = {} + self.ogiprov = OraGIProv(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.oasmca = OraAsmca(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + except BaseException as ex: + traceback.print_exc(file = sys.stdout) + + def setup(self): + """ + This function setup the RAC home on this machine + """ + self.ocommon.log_info_message("Start setup()",self.file_name) + ct = datetime.datetime.now() + bts = ct.timestamp() + sshFlag=False + self.ogiprov.setup() + self.env_param_checks() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + crs_nodes=pub_nodes.replace(" ",",") + if not self.ocommon.check_key("CLUSTER_SETUP_FLAG",self.ora_env_dict): + for node in crs_nodes.split(","): + self.clu_checks(node) + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + retcode1=self.ocvu.check_home(None,dbhome,dbuser) + status=self.ocommon.check_rac_installed(retcode1) + if not status: + self.ocommon.log_info_message("Start perform_ssh_setup()",self.file_name) + self.perform_ssh_setup() + self.ocommon.log_info_message("End perform_ssh_setup()",self.file_name) + sshFlag=True + status=self.ocommon.check_home_inv(None,dbhome,dbuser) + if not status: + self.ocommon.log_info_message("Start db_sw_install()",self.file_name) + self.db_sw_install() + self.ocommon.log_info_message("End db_sw_install()",self.file_name) + self.ocommon.log_info_message("Start run_rootsh()",self.file_name) + self.run_rootsh() + self.ocommon.log_info_message("End run_rootsh()",self.file_name) + if not self.ocommon.check_key("SKIP_DBCA",self.ora_env_dict): + self.create_asmdg() + status,osid,host,mode=self.ocommon.check_dbinst() + hostname=self.ocommon.get_public_hostname() + if status: + msg='''Database instance {0} already exist on this machine {1}.'''.format(osid,hostname) + self.ocommon.update_statefile("completed") + self.ocommon.log_info_message(self.ocommon.print_banner(msg),self.file_name) + + elif self.ocommon.check_key("CLONE_DB",self.ora_env_dict): + self.ocommon.log_info_message("Start clone_db()",self.file_name) + self.clone_db(crs_nodes) + else: + if not sshFlag: + self.perform_ssh_setup() + self.ocommon.log_info_message("Start create_db()",self.file_name) + self.create_db() + self.ocommon.log_info_message("Setting db listener",self.file_name) + self.ocommon.setup_db_lsnr() + self.ocommon.log_info_message("Setting local listener",self.file_name) + 
self.ocommon.set_local_listener() + self.ocommon.setup_db_service("create") + sname,osid,opdb,sparams=self.ocommon.get_service_name() + if sname is not None: + self.ocommon.start_db_service(sname,osid) + self.ocommon.check_db_service_status(sname,osid) + self.ocommon.log_info_message("End create_db()",self.file_name) + self.ocommon.perform_db_check("INSTALL") + self.ocommon.update_statefile("completed") + ct = datetime.datetime.now() + ets = ct.timestamp() + totaltime=ets - bts + self.ocommon.log_info_message("Total time for setup() = [ " + str(round(totaltime,3)) + " ] seconds",self.file_name) + + def env_param_checks(self): + """ + Perform the env setup checks + """ + self.ocommon.check_env_variable("DB_HOME",True) + self.ocommon.check_env_variable("DB_BASE",True) + self.ocommon.check_env_variable("INVENTORY",True) + + def clu_checks(self,hostname): + """ + Performing clu checks + """ + self.ocommon.log_info_message("Performing CVU checks before DB home installation to make sure clusterware is up and running on " + hostname,self.file_name) + # hostname=self.ocommon.get_public_hostname() + retcode1=self.ocvu.check_ohasd(hostname) + retcode2=self.ocvu.check_asm(hostname) + retcode3=self.ocvu.check_clu(hostname,None) + + if retcode1 == 0: + msg="Cluvfy ohasd check passed!" + self.ocommon.log_info_message(msg,self.file_name) + else: + msg="Cluvfy ohasd check faild. Exiting.." + self.ocommon.log_error_message(msg,self.file_name) + self.ocommon.prog_exit("127") + + if retcode2 == 0: + msg="Cluvfy asm check passed!" + self.ocommon.log_info_message(msg,self.file_name) + else: + msg="Cluvfy asm check faild. Exiting.." + self.ocommon.log_error_message(msg,self.file_name) + #self.ocommon.prog_exit("127") + + if retcode3 == 0: + msg="Cluvfy clumgr check passed!" + self.ocommon.log_info_message(msg,self.file_name) + else: + msg="Cluvfy clumgr check faild. Exiting.." + self.ocommon.log_error_message(msg,self.file_name) + self.ocommon.prog_exit("127") + + def perform_ssh_setup(self): + """ + Perform ssh setup + """ + #if not self.ocommon.detect_k8s_env(): + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + crs_nodes=pub_nodes.replace(" ",",") + crs_nodes_list=crs_nodes.split(",") + if len(crs_nodes_list) == 1: + self.ocommon.log_info_message("Cluster size=1. 
Node=" + crs_nodes_list[0],self.file_name) + user=self.ora_env_dict["DB_USER"] + cmd='''su - {0} -c "/bin/rm -rf ~/.ssh ; sleep 1; /bin/ssh-keygen -t rsa -q -N \'\' -f ~/.ssh/id_rsa ; sleep 1; /bin/ssh-keyscan {1} > ~/.ssh/known_hosts 2>/dev/null ; sleep 1; /bin/cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys"'''.format(user,crs_nodes_list[0]) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + else: + if not self.ocommon.check_key("SSH_PRIVATE_KEY",self.ora_env_dict) and not self.ocommon.check_key("SSH_PUBLIC_KEY",self.ora_env_dict): + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + self.osetupssh.setupssh(dbuser,dbhome,"INSTALL") + #if self.ocommon.check_key("VERIFY_SSH",self.ora_env_dict): + #self.osetupssh.verifyssh(dbuser,"INSTALL") + else: + self.ocommon.log_info_message("SSH setup must be already completed during env setup as this this env variables SSH_PRIVATE_KEY and SSH_PUBLIC_KEY are set.",self.file_name) + + def db_sw_install(self): + """ + Perform the db_install + """ + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + crs_nodes=pub_nodes.replace(" ",",") + osdba=self.ora_env_dict["OSDBA_GROUP"] if self.ocommon.check_key("OSDBA",self.ora_env_dict) else "dba" + osbkp=self.ora_env_dict["OSBACKUPDBA_GROUP"] if self.ocommon.check_key("OSBACKUPDBA_GROUP",self.ora_env_dict) else "backupdba" + osoper=self.ora_env_dict["OSPER_GROUP"] if self.ocommon.check_key("OSPER_GROUP",self.ora_env_dict) else "oper" + osdgdba=self.ora_env_dict["OSDGDBA_GROUP"] if self.ocommon.check_key("OSDGDBA_GROUP",self.ora_env_dict) else "dgdba" + oskmdba=self.ora_env_dict["OSKMDBA_GROUP"] if self.ocommon.check_key("OSKMDBA_GROUP",self.ora_env_dict) else "kmdba" + osracdba=self.ora_env_dict["OSRACDBA_GROUP"] if self.ocommon.check_key("OSRACDBA_GROUP",self.ora_env_dict) else "racdba" + osasm=self.ora_env_dict["OSASM_GROUP"] if self.ocommon.check_key("OSASM_GROUP",self.ora_env_dict) else "asmadmin" + unixgrp="oinstall" + hostname=self.ocommon.get_public_hostname() + lang=self.ora_env_dict["LANGUAGE"] if self.ocommon.check_key("LANGUAGE",self.ora_env_dict) else "en" + edition= self.ora_env_dict["DB_EDITION"] if self.ocommon.check_key("DB_EDITION",self.ora_env_dict) else "EE" + ignoreflag= " -ignorePrereq " if self.ocommon.check_key("IGNORE_DB_PREREQS",self.ora_env_dict) else " " + + copyflag=" -noCopy " + if not self.ocommon.check_key("COPY_DB_SOFTWARE",self.ora_env_dict): + copyflag=" -noCopy " + + mythread_list=[] + + oraversion=self.ocommon.get_rsp_version("INSTALL",None) + version=oraversion.split(".",1)[0].strip() + + self.mythread.clear() + mythreads=[] + for node in pub_nodes.split(" "): + self.ocommon.log_info_message("Running DB Sw install on node " + node,self.file_name) + thread=Process(target=self.db_sw_install_on_node,args=(dbuser,hostname,unixgrp,crs_nodes,oinv,lang,dbhome,dbase,edition,osdba,osbkp,osdgdba,oskmdba,osracdba,copyflag,node,ignoreflag)) + #thread.setDaemon(True) + mythreads.append(thread) + thread.start() + +# for thread in mythreads: +# self.ocommon.log_info_message("Starting Thread",self.file_name) +# thread.start() + + for thread in mythreads: # iterates over the threads + thread.join() # waits until the thread has finished wor + + #self.manage_thread() + + def db_sw_install_on_node(self,dbuser,hostname,unixgrp,crs_nodes,oinv,lang,dbhome,dbase,edition,osdba,osbkp,osdgdba,oskmdba,osracdba,copyflag,node,ignoreflag): + """ + 
Perform the db_install + """ + runCmd="" + if self.ocommon.check_key("APPLY_RU_LOCATION",self.ora_env_dict): + ruLoc=self.ora_env_dict["APPLY_RU_LOCATION"] + runCmd='''runInstaller -applyRU "{0}"'''.format(self.ora_env_dict["APPLY_RU_LOCATION"]) + else: + runCmd='''runInstaller ''' + + + if self.ocommon.check_key("DEBUG_MODE",self.ora_env_dict): + dbgCmd='''{0} -debug '''.format(runCmd) + runCmd=dbgCmd + + rspdata='''su - {0} -c "ssh {17} {1}/{16} {18} -waitforcompletion {15} -silent + oracle.install.option=INSTALL_DB_SWONLY + ORACLE_HOSTNAME={2} + UNIX_GROUP_NAME={3} + oracle.install.db.CLUSTER_NODES={4} + INVENTORY_LOCATION={5} + SELECTED_LANGUAGES={6} + ORACLE_HOME={7} + ORACLE_BASE={8} + oracle.install.db.InstallEdition={9} + oracle.install.db.OSDBA_GROUP={10} + oracle.install.db.OSBACKUPDBA_GROUP={11} + oracle.install.db.OSDGDBA_GROUP={12} + oracle.install.db.OSKMDBA_GROUP={13} + oracle.install.db.OSRACDBA_GROUP={14} + SECURITY_UPDATES_VIA_MYORACLESUPPORT=false + DECLINE_SECURITY_UPDATES=true"'''.format(dbuser,dbhome,hostname,unixgrp,crs_nodes,oinv,lang,dbhome,dbase,edition,osdba,osbkp,osdgdba,oskmdba,osracdba,copyflag,runCmd,node,ignoreflag) + cmd=rspdata.replace('\n'," ") + #dbswrsp="/tmp/dbswrsp.rsp" + #self.ocommon.write_file(dbswrsp,rspdata) + #if os.path.isfile(dbswrsp): + #cmd='''su - {0} -c "{1}/runInstaller -ignorePrereq -waitforcompletion -silent -responseFile {2}"'''.format(dbuser,dbhome,dbswrsp) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + #else: + # self.ocommon.log_error_message("DB response file does not exist at its location: " + dbswrsp + ".Exiting..",self.file_name) + # self.ocommon.prog_exit("127") + if len(self.mythread) > 0: + if node in self.mythread.keys(): + swthread_list=self.mythread[node] + value=swthread_list[0] + new_list=[value,'FALSE'] + new_val={node,tuple(new_list)} + self.mythread.update(new_val) + + def run_rootsh(self): + """ + This function run the root.sh after DB home install + """ + dbuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + for node in pub_nodes.split(" "): + cmd='''su - {0} -c "ssh {1} sudo {2}/root.sh"'''.format(dbuser,node,dbhome) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + + def create_asmdg(self): + """ + Perform the asm disk group creation + """ + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + if (self.ocommon.check_key("REDO_ASM_DEVICE_LIST",self.ora_env_dict)) and (self.ocommon.check_key("LOG_FILE_DEST",self.ora_env_dict)): + lgdest=self.ocommon.rmdgprefix(self.ora_env_dict["LOG_FILE_DEST"]) + device_prop=self.ora_env_dict["REDO_ASMDG_PROPERTIES"] if self.ocommon.check_key("REDO_ASMDG_PROPERTIES",self.ora_env_dict) else None + self.ocommon.log_info_message("dg validation for :" + lgdest + " is in progress", self.file_name) + status=self.oasmca.validate_dg(self.ora_env_dict["REDO_ASM_DEVICE_LIST"],device_prop,lgdest) + if not status: + self.oasmca.create_dg(self.ora_env_dict["REDO_ASM_DEVICE_LIST"],device_prop,lgdest) + else: + self.ocommon.log_info_message("ASM diskgroup exist!",self.file_name) + + if (self.ocommon.check_key("RECO_ASM_DEVICE_LIST",self.ora_env_dict)) and (self.ocommon.check_key("DB_RECOVERY_FILE_DEST",self.ora_env_dict)): + dbrdest=self.ocommon.rmdgprefix(self.ora_env_dict["DB_RECOVERY_FILE_DEST"]) + device_prop=self.ora_env_dict["RECO_ASMDG_PROPERTIES"] if 
self.ocommon.check_key("RECO_ASMDG_PROPERTIES",self.ora_env_dict) else None + self.ocommon.log_info_message("dg validation for :" + dbrdest + " is in progress", self.file_name) + status=self.oasmca.validate_dg(self.ora_env_dict["RECO_ASM_DEVICE_LIST"],device_prop,dbrdest) + if not status: + self.oasmca.create_dg(self.ora_env_dict["RECO_ASM_DEVICE_LIST"],device_prop,dbrdest) + else: + self.ocommon.log_info_message("ASM diskgroup exist!",self.file_name) + + if (self.ocommon.check_key("DB_ASM_DEVICE_LIST",self.ora_env_dict)) and (self.ocommon.check_key("DB_DATA_FILE_DEST",self.ora_env_dict)): + dbfiledest=self.ocommon.rmdgprefix(self.ora_env_dict["DB_DATA_FILE_DEST"]) + device_prop=self.ora_env_dict["DB_ASMDG_PROPERTIES"] if self.ocommon.check_key("DB_ASMDG_PROPERTIES",self.ora_env_dict) else None + self.ocommon.log_info_message("dg validation for :" + dbfiledest + " is in progress", self.file_name) + status=self.oasmca.validate_dg(self.ora_env_dict["DB_ASM_DEVICE_LIST"],device_prop,dbfiledest) + if not status: + self.oasmca.create_dg(self.ora_env_dict["DB_ASM_DEVICE_LIST"],device_prop,dbfiledest) + else: + self.ocommon.log_info_message("ASM diskgroup exist!",self.file_name) + + def set_clonedb_params(self): + """ + Set clone database parameters + """ + osuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + dgname=self.ocommon.setdgprefix(self.ocommon.getcrsdgname()) + dbfiledest=self.ocommon.setdgprefix(self.ocommon.getdbdestdgname(dgname)) + dbrdest=self.ocommon.setdgprefix(self.ocommon.getdbrdestdgname(dbfiledest)) + osid=self.ora_env_dict["GOLD_SID_NAME"] + connect_str=self.ocommon.get_sqlplus_str(dbhome,osid,osuser,"sys",None,None,None,osid,None,None,None) + sqlcmd=''' + alter system set control_files='{1}' scope=spfile; + ALTER SYSTEM SET DB_CREATE_FILE_DEST='{0}' scope=spfile sid='*'; + ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='{1}' scope=spfile sid='*'; + '''.format(dbfiledest,dbrdest) + output=self.ocommon.run_sql_cmd(sqlcmd,connect_str) + + def clone_db(self,crs_nodes): + """ + This function clone the DB + """ + if self.ocommon.check_key("GOLD_DB_BACKUP_LOC",self.ora_env_dict) and self.ocommon.check_key("GOLD_DB_NAME",self.ora_env_dict) and self.ocommon.check_key("DB_NAME",self.ora_env_dict) and self.ocommon.check_key("GOLD_SID_NAME",self.ora_env_dict) and self.ocommon.check_key("GOLD_PDB_NAME",self.ora_env_dict): + self.ocommon.log_info_message("GOLD_DB_BACKUP_LOC set to " + self.ora_env_dict["GOLD_DB_BACKUP_LOC"] ,self.file_name) + self.ocommon.log_info_message("GOLD_DB_NAME set to " + self.ora_env_dict["GOLD_DB_NAME"] ,self.file_name) + self.ocommon.log_info_message("DB_NAME set to " + self.ora_env_dict["DB_NAME"] ,self.file_name) + pfile='''/tmp/pfile_{0}'''.format( datetime.datetime.now().strftime('%d%m%Y%H%M')) + self.ocommon.create_file(pfile,"local",None,None) + fdata='''db_name={0}'''.format(self.ora_env_dict["GOLD_DB_NAME"]) + self.ocommon.append_file(pfile,fdata) + self.ocommon.start_db(self.ora_env_dict["GOLD_SID_NAME"],"nomount",pfile) + ## VV self.ocommon.catalog_bkp() + self.ocommon.restore_spfile() + cmd='''rm -f {0}'''.format(pfile) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,False) + self.ocommon.shutdown_db(self.ora_env_dict["GOLD_SID_NAME"]) + self.ocommon.start_db(self.ora_env_dict["GOLD_SID_NAME"],"nomount") + self.set_clonedb_params() + self.ocommon.shutdown_db(self.ora_env_dict["GOLD_SID_NAME"]) + self.ocommon.start_db(self.ora_env_dict["GOLD_SID_NAME"],"nomount") + 
self.ocommon.restore_bkp(self.ocommon.setdgprefix(self.ocommon.getcrsdgname())) + + osuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + osid=self.ora_env_dict["GOLD_SID_NAME"] + pfile=dbhome + "/dbs/init" + osid + ".ora" + spfile=dbhome + "/dbs/spfile" + osid + ".ora" + + self.ocommon.create_pfile(pfile,spfile) + self.ocommon.shutdown_db(self.ora_env_dict["GOLD_SID_NAME"]) + self.ocommon.set_cluster_mode(pfile,False) + self.ocommon.start_db(self.ora_env_dict["GOLD_SID_NAME"],"mount",pfile) + self.ocommon.change_dbname(pfile,self.ora_env_dict["DB_NAME"]) + + self.ocommon.start_db(self.ora_env_dict["DB_NAME"] + "1","mount",pfile) + spfile=self.ocommon.getdbdestdgname("+DATA") + "/" + self.ora_env_dict["DB_NAME"] + "/PARAMETERFILE/spfile" + self.ora_env_dict["DB_NAME"] + ".ora" + self.ocommon.create_spfile(spfile,pfile) + self.ocommon.resetlogs(self.ora_env_dict["DB_NAME"] + "1") + self.ocommon.shutdown_db(self.ora_env_dict["DB_NAME"] + "1") + self.ocommon.add_rac_db(osuser,dbhome,self.ora_env_dict["DB_NAME"],spfile) + instance_number=1 + for node in crs_nodes.split(","): + self.ocommon.add_rac_instance(osuser,dbhome,self.ora_env_dict["DB_NAME"],str(instance_number),node) + instance_number +=1 + + self.ocommon.start_rac_db(osuser,dbhome,self.ora_env_dict["DB_NAME"]) + self.ocommon.get_db_status(osuser,dbhome,self.ora_env_dict["DB_NAME"]) + self.ocommon.get_db_config(osuser,dbhome,self.ora_env_dict["DB_NAME"]) + self.ocommon.log_info_message("End clone_db()",self.file_name) + + def check_responsefile(self): + """ + This function returns the valid response file + """ + dbrsp=None + if self.ocommon.check_key("DBCA_RESPONSE_FILE",self.ora_env_dict): + dbrsp=self.ora_env_dict["DBCA_RESPONSE_FILE"] + self.ocommon.log_info_message("DBCA_RESPONSE_FILE parameter is set and file location is:" + dbrsp ,self.file_name) + else: + self.ocommon.log_error_message("DBCA response file does not exist at its location: " + dbrsp + ".Exiting..",self.file_name) + self.ocommon.prog_exit("127") + + if os.path.isfile(dbrsp): + return dbrsp + + def create_db(self): + """ + Perform the DB Creation + """ + cmd="" + prereq=" " + if self.ocommon.check_key("IGNORE_DB_PREREQS",self.ora_env_dict): + prereq=" -ignorePreReqs " + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + if self.ocommon.check_key("DBCA_RESPONSE_FILE",self.ora_env_dict): + dbrsp=self.check_responsefile() + cmd='''su - {0} -c "{1}/bin/dbca -silent {3} -createDatabase -responseFile {2}"'''.format(dbuser,dbhome,dbrsp,prereq) + else: + cmd=self.prepare_db_cmd() + + dbpasswd=self.ocommon.get_db_passwd() + tdepasswd=self.ocommon.get_tde_passwd() + self.ocommon.set_mask_str(dbpasswd) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + ### Unsetting the encrypt value to None + self.ocommon.unset_mask_str() + if self.ocommon.check_key("DBCA_RESPONSE_FILE",self.ora_env_dict): + self.ocommon.reset_dbuser_passwd("sys",None,"all") + + def prepare_db_cmd(self): + """ + Perform the asm disk group creation + """ + prereq=" " + if self.ocommon.check_key("IGNORE_DB_PREREQS",self.ora_env_dict): + prereq=" -ignorePreReqs " + + tdewallet="" + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + crs_nodes=pub_nodes.replace(" ",",") + dbname,osid,dbuname=self.ocommon.getdbnameinfo() + dgname=self.ocommon.setdgprefix(self.ocommon.getcrsdgname()) + dbfiledest=self.ocommon.setdgprefix(self.ocommon.getdbdestdgname(dgname)) 
+ cdbflag=self.ora_env_dict["CONTAINERDB_FLAG"] if self.ocommon.check_key("CONTAINERDB_FLAG",self.ora_env_dict) else "true" + stype=self.ora_env_dict["DB_STORAGE_TYPE"] if self.ocommon.check_key("DB_STORAGE_TYPE",self.ora_env_dict) else "ASM" + charset=self.ora_env_dict["DB_CHARACTERSET"] if self.ocommon.check_key("DB_CHARACTERSET",self.ora_env_dict) else "AL32UTF8" + redosize=self.ora_env_dict["DB_REDOFILE_SIZE"] if self.ocommon.check_key("DB_REDOFILE_SIZE",self.ora_env_dict) else "1024" + dbtype=self.ora_env_dict["DB_TYPE"] if self.ocommon.check_key("DB_TYPE",self.ora_env_dict) else "OLTP" + dbctype=self.ora_env_dict["DB_CONFIG_TYPE"] if self.ocommon.check_key("DB_CONFIG_TYPE",self.ora_env_dict) else "RAC" + arcmode=self.ora_env_dict["ENABLE_ARCHIVELOG"] if self.ocommon.check_key("ENABLE_ARCHIVELOG",self.ora_env_dict) else "true" + pdbsettings=self.get_pdb_params() + initparams=self.get_init_params() + if self.ocommon.check_key("SETUP_TDE_WALLET",self.ora_env_dict): + tdewallet='''-configureTDE true -tdeWalletPassword HIDDEN_STRING -tdeWalletRoot {0} -tdeWalletLoginType AUTO_LOGIN -encryptTablespaces all'''.format(dbfiledest) + #memorypct=self.get_memorypct() + + rspdata='''su - {0} -c "{1}/bin/dbca -silent {15} -createDatabase \ + -templateName General_Purpose.dbc \ + -gdbname {2} \ + -createAsContainerDatabase {3} \ + -sysPassword HIDDEN_STRING \ + -systemPassword HIDDEN_STRING \ + -datafileDestination {4} \ + -storageType {5} \ + -characterSet {6} \ + -redoLogFileSize {7} \ + -databaseType {8} \ + -databaseConfigType {9} \ + -nodelist {10} \ + -useOMF true \ + {12} \ + {13} \ + {16} \ + -enableArchive {14}"'''.format(dbuser,dbhome,dbname,cdbflag,dbfiledest,stype,charset,redosize,dbtype,dbctype,crs_nodes,dbname,pdbsettings,initparams,arcmode,prereq,tdewallet) + cmd='\n'.join(line.lstrip() for line in rspdata.splitlines()) + + return cmd + + def get_pdb_params(self): + """ + Perform the asm disk group creation + """ + pdbnum=self.ora_env_dict["PDB_COUNT"] if self.ocommon.check_key("PDB_COUNT",self.ora_env_dict) else "1" + pdbname=self.ora_env_dict["ORACLE_PDB_NAME"] if self.ocommon.check_key("ORACLE_PDB_NAME",self.ora_env_dict) else "ORCLPDB" + rspdata='''-numberOfPDBs {0} \ + -pdbAdminPassword HIDDEN_STRING \ + -pdbName {1}'''.format(pdbnum,pdbname) + cmd='\n'.join(line.lstrip() for line in rspdata.splitlines()) + return cmd + + def get_init_params(self): + """ + Perform the asm disk group creation + """ + sgasize=self.ora_env_dict["INIT_SGA_SIZE"] if self.ocommon.check_key("INIT_SGA_SIZE",self.ora_env_dict) else None + pgasize=self.ora_env_dict["INIT_PGA_SIZE"] if self.ocommon.check_key("INIT_PGA_SIZE",self.ora_env_dict) else None + processes=self.ora_env_dict["INIT_PROCESSES"] if self.ocommon.check_key("INIT_PROCESSES",self.ora_env_dict) else None + dbname,osid,dbuname=self.ocommon.getdbnameinfo() + dgname=self.ocommon.setdgprefix(self.ocommon.getcrsdgname()) + dbdest=self.ocommon.setdgprefix(self.ocommon.getdbdestdgname(dgname)) + dbrdest=self.ocommon.setdgprefix(self.ocommon.getdbrdestdgname(dbdest)) + dbrdestsize=self.ora_env_dict["DB_RECOVERY_FILE_DEST_SIZE"] if self.ocommon.check_key("DB_RECOVERY_FILE_DEST_SIZE",self.ora_env_dict) else None + cpucount=self.ora_env_dict["CPU_COUNT"] if self.ocommon.check_key("CPU_COUNT",self.ora_env_dict) else None + dbfiles=self.ora_env_dict["DB_FILES"] if self.ocommon.check_key("DB_FILES",self.ora_env_dict) else "1024" + lgbuffer=self.ora_env_dict["LOG_BUFFER"] if self.ocommon.check_key("LOG_BUFFER",self.ora_env_dict) else "256M" + 
dbrettime=self.ora_env_dict["DB_FLASHBACK_RETENTION_TARGET"] if self.ocommon.check_key("DB_FLASHBACK_RETENTION_TARGET",self.ora_env_dict) else "120" + dbblkck=self.ora_env_dict["DB_BLOCK_CHECKSUM"] if self.ocommon.check_key("DB_BLOCK_CHECKSUM",self.ora_env_dict) else "TYPICAL" + dblwp=self.ora_env_dict["DB_LOST_WRITE_PROTECT"] if self.ocommon.check_key("DB_LOST_WRITE_PROTECT",self.ora_env_dict) else "TYPICAL" + ptpc=self.ora_env_dict["PARALLEL_THREADS_PER_CPU"] if self.ocommon.check_key("PARALLEL_THREADS_PER_CPU",self.ora_env_dict) else "1" + dgbr1=self.ora_env_dict["DG_BROKER_CONFIG_FILE1"] if self.ocommon.check_key("DG_BROKER_CONFIG_FILE1",self.ora_env_dict) else dbdest + dgbr2=self.ora_env_dict["DG_BROKER_CONFIG_FILE2"] if self.ocommon.check_key("DG_BROKER_CONFIG_FILE2",self.ora_env_dict) else dbrdest + remotepasswdfile="REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE" + lgformat="LOG_ARCHIVE_FORMAT=%t_%s_%r.arc" + + initprm='''db_recovery_file_dest={0},db_create_file_dest={2},{3},{4},db_unique_name={5},db_files={6},LOG_BUFFER={7},DB_FLASHBACK_RETENTION_TARGET={8},DB_BLOCK_CHECKSUM={9},DB_LOST_WRITE_PROTECT={10},PARALLEL_THREADS_PER_CPU={11},DG_BROKER_CONFIG_FILE1={12},DG_BROKER_CONFIG_FILE2={13}'''.format(dbrdest,dbrdest,dbdest,remotepasswdfile,lgformat,dbuname,dbfiles,lgbuffer,dbrettime,dbblkck,dblwp,ptpc,dgbr1,dgbr2) + + if sgasize: + initprm= initprm + ''',sga_target={0},sga_max_size={0}'''.format(sgasize) + + if pgasize: + initprm= initprm + ''',pga_aggregate_size={0}'''.format(pgasize) + + if processes: + initprm= initprm + ''',processes={0}'''.format(processes) + + if cpucount: + initprm= initprm + ''',cpu_count={0}'''.format(cpucount) + + if dbrdestsize: + initprm = initprm + ''',db_recovery_file_dest_size={0}'''.format(dbrdestsize) + + initparams=""" -initparams '{0}'""".format(initprm) + + return initparams diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraracstdby.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraracstdby.py new file mode 100755 index 0000000000..f97238e3d8 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/oraracstdby.py @@ -0,0 +1,643 @@ +#!/usr/bin/python + +############################# +# Copyright 2021, Oracle Corporation and/or affiliates. All rights reserved. 
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl +# Author: paramdeep.saini@oracle.com +############################ + +""" + This file contains to the code call different classes objects based on setup type +""" + +from distutils.log import debug +import os +import sys +import traceback + +from oralogger import * +from oraenv import * +from oracommon import * +from oramachine import * +from orasetupenv import * +from orasshsetup import * +from oracvu import * +from oragiprov import * +from oraasmca import * +from oraracprov import * + +class OraRacStdby: + """ + This class Add the RAC standby + """ + def __init__(self,oralogger,orahandler,oraenv,oracommon,oracvu,orasetupssh): + try: + self.ologger = oralogger + self.ohandler = orahandler + self.oenv = oraenv.get_instance() + self.ocommon = oracommon + self.ora_env_dict = oraenv.get_env_vars() + self.file_name = os.path.basename(__file__) + self.osetupssh = orasetupssh + self.ocvu = oracvu + self.ogiprov = OraGIProv(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.oasmca = OraAsmca(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + self.oraracprov = OraRacProv(self.ologger,self.ohandler,self.oenv,self.ocommon,self.ocvu,self.osetupssh) + except BaseException as ex: + ex_type, ex_value, ex_traceback = sys.exc_info() + trace_back = traceback.extract_tb(ex_traceback) + stack_trace = list() + for trace in trace_back: + stack_trace.append("File : %s , Line : %d, Func.Name : %s, Message : %s" % (trace[0], trace[1], trace[2], trace[3])) + self.ocommon.log_info_message(ex_type.__name__,self.file_name) + self.ocommon.log_info_message(ex_value,self.file_name) + self.ocommon.log_info_message(stack_trace,self.file_name) + + def setup(self): + """ + This function setup the RAC stndby on this machine + """ + self.ocommon.log_info_message("Start setup()",self.file_name) + ct = datetime.datetime.now() + bts = ct.timestamp() + sshFlag=False + self.ogiprov.setup() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + crs_nodes=pub_nodes.replace(" ",",") + for node in crs_nodes.split(","): + self.oraracprov.clu_checks(node) + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + retcode1=self.ocvu.check_home(None,dbhome,dbuser) + status=self.ocommon.check_rac_installed(retcode1) + if not status: + self.oraracprov.perform_ssh_setup() + sshFlag=True + status=self.ocommon.check_home_inv(None,dbhome,dbuser) + if not status: + self.ocommon.log_info_message("Start oraracprov.db_sw_install()",self.file_name) + self.oraracprov.db_sw_install() + self.ocommon.log_info_message("End oraracprov.db_sw_install()",self.file_name) + self.ocommon.log_info_message("Start oraracprov.run_rootsh()",self.file_name) + self.oraracprov.run_rootsh() + self.ocommon.log_info_message("End oraracprov.run_rootsh()",self.file_name) + if not self.ocommon.check_key("SKIP_DBCA",self.ora_env_dict): + self.oraracprov.create_asmdg() + status,osid,host,mode=self.ocommon.check_dbinst() + hostname=self.ocommon.get_public_hostname() + if status: + msg='''Database instance {0} already exist on this machine {1}.'''.format(osid,hostname) + self.ocommon.log_info_message(self.ocommon.print_banner(msg),self.file_name) + else: + if not sshFlag: + self.oraracprov.perform_ssh_setup() + self.check_primary_db() + self.ocommon.log_info_message("Start configure_primary_db()",self.file_name) + self.configure_primary_db() + self.ocommon.log_info_message("End 
configure_primary_db()",self.file_name) + self.ocommon.log_info_message("Start create_standbylogs()",self.file_name) + self.create_standbylogs() + self.ocommon.log_info_message("End create_standbylogs()",self.file_name) + #self.populate_tnsfile() + #self.copy_tnsfile(dbhome,dbuser) + self.ocommon.log_info_message("Start create_db()",self.file_name) + self.create_db() + self.ocommon.log_info_message("End create_db()",self.file_name) + self.ocommon.log_info_message("Start configure_standby_db()",self.file_name) + self.configure_standby_db() + self.ocommon.log_info_message("End configure_standby_db()",self.file_name) + ### Calling populate TNS again as create_db reset the oldtnames.ora + #self.populate_tnsfile() + #self.copy_tnsfile(dbhome,dbuser) + self.configure_dgsetup() + self.restart_db() + + ct = datetime.datetime.now() + ets = ct.timestamp() + totaltime=ets - bts + self.ocommon.log_info_message("Total time for setup() = [ " + str(round(totaltime,3)) + " ] seconds",self.file_name) + + def get_stdby_variables(self): + """ + Getting stdby variables + """ + stdbydbuname =self.ora_env_dict["DB_UNIQUE_NAME"] if self.ocommon.check_key("DB_UNIQUE_NAME",self.ora_env_dict) else "SORCLCDB" + prmydbuname =self.ora_env_dict["PRIMARY_DB_UNIQUE_NAME"] if self.ocommon.check_key("PRIMARY_DB_UNIQUE_NAME",self.ora_env_dict) else None + prmydbport =self.ora_env_dict["PRIMARY_DB_SCAN_PORT"] if self.ocommon.check_key("PRIMARY_DB_SCAN_PORT",self.ora_env_dict) else 1521 + prmydbname =self.ora_env_dict["PRIMARY_DB_NAME"] if self.ocommon.check_key("PRIMARY_DB_NAME",self.ora_env_dict) else None + prmyscanname =self.ora_env_dict["PRIMARY_DB_SCAN_NAME"] if self.ocommon.check_key("PRIMARY_DB_SCAN_NAME",self.ora_env_dict) else None + + return stdbydbuname,prmydbuname,prmydbport,prmydbname,prmyscanname + + def get_primary_connect_str(self): + ''' + return primary connect str + ''' + stdbydbuname,prmydbuname,prmydbport,prmydbname,prmyscanname=self.get_stdby_variables() + osuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + osid=self.ora_env_dict["PRIMARY_DB_UNIQUE_NAME"] if self.ocommon.check_key("PRIMARY_DB_UNIQUE_NAME",self.ora_env_dict) else None + connect_str=self.ocommon.get_sqlplus_str(dbhome,osid,osuser,"sys",'HIDDEN_STRING',prmyscanname,prmydbport,osid,None,None,None) + + return connect_str,osuser,dbhome,dbbase,oinv,osid + + def get_standby_connect_str(self): + ''' + return standby connect str + ''' + stdbydbuname,prmydbuname,prmydbport,prmydbname,prmyscanname=self.get_stdby_variables() + osuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + stdbyscanname=self.ora_env_dict["SCAN_NAME"] if self.ocommon.check_key("SCAN_NAME",self.ora_env_dict) else self.prog_exit("127") + stdbyscanport=self.ora_env_dict["SCAN_PORT"] if self.ocommon.check_key("SCAN_PORT",self.ora_env_dict) else "1521" + connect_str=self.ocommon.get_sqlplus_str(dbhome,stdbydbuname,osuser,"sys",'HIDDEN_STRING',stdbyscanname,stdbyscanport,stdbydbuname,None,None,None +) + + return connect_str,osuser,dbhome,dbbase,oinv,stdbydbuname + + def get_stdby_dg_name(self): + ''' + return DG name + ''' + dgname=self.ora_env_dict["CRS_ASM_DISKGROUP"] if self.ocommon.check_key("CRS_ASM_DISKGROUP",self.ora_env_dict) else "+DATA" + dbrdest=self.ora_env_dict["DB_RECOVERY_FILE_DEST"] if self.ocommon.check_key("DB_RECOVERY_FILE_DEST",self.ora_env_dict) else dgname + dbrdestsize=self.ora_env_dict["DB_RECOVERY_FILE_DEST_SIZE"] if self.ocommon.check_key("DB_RECOVERY_FILE_DEST_SIZE",self.ora_env_dict) else "50G" + dbdest=self.ora_env_dict["DB_CREATE_FILE_DEST"] if 
self.ocommon.check_key("DB_CREATE_FILE_DEST",self.ora_env_dict) else dbrdest + + return self.ocommon.setdgprefix(dbrdest),dbrdestsize,self.ocommon.setdgprefix(dbdest),self.ocommon.setdgprefix(dgname) + + def check_primary_db(self): + """ + Checking primary DB before proceeding to STDBY Setup + """ + stdbydbuname,prmydbuname,prmydbport,prmydbname,prmyscanname=self.get_stdby_variables() + self.ocommon.log_info_message("Checking primary DB",self.file_name) + status=None + counter=1 + end_counter=45 + + connect_str,osuser,dbhome,dbbase,oinv,osid=self.get_primary_connect_str() + + while counter < end_counter: + status=self.ocommon.check_setup_status(osuser,dbhome,osid,connect_str) + if status == 'completed': + break + else: + msg='''Primary DB {0} setup is still not completed as primary check did not return "completed". Sleeping for 60 seconds and sleeping count is {0}'''.format(counter) + self.ocommon.log_info_message(msg,self.file_name) + time.sleep(60) + counter=counter+1 + + if status == 'completed': + msg='''Primary Database {0} is open!'''.format(prmydbuname) + self.ocommon.log_info_message(msg,self.file_name) + else: + msg='''Primary DB {0} is not in open state.Primary DB setup did not complete or failed. Exiting...''' + self.ocommon.log_error_message(msg,self.file_name) + self.ocommon.prog_exit("127") + + + def configure_primary_db(self): + """ + Setup Primary for standby + """ + stdbydbuname,prmydbuname,prmydbport,prmydbname,prmyscanname=self.get_stdby_variables() + connect_str,osuser,dbhome,dbbase,oinv,osid=self.get_primary_connect_str() + stdbyscanname=self.ora_env_dict["SCAN_NAME"] if self.ocommon.check_key("SCAN_NAME",self.ora_env_dict) else self.prog_exit("127") + stdbyscanport=self.ora_env_dict["SCAN_PORT"] if self.ocommon.check_key("SCAN_PORT",self.ora_env_dict) else "1521" + prmytnssvc=self.ocommon.get_tnssvc_str(prmydbuname,prmydbport,prmyscanname) + stdbytnssvc=self.ocommon.get_tnssvc_str(stdbydbuname,stdbyscanport,stdbyscanname) + msg='''Setting up Primary DB for standby''' + self.ocommon.log_info_message(msg,self.file_name) + stdbylgdg,dbrdestsize,stdbydbdg,dgname=self.get_stdby_dg_name() + lgdest1="""LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME={0}""".format(prmydbuname) + lgdest2='''SERVICE="{0}" ASYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME={1}'''.format(stdbytnssvc,stdbydbuname) + dbconfig="""DG_CONFIG=({0},{1})""".format(prmydbuname,stdbydbuname) + prmydbdg=self.ocommon.get_init_params("db_create_file_dest",connect_str) + prmylsdg=self.ocommon.get_init_params("DB_RECOVERY_FILE_DEST",connect_str) + dbconv="""'{0}','{1}'""".format(stdbydbdg,prmydbdg) + lgconv="""'{0}','{1}'""".format(stdbylgdg,prmylsdg) + prmy_dbname=self.ocommon.get_init_params("DB_NAME",connect_str) + dgbroker=prmyscanname=self.ora_env_dict["DG_BROKER_START"] if self.ocommon.check_key("DG_BROKER_START",self.ora_env_dict) else "true" + + + sqlcmd=""" + alter database force logging; + alter database flashback on; + alter system set db_recovery_file_dest_size=30G scope=both sid='*'; + alter system set LOG_ARCHIVE_DEST_1='{0}' scope=both sid='*'; + alter system set LOG_ARCHIVE_DEST_2='{1}' scope=both sid='*'; + alter system set LOG_ARCHIVE_DEST_STATE_1=ENABLE scope=both sid='*'; + alter system set LOG_ARCHIVE_DEST_STATE_2=ENABLE scope=both sid='*'; + alter system set LOG_ARCHIVE_CONFIG='{2}' scope=both sid='*'; + alter system set FAL_SERVER='{9}' scope=both sid='*'; + alter system set STANDBY_FILE_MANAGEMENT=AUTO scope=both sid='*'; + alter system set 
DB_FILE_NAME_CONVERT={4} scope=both sid='*'; + alter system set LOG_FILE_NAME_CONVERT={5} scope=both sid='*'; + alter system set dg_broker_start=true scope=both sid='*'; + alter system set DB_BLOCK_CHECKSUM='TYPICAL' scope=both sid='*'; + alter system set DB_LOST_WRITE_PROTECT='TYPICAL' scope=both sid='*'; + alter system set DB_FLASHBACK_RETENTION_TARGET=120 scope=both sid='*'; + alter system set PARALLEL_THREADS_PER_CPU=1 scope=both sid='*'; + """.format(lgdest1,lgdest2,dbconfig,stdbydbuname,dbconv,lgconv,dgbroker,prmylsdg,prmydbdg,stdbytnssvc) + + output=self.ocommon.run_sql_cmd(sqlcmd,connect_str) + + def get_logfile_info(self,connect_str): + """ + get the primary log info + """ + sqlsetcmd=self.ocommon.get_sqlsetcmd() + sqlcmd1=''' + {0} + select max(thread#) from gv$log; + '''.format(sqlsetcmd) + + sqlcmd2=''' + {0} + select count(*) from gv$log; + '''.format(sqlsetcmd) + + sqlcmd3=''' + {0} + select * from (select count(*) from v$log group by thread#) where rownum < 2; + '''.format(sqlsetcmd) + + sqlcmd4=''' + {0} + select min(group#) from gv$log; + '''.format(sqlsetcmd) + + sqlcmd5=''' + {0} + select max(MEMBERS) from gv$log; + '''.format(sqlsetcmd) + + sqlcmd6=''' + {0} + select count(*) from gv$standby_log; + ''' .format(sqlsetcmd) + + sqlcmd7=''' + {0} + select max(group#) from gv$standby_log; + '''.format(sqlsetcmd) + + sqlcmd8=''' + {0} + select bytes from v$log where rownum < 2; + '''.format(sqlsetcmd) + + sqlcmd9=''' + {0} + select max(group#) from v$log; + '''.format(sqlsetcmd) + + maxthread=self.ocommon.run_sql_cmd(sqlcmd1,connect_str) + maxgrpcount=self.ocommon.run_sql_cmd(sqlcmd2,connect_str) + maxgrpnum=self.ocommon.run_sql_cmd(sqlcmd3,connect_str) + mingrpnum=self.ocommon.run_sql_cmd(sqlcmd4,connect_str) + maxgrpmemnum=self.ocommon.run_sql_cmd(sqlcmd5,connect_str) + maxstdbygrpcount=self.ocommon.run_sql_cmd(sqlcmd6,connect_str) + maxstdbygrpnum=self.ocommon.run_sql_cmd(sqlcmd7,connect_str) + filesize=self.ocommon.run_sql_cmd(sqlcmd8,connect_str) + maxgrp=self.ocommon.run_sql_cmd(sqlcmd9,connect_str) + + return int(maxthread),int(maxgrpcount),int(maxgrpnum),int(mingrpnum),int(maxgrpmemnum),int(maxstdbygrpcount),maxstdbygrpnum,int(filesize),int(maxgrp) + + def create_standbylogs(self): + """ + Setup standby logs on Primary + """ + stdbydbuname,prmydbuname,prmydbport,prmydbname,prmyscanname=self.get_stdby_variables() + connect_str,osuser,dbhome,dbbase,oinv,osid=self.get_primary_connect_str() + maxthread,maxgrpcount,maxgrpnum,mingrpnum,maxgrpmemnum,maxstdbygrpcount,maxstdbygrpnum,filesize,maxgrp=self.get_logfile_info(connect_str) + threadcount=1 + mingrpmemnum=1 + stdbygrp=0 + + msg=''' + Received Values : + Max Thread={0} + Max Log Group Count={1} + Max Log Group Number={2} + Min Log Group Num={3} + Max Group Member = {4} + Max Standby Group Count = {5} + Max Standby Group Number = {6} + File Size = {7} + Max Groups = {8} + '''.format(maxthread,maxgrpcount,maxgrpnum,mingrpnum,maxgrpmemnum,maxstdbygrpcount,maxstdbygrpnum,filesize,maxgrp) + + self.ocommon.log_info_message(msg,self.file_name) + dbrdest=self.ocommon.get_init_params("DB_RECOVERY_FILE_DEST",connect_str) + + if maxstdbygrpcount != 0: + if maxstdbygrpcount == ((maxgrp + 1) * maxthread): + msg1='''The required standby logs already exist. The current number of max primary group is {1} and max threads are {3}. 
The standby logs groups is to "((maxgrp + 1) * maxthread)"= {0} '''.format(((maxgrp + 1) * maxthread),maxgrp,maxthread) + self.ocommon.log_info_message(msg1,self.file_name) + else: + stdbygrp=(maxgrp + 1) * maxthread + msg1='''The current number of max primary log group is {1} and max threads are {2}. The required standby logs groups "((maxgrp + 1) * maxthread)"= {0}'''.format(((maxgrp + 1) * maxthread),maxgrp,maxthread) + self.ocommon.log_info_message(msg1,self.file_name) + + # Setting the standby logs to the value which will start after maxgrpcount + mingrpnum=(maxgrp+1) + newstdbygrp=stdbygrp + threadcount=1 + group_per_thread=((stdbygrp - maxgrp )/maxthread) + group_per_thread_count=1 + + msg='''Logfile thread maxthread={1}, groups per thread={2}'''.format(threadcount,maxthread,group_per_thread) + self.ocommon.log_info_message(msg,self.file_name) + msg='''Standby logfiles minigroup set to={0} and maximum group set to={1}'''.format(mingrpnum,newstdbygrp) + self.ocommon.log_info_message(msg,self.file_name) + msg='''Logfile group loop. mingrpnum={0},maxgrpnum={1}'''.format(mingrpnum,newstdbygrp) + self.ocommon.log_info_message(msg,self.file_name) + + while threadcount <= maxthread: + group_per_thread_count=1 + while group_per_thread_count <= group_per_thread: + mingrpmemnum=1 + while mingrpmemnum <= maxgrpmemnum: + if mingrpmemnum == 1: + self.add_stdby_log_grp(threadcount,mingrpnum,filesize,dbrdest,connect_str,None) + else: + self.add_stdby_log_grp(threadcount,mingrpnum,filesize,dbrdest,connect_str,'member') + mingrpmemnum = mingrpmemnum + 1 + group_per_thread_count=group_per_thread_count + 1 + mingrpnum = mingrpnum + 1 + threadcount = threadcount + 1 + if mingrpnum >= newstdbygrp: + break + + def add_stdby_log_grp(self,threadcount,stdbygrp,filesize,dbrdest,connect_str,type): + """ + This function will add standby log group + """ + sqlcmd1=None + sqlsetcmd=self.ocommon.get_sqlsetcmd() + if type is None: + sqlcmd1=''' + {3} + ALTER DATABASE ADD STANDBY LOGFILE THREAD {0} group {1} size {2}; + '''.format(threadcount,stdbygrp,filesize,sqlsetcmd) + + if type == 'member': + sqlcmd1=''' + {2} + ALTER DATABASE ADD STANDBY LOGFILE member '{0}' to group {1}; + '''.format(dbrdest,stdbygrp,sqlsetcmd) + + output=self.ocommon.run_sql_cmd(sqlcmd1,connect_str) + + + def populate_tnsfile(self): + """ + Populate TNS file" + """ + stdbydbuname,prmydbuname,prmydbport,prmydbname,prmyscanname=self.get_stdby_variables() + connect_str,osuser,dbhome,dbbase,oinv,osid=self.get_primary_connect_str() + prmyscanname=self.ora_env_dict["PRIMARY_DB_SCAN_NAME"] if self.ocommon.check_key("PRIMARY_DB_SCAN_NAME",self.ora_env_dict) else self.prog_exit("127") + prmyscanport=self.ora_env_dict["PRIMARY_DB_SCAN_PORT"] if self.ocommon.check_key("PRIMARY_DB_SCAN_PORT",self.ora_env_dict) else "1521" + stdbyscanname=self.ora_env_dict["SCAN_NAME"] if self.ocommon.check_key("SCAN_NAME",self.ora_env_dict) else self.prog_exit("127") + stdbyscanport=self.ora_env_dict["SCAN_PORT"] if self.ocommon.check_key("SCAN_PORT",self.ora_env_dict) else "1521" + self.create_local_tns_enteries(dbhome,prmydbuname,prmyscanname,prmyscanport,osuser,"oinstall") + self.create_local_tns_enteries(dbhome,stdbydbuname,stdbyscanname,stdbyscanport,osuser,"oinstall") + self.create_remote_tns_enteries(dbhome,stdbydbuname,connect_str,stdbyscanname,stdbyscanport) + + def create_local_tns_enteries(self,dbhome,dbuname,scan_name,port,osuser,osgroup): + """ + Add enteries in tnsnames.ora + """ + tnsfile='''{0}/network/admin/tnsnames.ora'''.format(dbhome) + 
status=self.ocommon.check_file(tnsfile,"local",None,None) + key='''{0}='''.format(dbuname) + tnsentry='\n' + '''{2}=(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = {0})(PORT = {1})) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = {2})))'''.format(scan_name,port,dbuname) + + + if status: + fdata=self.ocommon.read_file(tnsfile) + match=re.search(key,fdata,re.MULTILINE) + if not match: + msg='''tnsnames.ora : {1} exist. Populating tnsentry: {0}'''.format(tnsentry,tnsfile) + self.ocommon.log_info_message(msg,self.file_name) + self.ocommon.append_file(tnsfile,tnsentry) + else: + msg='''tnsnames.ora : {1} doesn't exist, creating the file. Populating tnsentry: {0}'''.format(tnsentry,tnsfile) + self.ocommon.log_info_message(msg,self.file_name) + self.ocommon.write_file(tnsfile,tnsentry) + + cmd='''chown {1}:{2} {0}'''.format(tnsfile,osuser,osgroup) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + + def create_remote_tns_enteries(self,dbhome,dbuname,connect_str,scan_name,scan_port): + """ + Add enteries in remote tnsnames.ora + """ + sqlcmd=""" + begin + dbms_scheduler.create_job (job_name => 'OS_JOB', + job_type => 'executable', + job_action => '/opt/scripts/startup/scripts/cmdExec', + number_of_arguments => 4, + auto_drop => TRUE); + dbms_scheduler.set_job_argument_value ('OS_JOB', 1,'sudo'); + dbms_scheduler.set_job_argument_value ('OS_JOB', 2,'/usr/bin/python3'); + dbms_scheduler.set_job_argument_value ('OS_JOB', 3,'/opt/scripts/startup/scripts/main.py'); + dbms_scheduler.set_job_argument_value ('OS_JOB', 4,'--addtns=\"scan_name={0};scan_port={1};db_unique_name={2}\"'); + DBMS_SCHEDULER.RUN_JOB(JOB_NAME => 'OS_JOB',USE_CURRENT_SESSION => TRUE); + end; + / + exit; + """.format(scan_name,scan_port,dbuname) + + output=self.ocommon.run_sql_cmd(sqlcmd,connect_str) + + def copy_tnsfile(self,dbhome,osuser): + """ + Copy TNSfile to remote machine + """ + tnsfile='''{0}/network/admin/tnsnames.ora'''.format(dbhome) + self.ocommon.copy_file_cluster(tnsfile,tnsfile,osuser) + + def create_db(self): + """ + Perform the DB Creation + """ + cmd="" + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + cmd=self.prepare_db_cmd() + + dbpasswd=self.ocommon.get_db_passwd() + self.ocommon.set_mask_str(dbpasswd) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + ### Unsetting the encrypt value to None + self.ocommon.unset_mask_str() + + def prepare_db_cmd(self): + """ + Perform the asm disk group creation + """ + stdbydbuname,prmydbuname,prmydbport,prmydbname,prmyscanname=self.get_stdby_variables() + connect_str,osuser,dbhome,dbbase,oinv,osid=self.get_primary_connect_str() + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + crs_nodes=pub_nodes.replace(" ",",") + + dgname=self.ora_env_dict["CRS_ASM_DISKGROUP"] if self.ocommon.check_key("CRS_ASM_DISKGROUP",self.ora_env_dict) else "+DATA" + dbfiledest=self.ora_env_dict["DB_DATA_FILE_DEST"] if self.ocommon.check_key("DB_DATA_FILE_DEST",self.ora_env_dict) else dgname + stype=self.ora_env_dict["DB_STORAGE_TYPE"] if self.ocommon.check_key("DB_STORAGE_TYPE",self.ora_env_dict) else "ASM" + dbctype=self.ora_env_dict["DB_CONFIG_TYPE"] if self.ocommon.check_key("DB_CONFIG_TYPE",self.ora_env_dict) else "RAC" + prmydbstr='''{0}:{1}/{2}'''.format(prmyscanname,prmydbport,prmydbuname) + initparams=self.get_init_params() + #memorypct=self.get_memorypct() + + rspdata='''su - {0} -c "echo HIDDEN_STRING | 
{1}/bin/dbca -silent -ignorePrereqFailure -createDuplicateDB \ + -gdbname {2} \ + -sid {3} \ + -createAsStandby \ + -adminManaged \ + -sysPassword HIDDEN_STRING \ + -datafileDestination {4} \ + -storageType {5} \ + -nodelist {6} \ + -useOMF true \ + -remoteDBConnString {7} \ + -initparams {8} \ + -dbUniqueName {3} \ + -databaseConfigType {9}"'''.format(dbuser,dbhome,prmydbname,stdbydbuname,self.ocommon.setdgprefix(dbfiledest),stype,crs_nodes,prmydbstr,initparams,dbctype) + cmd='\n'.join(line.lstrip() for line in rspdata.splitlines()) + + return cmd + + def get_init_params(self): + """ + Perform the asm disk group creation + """ + stdbydbuname,prmydbuname,prmydbport,prmydbname,prmyscanname=self.get_stdby_variables() + connect_str,osuser,dbhome,dbbase,oinv,osid=self.get_primary_connect_str() + + prmydbdg=self.ocommon.get_init_params("db_create_file_dest",connect_str) + prmylsdg=self.ocommon.get_init_params("DB_RECOVERY_FILE_DEST",connect_str) + stdbylgdg,dbrdestsize,stdbydbdg,dgname=self.get_stdby_dg_name() + dbrdest=stdbylgdg + + dbconfig="""DG_CONFIG=({0},{1})""".format(prmydbuname,stdbydbuname) + lgdest1="""LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILE,ALL_ROLE) DB_UNIQUE_NAME={0}""".format(stdbydbuname) + lgdest2="""SERVICE={0} ASYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME={0}""".format(prmydbuname) + + sgasize=self.ora_env_dict["INIT_SGA_SIZE"] if self.ocommon.check_key("INIT_SGA_SIZE",self.ora_env_dict) else None + pgasize=self.ora_env_dict["INIT_PGA_SIZE"] if self.ocommon.check_key("INIT_PGA_SIZE",self.ora_env_dict) else None + processes=self.ora_env_dict["INIT_PROCESSES"] if self.ocommon.check_key("INIT_PROCESSES",self.ora_env_dict) else None + dbuname=self.ora_env_dict["DB_UNIQUE_NAME"] if self.ocommon.check_key("DB_UNIQUE_NAME",self.ora_env_dict) else "SORCLCDB" + dgname=self.ora_env_dict["CRS_ASM_DISKGROUP"] if self.ocommon.check_key("CRS_ASM_DISKGROUP",self.ora_env_dict) else "+DATA" + dbconv="""'{0}','{1}'""".format(prmydbdg,stdbydbdg) + lgconv="""'{0}','{1}'""".format(prmylsdg,stdbylgdg) + + + cpucount=self.ora_env_dict["CPU_COUNT"] if self.ocommon.check_key("CPU_COUNT",self.ora_env_dict) else None + remotepasswdfile="REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE" + lgformat="LOG_ARCHIVE_FORMAT=%t_%s_%r.arc" + + initprm="""db_recovery_file_dest={0},db_recovery_file_dest_size={1},db_create_file_dest={2}""".format(dbrdest,dbrdestsize,stdbydbdg,remotepasswdfile,lgformat,stdbydbuname,dbconv,lgconv,prmydbname,dbconfig,lgdest1,lgdest2,prmydbuname) + + #initprm="""db_recovery_file_dest={0},db_recovery_file_dest_size={1},db_create_file_dest={2},{3},{4},db_unique_name={5},db_file_name_convert={6},log_file_name_convert={7},db_name={8},LOG_ARCHIVE_CONFIG='{9}',LOG_ARCHIVE_DEST_1='{10}',LOG_ARCHIVE_DEST_2='{11}',STANDBY_FILE_MANAGEMENT='AUTO',FAL_SERVER={12}""".format(dbrdest,dbrdestsize,stdbydbdg,remotepasswdfile,lgformat,stdbydbuname,dbconv,lgconv,prmydbname,dbconfig,lgdest1,lgdest2,prmydbuname) + + if sgasize: + initprm= initprm + ''',sga_target={0},sga_max_size={0}'''.format(sgasize) + + if pgasize: + initprm= initprm + ''',pga_aggregate_size={0}'''.format(pgasize) + + if processes: + initprm= initprm + ''',processes={0}'''.format(processes) + + if cpucount: + initprm= initprm + ''',cpu_count={0}'''.format(cpucount) + + initparams='''{0}'''.format(initprm) + + return initparams + + def configure_standby_db(self): + """ + Setup standby after creation using DBCA + """ + stdbydbuname,prmydbuname,prmydbport,prmydbname,prmyscanname=self.get_stdby_variables() + 
connect_str,osuser,dbhome,dbbase,oinv,osid=self.get_standby_connect_str() + stdbyscanname=self.ora_env_dict["SCAN_NAME"] if self.ocommon.check_key("SCAN_NAME",self.ora_env_dict) else self.prog_exit("127") + stdbyscanport=self.ora_env_dict["SCAN_PORT"] if self.ocommon.check_key("SCAN_PORT",self.ora_env_dict) else "1521" + prmytnssvc=self.ocommon.get_tnssvc_str(prmydbuname,prmydbport,prmyscanname) + stdbytnssvc=self.ocommon.get_tnssvc_str(stdbydbuname,stdbyscanport,stdbyscanname) + + msg='''Setting parameters in standby DB''' + self.ocommon.log_info_message(msg,self.file_name) + stdbylgdg,dbrdestsize,stdbydbdg,dgname=self.get_stdby_dg_name() + lgdest1="""LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME={0}""".format(stdbydbuname) + lgdest2='''SERVICE="{0}" ASYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME={1}'''.format(prmytnssvc,prmydbuname) + + + sqlcmd=""" + alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=({2},{3})' scope=both sid='*'; + alter system set dg_broker_config_file1='{4}' scope=spfile sid='*'; + alter system set dg_broker_config_file2='{4}' scope=spfile sid='*'; + alter system set FAL_SERVER='{5}' scope=both sid='*'; + alter system set dg_broker_start=true scope=both sid='*'; + alter system set LOG_ARCHIVE_DEST_1='{0}' scope=both sid='*'; + alter system set LOG_ARCHIVE_DEST_2='{1}' scope=both sid='*'; + alter system set LOG_ARCHIVE_DEST_STATE_1=ENABLE scope=both sid='*'; + alter system set LOG_ARCHIVE_DEST_STATE_2=ENABLE scope=both sid='*'; + alter system set DB_FILES=1024 scope=spfile sid='*'; + alter system set LOG_BUFFER=256M scope=spfile sid='*'; + alter system set DB_BLOCK_CHECKSUM='TYPICAL' scope=spfile sid='*'; + alter system set DB_LOST_WRITE_PROTECT='TYPICAL' scope=spfile sid='*'; + alter system set DB_FLASHBACK_RETENTION_TARGET=120 scope=spfile sid='*'; + alter system set PARALLEL_THREADS_PER_CPU=1 scope=spfile sid='*'; + alter database recover managed standby database cancel; + alter database flashback on; + alter database recover managed standby database disconnect; + """.format(lgdest1,lgdest2,prmydbuname,stdbydbuname,stdbydbdg,prmytnssvc) + + output=self.ocommon.run_sql_cmd(sqlcmd,connect_str) + hostname = self.ocommon.get_public_hostname() + self.ocommon.stop_rac_db(osuser,dbhome,stdbydbuname,hostname) + self.ocommon.start_rac_db(osuser,dbhome,stdbydbuname,hostname,None) + + def configure_dgsetup(self): + """ + Setup Data Guard + """ + stdbydbuname,prmydbuname,prmydbport,prmydbname,prmyscanname=self.get_stdby_variables() + osuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + hostname = self.ocommon.get_public_hostname() + inst_sid=self.ocommon.get_inst_sid(osuser,dbhome,stdbydbuname,hostname) + connect_str=self.ocommon.get_dgmgr_str(dbhome,inst_sid,osuser,"sys","HIDDEN_STRING",prmyscanname,prmydbport,prmydbuname,None,"sysdba",None) + stdbyscanname=self.ora_env_dict["SCAN_NAME"] if self.ocommon.check_key("SCAN_NAME",self.ora_env_dict) else self.prog_exit("127") + stdbyscanport=self.ora_env_dict["SCAN_PORT"] if self.ocommon.check_key("SCAN_PORT",self.ora_env_dict) else "1521" + prmytnssvc=self.ocommon.get_tnssvc_str(prmydbuname,prmydbport,prmyscanname) + stdbytnssvc=self.ocommon.get_tnssvc_str(stdbydbuname,stdbyscanport,stdbyscanname) + + dgcmd=''' + create configuration '{0}' as primary database is {0} connect identifier is "{2}"; + ADD DATABASE {1} AS CONNECT IDENTIFIER IS "{3}"; + enable configuration; + exit; + '''.format(prmydbuname,stdbydbuname,prmytnssvc,stdbytnssvc) + dbpasswd=self.ocommon.get_db_passwd() + 
self.ocommon.set_mask_str(dbpasswd)
+        output,error,retcode=self.ocommon.run_sqlplus(connect_str,dgcmd,None)
+        self.ocommon.log_info_message("Calling check_sql_err() to validate the sql command return status",self.file_name)
+        self.ocommon.check_dgmgrl_err(output,error,retcode,None)
+        self.ocommon.unset_mask_str()
+
+
+    def restart_db(self):
+        """
+        Restart the standby RAC DB
+        """
+        stdbydbuname,prmydbuname,prmydbport,prmydbname,prmyscanname=self.get_stdby_variables()
+        connect_str,osuser,dbhome,dbbase,oinv,osid=self.get_standby_connect_str()
+        hostname = self.ocommon.get_public_hostname()
+        self.ocommon.stop_rac_db(osuser,dbhome,stdbydbuname,hostname)
+        self.ocommon.start_rac_db(osuser,dbhome,stdbydbuname,hostname,None)
+
+
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/orasetupenv.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/orasetupenv.py
new file mode 100755
index 0000000000..ee435bc8c7
--- /dev/null
+++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/orasetupenv.py
@@ -0,0 +1,794 @@
+#!/usr/bin/python
+
+#############################
+# Copyright 2021, Oracle Corporation and/or affiliates. All rights reserved.
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl
+# Author: paramdeep.saini@oracle.com
+############################
+
+"""
+ This file contains the code to call the different class objects based on the setup type
+"""
+
+from oralogger import *
+from oraenv import *
+from oracommon import *
+from oramachine import *
+from orasshsetup import *
+from oracvu import *
+
+import os
+import re
+import sys
+import traceback
+import itertools
+from time import sleep, perf_counter
+#from threading import Thread
+from multiprocessing import Process
+
+class OraSetupEnv:
+    """
+    This class sets up the env before setting up the RAC env
+    """
+    def __init__(self,oralogger,orahandler,oraenv,oracommon,oracvu,orasetupssh):
+        try:
+            self.ologger = oralogger
+            self.ohandler = orahandler
+            self.oenv = oraenv.get_instance()
+            self.ocommon = oracommon
+            self.ocvu = oracvu
+            self.osetupssh = orasetupssh
+            self.ora_env_dict = oraenv.get_env_vars()
+            self.file_name = os.path.basename(__file__)
+        except BaseException as ex:
+            ex_type, ex_value, ex_traceback = sys.exc_info()
+            trace_back = traceback.extract_tb(ex_traceback)
+            stack_trace = list()
+            for trace in trace_back:
+                stack_trace.append("File : %s , Line : %d, Func.Name : %s, Message : %s" % (trace[0], trace[1], trace[2], trace[3]))
+            self.ocommon.log_info_message(ex_type.__name__,self.file_name)
+            self.ocommon.log_info_message(ex_value,self.file_name)
+            self.ocommon.log_info_message(stack_trace,self.file_name)
+
+    def setup(self):
+        """
+        This function sets up the env on this machine before the grid and RAC setup
+        """
+
+        self.ocommon.log_info_message("Start setup()",self.file_name)
+        ct = datetime.datetime.now()
+        bts = ct.timestamp()
+        if self.ocommon.check_key("RESET_PASSWORD",self.ora_env_dict):
+            self.ocommon.log_info_message("RESET_PASSWORD variable is set. 
Resetting the OS password for users: " + self.ora_env_dict["RESET_PASSWORD"],self.file_name) + for user in self.ora_env_dict["RESET_PASSWORD"].split(','): + self.ocommon.reset_os_password(user) + elif self.ocommon.check_key("CUSTOM_RUN_FLAG",self.ora_env_dict): + self.populate_env_vars() + else: + if self.ocommon.check_key("DBCA_RESPONSE_FILE",self.ora_env_dict): + self.ocommon.update_rac_env_vars_from_rspfile(self.ora_env_dict["DBCA_RESPONSE_FILE"]) + if not self.ocommon.check_key("SINGLE_NETWORK",self.ora_env_dict): + install_node,pubhost=self.ocommon.get_installnode() + if install_node.lower() == pubhost.lower(): + if not self.ocommon.check_key("GRID_RESPONSE_FILE",self.ora_env_dict): + self.validate_private_nodes() + #self.ocommon.update_domainfrom_resolvconf_file() + self.populate_env_vars() + self.check_statefile() + self.env_var_checks() + self.stdby_env_var_checks() + self.set_gateway() + self.add_ntp_conf() + self.touch_fstab() + self.reset_systemd() + self.check_systemd() + self.set_ping_permission() + self.set_common_script() + self.add_domain_search() + self.add_dns_servers() + self.populate_etchosts("localhost") + self.populate_user_profiles() + #self.setup_ssh_for_k8s() + self.setup_gi_sw() + self.set_asmdev_perm() + self.reset_grid_user_passwd() + self.setup_db_sw() + self.adjustMemlockLimits() + self.reset_db_user_passwd() + # self.ocommon.log_info_message("Start crs_sw_install()",self.file_name) + # self.crs_sw_install() + # self.ocommon.log_info_message("End crs_sw_install()",self.file_name) + self.setup_ssh_for_k8s() + self.set_banner() + + ct = datetime.datetime.now() + ets = ct.timestamp() + totaltime=ets - bts + self.ocommon.log_info_message("Total time for setup() = [ " + str(round(totaltime,3)) + " ] seconds",self.file_name) + + ########### SETUP_MACHINE ENDS here #################### + + ## Function to perfom DB checks ###### + def populate_env_vars(self): + """ + Populate the env vars if not set + """ + self.ocommon.populate_rac_env_vars() + if self.ocommon.check_key("CRS_GPC",self.ora_env_dict): + if self.ocommon.ora_env_dict["CRS_GPC"].lower() == 'true': + self.ora_env_dict=self.ocommon.add_key("DB_CONFIG_TYPE","SINGLE",self.ora_env_dict) + pubnode=self.ocommon.get_public_hostname() + crs_nodes="pubhost="+pubnode + if not self.ocommon.check_key("CRS_NODES",self.ora_env_dict): + self.ora_env_dict=self.ocommon.add_key("CRS_NODES",crs_nodes,self.ora_env_dict) + else: + self.ora_env_dict=self.ocommon.update_key("CRS_NODES",crs_nodes,self.ora_env_dict) + else: + if not self.ocommon.check_key("CRS_NODES",self.ora_env_dict): + msg="CRS_NODES is not passed as an env variable. If CRS_NODES is not passed as env variable then user must pass PUBLIC_HOSTS,VIRTUAL_HOSTS and PRIVATE_HOST as en env variable so that CRS_NODES can be populated." 
+ self.ocommon.log_error_message(msg,self.file_name) + self.populate_crs_nodes() + + def check_statefile(self): + """ + populate the state file + """ + file=self.oenv.statelogfile_name() + if not self.ocommon.check_file(file,"local",None,None): + self.ocommon.create_file(file,"local",None,None) + if self.ocommon.check_key("OP_TYPE",self.ora_env_dict): + if self.ora_env_dict["OP_TYPE"] == 'setuprac': + self.ocommon.update_statefile("provisioning") + elif self.ora_env_dict["OP_TYPE"] == 'nosetup': + self.ocommon.update_statefile("provisioning") + elif self.ora_env_dict["OP_TYPE"] == 'addnode': + self.ocommon.update_statefile("addnode") + else: + pass + + def populate_crs_nodes(self): + """ + Populate CRS_NODES variable using PUBLIC_HOSTS,VIRTUAL_HOSTS and PRIVATE_HOSTS + """ + pub_node_list=[] + virt_node_list=[] + priv_node_list=[] + + crs_nodes="" + if not self.ocommon.check_key("PUBLIC_HOSTS",self.ora_env_dict): + self.ocommon.log_error_message("PUBLIC_HOSTS list is not found in env variable list.Exiting...",self.file_name) + self.ocommon.prog_exit("127") + else: + pub_node_list=self.ora_env_dict["PUBLIC_HOSTS"].split(",") + + if not self.ocommon.check_key("VIRTUAL_HOSTS",self.ora_env_dict): + self.ocommon.log_error_message("VIRTUAL_HOSTS list is not found in env variable list.Exiting...",self.file_name) + self.ocommon.prog_exit("127") + else: + virt_node_list=self.ora_env_dict["VIRTUAL_HOSTS"].split(",") + + if not self.ocommon.check_key("CRS_PRIVATE_IP1",self.ora_env_dict) and not self.ocommon.check_key("CRS_PRIVATE_IP2",self.ora_env_dict): + if self.ocommon.check_key("PRIVATE_HOSTS",self.ora_env_dict): + priv_node_list=self.ora_env_dict["PRIVATE_HOSTS"].split(",") + + if not self.ocommon.check_key("SINGLE_NETWORK",self.ora_env_dict): + if len(pub_node_list) == len(virt_node_list) and len(pub_node_list) == len(priv_node_list): + for (pubnode,vipnode,privnode) in zip(pub_node_list,virt_node_list,priv_node_list): + crs_nodes= crs_nodes + "pubhost=" + pubnode + "," + "viphost=" + vipnode + "," + "privhost=" + privnode + ";" + else: + if len(pub_node_list) == len(virt_node_list): + for (pubnode,vipnode,privnode) in zip(pub_node_list,virt_node_list): + crs_nodes= crs_nodes + "pubhost=" + pubnode + "," + "viphost=" + vipnode + ";" + else: + self.ocommon.log_error_message("public node and virtual host node count is not equal",self.file_name) + self.ocommon.prog_exit("127") + else: + if len(pub_node_list) == len(virt_node_list): + for (pubnode,vipnode,privnode) in zip(pub_node_list,virt_node_list): + crs_nodes= crs_nodes + "pubhost=" + pubnode + "," + "viphost=" + vipnode + ";" + + crs_nodes=crs_nodes.strip(";") + self.ora_env_dict=self.ocommon.add_key("CRS_NODES",crs_nodes,self.ora_env_dict) + self.ocommon.log_info_message("CRS_NODES is populated: " + self.ora_env_dict["CRS_NODES"] ,self.file_name) + + def validate_private_nodes(self): + """ + This function validate the private network + """ + priv_node_status=False + + if self.ocommon.check_key("PRIVATE_HOSTS",self.ora_env_dict): + priv_node_status=True + priv_node_list=self.ora_env_dict["PRIVATE_HOSTS"].split(",") + else: + self.ocommon.log_info_message("PRIVATE_HOSTS is not set.",self.file_name) + + if self.ocommon.check_key("CRS_GPC",self.ora_env_dict): + pubnode=self.ocommon.get_public_hostname() + domain=self.ora_env_dict["PUBLIC_HOSTS_DOMAIN"] if self.ocommon.check_key("PUBLIC_HOSTS_DOMAIN",self.ora_env_dict) else self.ocommon.get_host_domain() + if domain is None: + self.ocommon.log_error_message("PUBLIC_HOSTS_DOMAIN is not 
set.",self.file_name) + value=self.ocommon.get_ip(pubnode,domain) + if not self.ocommon.check_key("CRS_PRIVATE_IP1",self.ora_env_dict): + self.ora_env_dict=self.ocommon.add_key("CRS_PRIVATE_IP1",value,self.ora_env_dict) + else: + self.ora_env_dict=self.ocommon.update_key("CRS_PRIVATE_IP1",value,self.ora_env_dict) + priv_node_status=True + else: + if self.ocommon.check_key("CRS_PRIVATE_IP1",self.ora_env_dict): + priv_node_status=True + priv_ip1_list=self.ora_env_dict["CRS_PRIVATE_IP1"].split(",") + for ip in priv_ip1_list: + self.ocommon.ping_ip(ip,True) + else: + self.ocommon.log_info_message("CRS_PRIVATE_IP1 is not set.",self.file_name) + + if self.ocommon.check_key("CRS_PRIVATE_IP2",self.ora_env_dict): + priv_node_status=True + priv_ip2_list=self.ora_env_dict["CRS_PRIVATE_IP2"].split(",") + for ip in priv_ip2_list: + self.ocommon.ping_ip(ip,True) + else: + self.ocommon.log_info_message("CRS_PRIVATE_IP2 is not set.",self.file_name) + + if not priv_node_status: + self.ocommon.log_error_message("PRIVATE_HOSTS or CRS_PRIVATE_IP1 or CRS_PRIVATE_IP2 list is not found in env variable list.Exiting...",self.file_name) + self.ocommon.prog_exit("127") + + def env_var_checks(self): + """ + check the env vars + """ + self.ocommon.check_env_variable("GRID_HOME",True) + self.ocommon.check_env_variable("GRID_BASE",True) + self.ocommon.check_env_variable("INVENTORY",True) + self.ocommon.check_env_variable("DB_HOME",False) + self.ocommon.check_env_variable("DB_BASE",False) + + def stdby_env_var_checks(self): + """ + Check the stby env variable + """ + if self.ocommon.check_key("OP_TYPE",self.ora_env_dict): + if self.ora_env_dict["OP_TYPE"] == 'setupracstandby': + self.ocommon.check_env_variable("DB_UNIQUE_NAME",False) + self.ocommon.check_env_variable("PRIMARY_DB_SCAN_PORT",False) + self.ocommon.check_env_variable("PRIMARY_DB_NAME",True) + self.ocommon.check_env_variable("PRIMARY_DB_SERVICE_NAME",False) + self.ocommon.check_env_variable("PRIMARY_DB_UNIQUE_NAME",True) + self.ocommon.check_env_variable("PRIMARY_DB_SCAN_NAME",True) + + def set_gateway(self): + """ + Set the default gateway + """ + if self.ocommon.check_key("DEFAULT_GATEWAY",self.ora_env_dict): + self.ocommon.log_info_message("DEFAULT_GATEWAY variable is set. Validating the gateway gw",self.file_name) + if self.ocommon.validate_ip(self.ora_env_dict["DEFAULT_GATEWAY"]): + #cmd='''ip route; ip route del default''' + cmd='''ip route; ip route flush 0/0;ip route''' + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + ### Set the Default gw + self.ocommon.log_info_message("Setting default gateway based on new gateway setting",self.file_name) + cmd='''route add default gw {0}'''.format(self.ora_env_dict["DEFAULT_GATEWAY"]) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + else: + self.ocommon.log_error_message("DEFAULT_GATEWAY IP is not correct. Exiting..",self.file_name) + self.ocommon.prog_exit("NONE") + + def add_ntp_conf(self): + """ + This function start the NTP daemon + """ + if self.ocommon.check_key("NTP_START",self.ora_env_dict): + self.ocommon.log_info_message("NTP_START variable is set. Touching /etc/ntpd.conf",self.file_name) + cmd='''touch /etc/ntp.conf''' + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + ### Start NTP + self.ocommon.log_info_message("NTP_START variable is set. 
Starting NTPD",self.file_name) + cmd='''systemctl start ntpd''' + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + + def populate_etchosts(self,entry): + """ + Populating hosts file + """ + cmd=None + etchostfile="/etc/hosts" + if not self.ocommon.detect_k8s_env(): + if self.ocommon.check_key("HOSTFILE",self.ora_env_dict): + if os.path.exists(self.ora_env_dict["HOSTFILE"]): + cmd='''cat {0} > /etc/hosts'''.format(self.ora_env_dict["HOSTFILE"]) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + else: + lentry='''127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 \n::1 localhost localhost.localdomain localhost6 localhost6.localdomain6''' + + self.write_etchost("localhost.localdomain",etchostfile,"write",lentry) + if not self.ocommon.check_key("CRS_GPC",self.ora_env_dict): + if self.ocommon.check_key("CRS_NODES",self.ora_env_dict): + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + pub_nodes1=pub_nodes.replace(" ",",") + vip_nodes1=vip_nodes.replace(" ",",") + for node in pub_nodes1.split(","): + self.ocommon.log_info_message("The node set to :" + node + "-" + pub_nodes1,self.file_name) + self.write_etchost(node,etchostfile,"append",None) + for node in vip_nodes1.split(","): + self.write_etchost(node,etchostfile,"append",None) + + def write_etchost(self,node,file,mode,lentry): + """ + This funtion write an entry to /etc/host if the entry doesn't exit + """ + if node == "": + self.ocommon.log_info_message("write_etchost(): Node is : [NULL]. PASS",self.file_name) + return + if mode == 'append': + #fdata=self.ocommon.read_file(file) + #match=re.search(node,fdata,re.MULTILINE) + #if not match: + domain=self.ora_env_dict["PUBLIC_HOSTS_DOMAIN"] if self.ocommon.check_key("PUBLIC_HOSTS_DOMAIN",self.ora_env_dict) else self.ocommon.get_host_domain() + if domain is None: + self.ocommon.log_error_message("PUBLIC_HOSTS_DOMAIN is not set.",self.file_name) + if self.ocommon.check_key("PUBLIC_HOSTS_DOMAIN",self.ora_env_dict): + self.ora_env_dict=self.ocommon.update_key("PUBLIC_HOSTS_DOMAIN",domain,self.ora_env_dict) + else: + self.ora_env_dict=self.ocommon.add_key("PUBLIC_HOSTS_DOMAIN",domain,self.ora_env_dict) + self.ocommon.log_info_message("Domain is :" + self.ora_env_dict["PUBLIC_HOSTS_DOMAIN"],self.file_name) + self.ocommon.log_info_message("The hostname :" + node + "." + domain,self.file_name) + ip=self.ocommon.get_ip(node,domain) + # self.ocommon.log_info_message(" The Ip set to :", ip) + entry='''{0} {1} {2}'''.format(ip,node + "." 
+ domain,node) + # self.ocommon.log_info_message(" The entry set to :", entry) + cmd='''echo {0} >> {1}'''.format(entry,file) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + elif mode == 'write': + #fdata=self.ocommon.read_file(file) + #match=re.search(node,fdata,re.MULTILINE) + #if not match: + #self.ocommon.log_info_message(" The lentry set to :", lentry) + cmd='''echo "{0}" > "{1}"'''.format(lentry,file) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + else: + pass + + def touch_fstab(self): + """ + This function toch fstab + """ + cmd='''touch /etc/fstab''' + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + + def reset_systemd(self): + """ + This function reset the systemd + """ + self.ocommon.log_info_message("Checking systemd failed units.",self.file_name) + cmd="""systemctl | grep failed | awk '{ print $2 }'""" + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + self.ocommon.log_info_message("Disabling failed units.",self.file_name) + if output: + for svc in output.split('\n'): + if svc: + cmd='''systemctl disable {0}'''.format(svc) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + self.ocommon.log_info_message("Resetting systemd.",self.file_name) + cmd='''systemctl reset-failed''' + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + + def check_systemd(self): + """ + This function check systemd and exit the program if systemd status is not running + """ + self.ocommon.log_info_message("Checking systemd. It must be in running state to setup clusterware inside containers for clusterware.",self.file_name) + cmd="""systemctl status | awk '/State:/{ print $0 }' | grep -v 'awk /State:/' | awk '{ print $2 }'""" + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + if 'running' in output: + self.ocommon.log_info_message("Systemctl status check passed!",self.file_name) + else: + self.ocommon.log_error_message("Systemctl is not in running state.",self.file_name) + #self.ocommon.prog_exit("None") + + def set_ping_permission(self): + """ + setting ping permission + """ + pass + #self.ocommon.log_info_message("Setting ping utility permissions so that it works correctly inside container",self.file_name) + #cmd='''chmod 6755 /usr/bin/ping;chmod 6755 /bin/ping''' + #output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + #self.ocommon.check_os_err(output,error,retcode,None) + + def adjustMemlockLimits(self): + """ + Adjust the soft and hard memory limits for the oracle db + """ + oracleDBConfigFile=None + gridDBConfigFile=None + memoryFile=None + + cmd='''mount | grep -i cgroup | awk \'{ print $1 }\'''' + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + oraversion=self.ocommon.get_rsp_version("INSTALL",None) + version=oraversion.split(".",1)[0].strip() + if int(version) < 23: + oracleDBConfigFile="/etc/security/limits.d/oracle-database-preinstall-23c.conf" + gridDBConfigFile="/etc/security/limits.d/grid-database-preinstall-23c.conf" + else: + oracleDBConfigFile="/etc/security/limits.d/oracle-database-preinstall-23ai.conf" + gridDBConfigFile="/etc/security/limits.d/grid-database-preinstall-23ai.conf" + + cgroupVersion=output.strip() + if cgroupVersion == 'cgroup2': + memoryFile="/sys/fs/cgroup/memory.max" + 
else: + memoryFile="/sys/fs/cgroup/memory/memory.limit_in_bytes" + + if self.ocommon.check_file(memoryFile,"local",None,None): + self.ocommon.log_info_message("memoryFile=[" + memoryFile + "]",self.file_name) + + cmd='''expr `cat {0}` / 1024'''.format(memoryFile) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + containerMemory=output.strip() + cmd='''expr {0} \* 9 / 10'''.format(containerMemory) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + containerMemory=output.strip() + self.ocommon.log_info_message("containerMemory=[" + containerMemory + "]",self.file_name) + + cmd='''grep " memlock " {0} | grep -v "^#" | grep hard | awk \'{{ print $4 }}\''''.format(oracleDBConfigFile) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + fileMemlockVal=output.strip() + + cmd='''sed -i -e \'s,{0},{1},g\' {2}'''.format(fileMemlockVal,containerMemory,oracleDBConfigFile) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + + cmd='''grep " memlock " {0} | grep -v "^#" | grep hard | awk \'{{ print $4 }}\''''.format(gridDBConfigFile) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + fileMemlockVal=output.strip() + + cmd='''sed -i -e \'s,{0},{1},g\' {2}'''.format(fileMemlockVal,containerMemory,gridDBConfigFile) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + + def set_common_script(self): + """ + This function set the 775 permission on common script dir + """ + if self.ocommon.check_key("COMMON_SCRIPTS",self.ora_env_dict): + self.ocommon.log_info_message("COMMON_SCRIPTS variable is set.",self.file_name) + if os.path.isdir(self.ora_env_dict["COMMON_SCRIPTS"]): + self.ocommon.log_info_message("COMMON_SCRIPT variable is set. Changing permissions and ownership",self.file_name) + cmd='''chown -R grid:oinstall {0}; chmod 775 {0}'''.format(self.ora_env_dict["COMMON_SCRIPTS"]) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + else: + self.ocommon.log_info_message("COMMON_SCRIPT variable is set but directory doesn't exist!",self.file_name) + + def set_asmdev_perm(self): + """ + This function set the correct permissions for ASM Disks + """ + self.ocommon.set_asmdisk_perm("CRS_ASM_DEVICE_LIST",True) + self.ocommon.set_asmdisk_perm("REDO_ASM_DEVICE_LIST",None) + self.ocommon.set_asmdisk_perm("RECO_ASM_DEVICE_LIST",None) + self.ocommon.set_asmdisk_perm("DB_ASM_DEVICE_LIST",None) + if self.ocommon.check_key("CLUSTER_TYPE",self.ora_env_dict): + if self.ora_env_dict["CLUSTER_TYPE"] == 'DOMAIN': + if self.ocommon.check_key("GIMR_ASM_DEVICE_LIST",self.ora_env_dict): + self.ocommon.set_asmdisk_perm("GIMR_ASM_DEVICE_LIST",True) + + ## Function add DOMAIN Server + def add_domain_search(self): + """ + This function update search in /etc/resolv.conf + """ + dns_search_flag=None + search_domain='search' + if self.ocommon.check_key("PUBLIC_HOSTS_DOMAIN",self.ora_env_dict): + self.ocommon.log_info_message("PUBLIC_HOSTS_DOMAIN variable is set. Populating /etc/resolv.conf.",self.file_name) + dns_search_flag=True + for domain in self.ora_env_dict["PUBLIC_HOSTS_DOMAIN"].split(','): + search_domain = search_domain + ' ' + domain + + if self.ocommon.check_key("PRIVATE_HOSTS_DOMAIN",self.ora_env_dict): + self.ocommon.log_info_message("PRIVATE_HOSTS_DOMAIN variable is set. 
Populating /etc/resolv.conf.",self.file_name) + dns_search_flag=True + for domain in self.ora_env_dict["PRIVATE_HOSTS_DOMAIN"].split(','): + search_domain = search_domain + ' ' + domain + + if self.ocommon.check_key("CUSTOM_DOMAIN",self.ora_env_dict): + self.ocommon.log_info_message("CUSTOM_DOMAIN variable is set. Populating /etc/resolv.conf.",self.file_name) + dns_search_flag=True + for domain in self.ora_env_dict["CUSTOM_DOMAIN"].split(','): + search_domain = search_domain + ' ' + domain + + if dns_search_flag: + self.ocommon.log_info_message("Search Domain {0} is ready. Adding enteries in /etc/resolv.conf".format(search_domain),self.file_name) + cmd='''echo "{0}" > /etc/resolv.conf'''.format(search_domain) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + + ## Function to perfom grid sw installation ###### + def add_dns_servers(self): + """ + This function add the dns servers + """ + if self.ocommon.check_key("DNS_SERVERS",self.ora_env_dict): + self.ocommon.log_info_message("DNS_SERVERS variable is set. Populating /etc/resolv.conf with DNS servers.",self.file_name) + for server in self.ora_env_dict["DNS_SERVERS"].split(','): + if server not in open('/etc/resolv.conf').read(): + cmd='''echo "nameserver {0}" >> /etc/resolv.conf'''.format(server) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + else: + self.ocommon.log_info_message("DNS_SERVERS variable is not set.",self.file_name) + + ## Function to perfom oracle sw installation ###### + def setup_gi_sw(self): + """ + This function unzip the Grid and Oracle Software + """ + gihome="" + oinv="" + gibase="" + giuser="" + gigrp="" + giswfie="" + ### Unzipping Gi Software + if self.ocommon.check_key("OP_TYPE",self.ora_env_dict): + #if self.ocommon.check_key("OP_TYPE",self.ora_env_dict) and any(optype == self.ora_env_dict["OP_TYPE"] for optype not in ("racaddnode")): + if self.ocommon.check_key("STAGING_SOFTWARE_LOC",self.ora_env_dict) and self.ocommon.check_key("GRID_SW_ZIP_FILE",self.ora_env_dict): + giswfile=self.ora_env_dict["STAGING_SOFTWARE_LOC"] + "/" + self.ora_env_dict["GRID_SW_ZIP_FILE"] + if os.path.isfile(giswfile): + if not self.ocommon.check_key("COPY_GRID_SOFTWARE",self.ora_env_dict): + self.ora_env_dict=self.ocommon.add_key("COPY_GRID_SOFTWARE","True",self.ora_env_dict) + giuser,gihome,gibase,oinv=self.ocommon.get_gi_params() + gigrp=self.ora_env_dict["OINSTALL"] + self.ocommon.log_info_message("copy Software flag is set",self.file_name) + self.ocommon.log_info_message("Setting up oracle invetnory directory!",self.file_name) + self.setup_sw_dirs(oinv,giuser,gigrp) + self.ocommon.log_info_message("Setting up Grid_BASE directory!",self.file_name) + self.setup_sw_dirs(gibase,giuser,gigrp) + self.ocommon.log_info_message("Setting up Grid_HOME directory!",self.file_name) + self.setup_sw_dirs(gihome,giuser,gigrp) + dir = os.listdir(gihome) + if len(dir) == 0: + self.ocommon.log_info_message("Grid software file is set : " + giswfile ,self.file_name) + self.ocommon.log_info_message("Starting grid software unzipping file",self.file_name) + cmd='''su - {0} -c \" unzip -q {1} -d {2}\"'''.format(giuser,giswfile,gihome) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + self.ora_env_dict=self.ocommon.add_key("GI_SW_UNZIPPED_FLAG","true",self.ora_env_dict) + else: + self.ocommon.log_error_message("oracle gi home directory is not empty. 
skipping software unzipping...",self.file_name) + else: + install_node,pubhost=self.ocommon.get_installnode() + if install_node.lower() == pubhost.lower(): + self.ocommon.log_error_message("grid software file " + giswfile + " doesn't exist. Exiting...",self.file_name) + self.ocommon.prog_exit("127") + else: + self.ocommon.log_info_message("grid software file " + giswfile + " doesn't exist. software will be copied from install node..." + install_node.lower(),self.file_name) + + ## Function to unzip the software + def setup_db_sw(self): + """ + unzip the software + """ + dbhome="" + dbbase="" + dbuser="" + gigrp="" + dbswfile="" + ### Unzipping Gi Software + if self.ocommon.check_key("OP_TYPE",self.ora_env_dict): + #if self.ocommon.check_key("OP_TYPE",self.ora_env_dict) and any(optype == self.ora_env_dict["OP_TYPE"] for optype not in ("racaddnode")): + if self.ocommon.check_key("STAGING_SOFTWARE_LOC",self.ora_env_dict) and self.ocommon.check_key("DB_SW_ZIP_FILE",self.ora_env_dict): + dbswfile=self.ora_env_dict["STAGING_SOFTWARE_LOC"] + "/" + self.ora_env_dict["DB_SW_ZIP_FILE"] + if os.path.isfile(dbswfile): + if not self.ocommon.check_key("COPY_DB_SOFTWARE",self.ora_env_dict): + self.ora_env_dict=self.ocommon.add_key("COPY_DB_SOFTWARE","True",self.ora_env_dict) + dbuser,dbhome,dbbase,oinv=self.ocommon.get_db_params() + gigrp=self.ora_env_dict["OINSTALL"] + self.ocommon.log_info_message("Copy Software flag is set",self.file_name) + self.ocommon.log_info_message("Setting up ORACLE_BASE directory!",self.file_name) + self.setup_sw_dirs(dbbase,dbuser,gigrp) + self.ocommon.log_info_message("Setting up DB_HOME directory!",self.file_name) + self.setup_sw_dirs(dbhome,dbuser,gigrp) + dir = os.listdir(dbhome) + if len(dir) == 0: + self.ocommon.log_info_message("DB software file is set : " + dbswfile , self.file_name) + self.ocommon.log_info_message("Starting db software unzipping file",self.file_name) + cmd='''su - {0} -c \" unzip -q {1} -d {2}\"'''.format(dbuser,dbswfile,dbhome) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + self.ora_env_dict=self.ocommon.add_key("RAC_SW_UNZIPPED_FLAG","true",self.ora_env_dict) + else: + self.ocommon.log_error_message("oracle db home directory is not empty. skipping software unzipping...",self.file_name) + else: + install_node,pubhost=self.ocommon.get_installnode() + if install_node.lower() == pubhost.lower(): + self.ocommon.log_error_message("db software file " + dbswfile + " doesn't exist. exiting...",self.file_name) + self.ocommon.prog_exit("127") + else: + self.ocommon.log_info_message("db software file " + dbswfile + " doesn't exist. software will be copied from install node..." 
+ install_node.lower(),self.file_name) + + def setup_sw_dirs(self,dir,user,group): + """ + This function setup the Oracle Software directories if not already created + """ + if os.path.isdir(dir): + self.ocommon.log_info_message("Directory " + dir + " already exist!",self.file_name) + else: + self.ocommon.log_info_message("Creating dir " + dir,self.file_name) + cmd='''mkdir -p {0}'''.format(dir) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + #### + self.ocommon.log_info_message("Changing the permissions of directory",self.file_name) + cmd='''chown -R {0}:{1} {2}'''.format(user,group,dir) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,True) + +###### Checking GI Home ####### + def reset_grid_user_passwd(self): + """ + This function check the Gi home and if it is not setup the it will reset the GI user password + """ + if self.ocommon.check_key("OP_TYPE",self.ora_env_dict): + if self.ora_env_dict["OP_TYPE"] == 'nosetup': + if not self.ocommon.check_key("SSH_PRIVATE_KEY",self.ora_env_dict) and not self.ocommon.check_key("SSH_PUBLIC_KEY",self.ora_env_dict): + user=self.ora_env_dict["GRID_USER"] + self.ocommon.log_info_message("Resetting OS Password for OS user : " + user,self.file_name) + self.ocommon.reset_os_password(user) + +###### Checking RAC Home ####### + def reset_db_user_passwd(self): + """ + This function check the RAC home and if it is not setup the it will reset the DB user password + """ + if self.ocommon.check_key("OP_TYPE",self.ora_env_dict): + if self.ora_env_dict["OP_TYPE"] == 'nosetup': + if not self.ocommon.check_key("SSH_PRIVATE_KEY",self.ora_env_dict) and not self.ocommon.check_key("SSH_PUBLIC_KEY",self.ora_env_dict): + user=self.ora_env_dict["DB_USER"] + self.ocommon.log_info_message("Resetting OS Password for OS user : " + user,self.file_name) + self.ocommon.reset_os_password(user) + +###### Setting up parallel Oracle and Grid User setup using Keys #### + def setup_ssh_using_keys(self,sshi): + """ + Setting up ssh using keys + """ + self.ocommon.log_info_message("I am in setup_ssh_using_keys",self.file_name) + uohome=sshi.split(":") + self.ocommon.log_info_message("I am in setup_ssh_using_keys + uhome[0] and uhome[1]",self.file_name) + self.osetupssh.setupsshdirs(uohome[0],uohome[1],None) + self.osetupssh.setupsshusekey(uohome[0],uohome[1],None) + #self.osetupssh.verifyssh(uohome[0],None) + +###### Setting up ssh for K8s ####### + def setup_ssh_for_k8s(self): + """ + This function setup ssh using private and public key in K8s env + """ + if self.ocommon.check_key("SSH_PRIVATE_KEY",self.ora_env_dict) and self.ocommon.check_key("SSH_PUBLIC_KEY",self.ora_env_dict): + if self.ocommon.check_file(self.ora_env_dict["SSH_PRIVATE_KEY"],True,None,None) and self.ocommon.check_file(self.ora_env_dict["SSH_PUBLIC_KEY"],True,None,None): + self.ocommon.log_info_message("Begin SSH Setup using SSH_PRIVATE_KEY and SSH_PUBLIC_KEY",self.file_name) + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + self.ocvu.cluvfy_updcvucfg(gihome,giuser) + SSH_USERS=giuser + ":" + gihome,dbuser + ":" + dbhome + for sshi in SSH_USERS: + self.setup_ssh_using_keys(sshi) + + self.ocommon.log_info_message("End SSH Setup using SSH_PRIVATE_KEY and SSH_PUBLIC_KEY",self.file_name) + else: + if self.ocommon.detect_k8s_env(): + self.ocommon.log_error_message("SSH_PRIVATE_KEY and SSH_PUBLIC_KEY is ot set in K8s env. 
Exiting..",self.file_name) + self.ocommon.prog_exit("127") +###### Install CRS Software on node ###### + def crs_sw_install(self): + """ + This function performs the crs software install on all the nodes + """ + giuser,gihome,gibase,oinv=self.ocommon.get_gi_params() + status=True + if not self.ocommon.check_key("GI_HOME_INSTALLED_FLAG",self.ora_env_dict): + status=self.ocommon.check_home_inv(None,gihome,giuser) + if not status and self.ocommon.check_key("COPY_GRID_SOFTWARE",self.ora_env_dict): + pub_nodes,vip_nodes,priv_nodes=self.ocommon.process_cluster_vars("CRS_NODES") + crs_nodes=pub_nodes.replace(" ",",") + osdba=self.ora_env_dict["OSDBA_GROUP"] if self.ocommon.check_key("OSDBA",self.ora_env_dict) else "asmdba" + osoper=self.ora_env_dict["OSPER_GROUP"] if self.ocommon.check_key("OSPER_GROUP",self.ora_env_dict) else "asmoper" + osasm=self.ora_env_dict["OSASM_GROUP"] if self.ocommon.check_key("OSASM_GROUP",self.ora_env_dict) else "asmadmin" + unixgrp="oinstall" + hostname=self.ocommon.get_public_hostname() + lang=self.ora_env_dict["LANGUAGE"] if self.ocommon.check_key("LANGUAGE",self.ora_env_dict) else "en" + node=hostname + copyflag=" -noCopy " + if not self.ocommon.check_key("COPY_GRID_SOFTWARE",self.ora_env_dict): + copyflag=" -noCopy " + oraversion=self.ocommon.get_rsp_version("INSTALL",None) + version=oraversion.split(".",1)[0].strip() + + #self.crs_sw_install_on_node(giuser,copyflag,crs_nodes,oinv,gihome,gibase,osdba,osoper,osasm,version,node) + self.ocommon.log_info_message("Running CRS Sw install on node " + node,self.file_name) + self.ocommon.crs_sw_install_on_node(giuser,copyflag,crs_nodes,oinv,gihome,gibase,osdba,osoper,osasm,version,node) + self.ocommon.run_orainstsh_local(giuser,node,oinv) + self.ocommon.run_rootsh_local(gihome,giuser,node) + +###### Setting up ssh for K8s ####### + def populate_user_profiles(self): + """ + This function setup the user profiles if the env is k8s + """ + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + dbuser,dbhome,dbase,oinv=self.ocommon.get_db_params() + gipath='''{0}/bin:/bin:/usr/bin:/sbin:/usr/local/bin'''.format(gihome) + dbpath='''{0}/bin:/bin:/usr/bin:/sbin:/usr/local/bin'''.format(dbhome) + gildpath='''{0}/lib:/lib/:/usr/lib'''.format(gihome) + dbldpath='''{0}/lib:/lib/:/usr/lib'''.format(dbhome) + cdgihome='''cd {0}'''.format(gihome) + cddbhome='''cd {0}'''.format(dbhome) + cdgilogs='''cd {0}/diag/crs/*/crs/trace'''.format(obase) + cddblogs='''cd {0}/diag/rdbms/'''.format(dbase) + cdinvlogs='''cd {0}/logs'''.format(invloc) + + if not self.ocommon.check_key("PROFILE_FLAG",self.ora_env_dict): + self.ora_env_dict=self.ocommon.add_key("PROFILE_FLAG","TRUE",self.ora_env_dict) + + tmpdir=self.ocommon.get_tmpdir() + self.ocommon.set_user_profile(giuser,"TMPDIR",tmpdir,"export") + self.ocommon.set_user_profile(giuser,"TEMP",tmpdir,"export") + self.ocommon.set_user_profile(dbuser,"TMPDIR",tmpdir,"export") + self.ocommon.set_user_profile(dbuser,"TEMP",tmpdir,"export") + if self.ocommon.check_key("PROFILE_FLAG",self.ora_env_dict): + self.ocommon.set_user_profile(giuser,"ORACLE_HOME",gihome,"export") + self.ocommon.set_user_profile(giuser,"GRID_HOME",gihome,"export") + self.ocommon.set_user_profile(giuser,"PATH",gipath,"export") + self.ocommon.set_user_profile(giuser,"LD_LIBRARY_PATH",gildpath,"export") + self.ocommon.set_user_profile(dbuser,"ORACLE_HOME",dbhome,"export") + self.ocommon.set_user_profile(dbuser,"DB_HOME",dbhome,"export") + self.ocommon.set_user_profile(dbuser,"PATH",dbpath,"export") + 
self.ocommon.set_user_profile(dbuser,"LD_LIBRARY_PATH",dbldpath,"export") + #### Setting alias + self.ocommon.set_user_profile(giuser,"cdgihome",cdgihome,"alias") + self.ocommon.set_user_profile(giuser,"cddbhome",cddbhome,"alias") + self.ocommon.set_user_profile(dbuser,"cddbhome",cddbhome,"alias") + self.ocommon.set_user_profile(giuser,"cdgilogs",cdgilogs,"alias") + self.ocommon.set_user_profile(dbuser,"cddblogs",cddblogs,"alias") + self.ocommon.set_user_profile(dbuser,"cdinvlogs",cdinvlogs,"alias") + self.ocommon.set_user_profile(giuser,"cdinvlogs",cdinvlogs,"alias") + + +##### Set the banner ### + def set_banner(self): + """ + This function set the banner + """ + if self.ocommon.check_key("OP_TYPE",self.ora_env_dict): + if self.ocommon.check_key("GI_SW_UNZIPPED_FLAG",self.ora_env_dict) and self.ora_env_dict["OP_TYPE"] == 'nosetup': + msg="Since OP_TYPE is setup to default value(nosetup),setup will be initated by other nodes based on the value OP_TYPES" + self.ocommon.log_info_message(self.ocommon.print_banner(msg),self.file_name) + elif self.ocommon.check_key("GI_SW_UNZIPPED_FLAG",self.ora_env_dict) and self.ora_env_dict["OP_TYPE"] != 'nosetup': + msg="Since OP_TYPE is set to " + self.ora_env_dict["OP_TYPE"] + " ,setup will be initated on this node" + self.ocommon.log_info_message(self.ocommon.print_banner(msg),self.file_name) + else: + giuser,gihome,obase,invloc=self.ocommon.get_gi_params() + pubhostname = self.ocommon.get_public_hostname() + retcode1=self.ocvu.check_home(pubhostname,gihome,giuser) + if retcode1 == 0: + self.ora_env_dict=self.ocommon.add_key("GI_HOME_INSTALLED_FLAG","true",self.ora_env_dict) + status=self.ocommon.check_gi_installed(retcode1,gihome,giuser,pubhostname,invloc) + if status: + msg="Grid is already installed on this machine" + self.ocommon.log_info_message(self.ocommon.print_banner(msg),self.file_name) + self.ora_env_dict=self.ocommon.add_key("GI_HOME_CONFIGURED_FLAG","true",self.ora_env_dict) + else: + msg="Grid is not installed on this machine" + self.ocommon.log_info_message(self.ocommon.print_banner(msg),self.file_name) diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/orasshsetup.py b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/orasshsetup.py new file mode 100755 index 0000000000..d01f802633 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/orasshsetup.py @@ -0,0 +1,256 @@ +#!/usr/bin/python + +############################# +# Copyright 2021, Oracle Corporation and/or affiliates. All rights reserved. 
+# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl
+# Author: paramdeep.saini@oracle.com
+############################
+
+"""
+ This file contains the code to call the different class objects based on the setup type
+"""
+
+from oralogger import *
+from oraenv import *
+from oracommon import *
+from oramachine import *
+from orasetupenv import *
+from orasshsetup import *
+from oracvu import *
+
+import os
+import sys
+import time
+import datetime
+import traceback
+
+class OraSetupSSH:
+    """
+    This class sets up SSH in the env before setting up the RAC env
+    """
+    def __init__(self,oralogger,orahandler,oraenv,oracommon):
+        try:
+            self.ologger = oralogger
+            self.ohandler = orahandler
+            self.oenv = oraenv.get_instance()
+            self.ocommon = oracommon
+            self.ora_env_dict = oraenv.get_env_vars()
+            self.file_name = os.path.basename(__file__)
+        except BaseException as ex:
+            ex_type, ex_value, ex_traceback = sys.exc_info()
+            trace_back = traceback.extract_tb(ex_traceback)
+            stack_trace = list()
+            for trace in trace_back:
+                stack_trace.append("File : %s , Line : %d, Func.Name : %s, Message : %s" % (trace[0], trace[1], trace[2], trace[3]))
+            self.ocommon.log_info_message(ex_type.__name__,self.file_name)
+            self.ocommon.log_info_message(ex_value,self.file_name)
+            self.ocommon.log_info_message(stack_trace,self.file_name)
+    def setup(self):
+        """
+        This function sets up SSH between compute nodes
+        """
+        self.ocommon.log_info_message("Start setup()",self.file_name)
+        ct = datetime.datetime.now()
+        bts = ct.timestamp()
+        if self.ocommon.check_key("SKIP_SSH_SETUP",self.ora_env_dict):
+            self.ocommon.log_info_message("Skipping SSH setup as SKIP_SSH_SETUP flag is set",self.file_name)
+        else:
+            SSH_USERS=[self.ora_env_dict["GRID_USER"] + ":" + self.ora_env_dict["GRID_HOME"],self.ora_env_dict["DB_USER"] + ":" + self.ora_env_dict["DB_HOME"]]
+            if (self.ocommon.check_key("SSH_PRIVATE_KEY",self.ora_env_dict)) and (self.ocommon.check_key("SSH_PUBLIC_KEY",self.ora_env_dict)):
+                if self.ocommon.check_file(self.ora_env_dict["SSH_PRIVATE_KEY"],True,None,None) and self.ocommon.check_file(self.ora_env_dict["SSH_PUBLIC_KEY"],True,None,None):
+                    for sshi in SSH_USERS:
+                        uohome=sshi.split(":")
+                        self.setupsshusekey(uohome[0],uohome[1],None)
+                        #self.verifyssh(uohome[0],None)
+            else:
+                for sshi in SSH_USERS:
+                    uohome=sshi.split(":")
+                    exiting_cls_node=self.ocommon.get_existing_clu_nodes(False)
+                    if exiting_cls_node:
+                        self.setupssh(uohome[0],uohome[1],"ADDNODE")
+                    else:
+                        self.setupssh(uohome[0],uohome[1],"INSTALL")
+
+                    #self.verifyssh(uohome[0],None)
+
+        ct = datetime.datetime.now()
+        ets = ct.timestamp()
+        totaltime=ets - bts
+        self.ocommon.log_info_message("Total time for setup() = [ " + str(round(totaltime,3)) + " ] seconds",self.file_name)
+
+    def setupssh(self,user,ohome,ctype):
+        """
+        This function sets up SSH for the given user when the SKIP_SSH_SETUP flag is not set
+        """
+        self.ocommon.reset_os_password(user)
+        passwd=self.ocommon.get_os_password()
+        password=passwd.replace("\n", "")
+        giuser,gihome,gibase,oinv=self.ocommon.get_gi_params()
+        expect=self.ora_env_dict["EXPECT"] if self.ocommon.check_key("EXPECT",self.ora_env_dict) else "/bin/expect"
+        script_dir=self.ora_env_dict["SSHSCR_DIR"] if self.ocommon.check_key("SSHSCR_DIR",self.ora_env_dict) else "/opt/scripts/startup/scripts"
+
+        sshscr=self.ora_env_dict["SSHSCR"] if self.ocommon.check_key("SSHSCR",self.ora_env_dict) else "bin/cluvfy"
+        if user == 'grid':
+            sshscr="runcluvfy.sh"
+        else:
+            sshscr="bin/cluvfy"
+        file='''{0}/{1}'''.format(gihome,sshscr)
+        if not 
self.ocommon.check_file(file,"local",None,None): + sshscr="runcluvfy.sh" + + cluster_nodes="" + # Run ssh-keyscan for each node + oraversion=self.ocommon.get_rsp_version("INSTALL",None) + version = oraversion.split(".", 1)[0].strip() + if ctype == 'INSTALL': + cluster_nodes=self.ocommon.get_cluster_nodes() + cluster_nodes = cluster_nodes.replace(" ",",") + i=0 + while i < 5: + self.ocommon.log_info_message('''SSH setup in progress. Count set to {0}'''.format(i),self.file_name) + self.ocommon.set_mask_str(password.strip()) + if int(version) == 19 or int(version) == 21: + self.performsshsetup(user,gihome,sshscr,cluster_nodes,version,password,i,expect,script_dir) + else: + self.performsshsetup(user,gihome,sshscr,cluster_nodes,version,password,i,expect,script_dir) + retcode=self.verifyssh(user,gihome,sshscr,cluster_nodes,version) + if retcode == 0: + break + else: + i = i + 1 + self.ocommon.log_info_message('''SSH setup verification failed. Trying again..''',self.file_name) + elif ctype == 'ADDNODE': + cluster_nodes=self.ocommon.get_cluster_nodes() + cluster_nodes = cluster_nodes.replace(" ",",") + exiting_cls_node=self.ocommon.get_existing_clu_nodes(True) + new_nodes=cluster_nodes + "," + exiting_cls_node + cmd='''su - {0} -c "rm -rf ~/.ssh ; mkdir -p ~/.ssh ; chmod 700 ~/.ssh"'''.format(user) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,False) + i=0 + while i < 5: + # Run ssh-keyscan for each node + for node in cluster_nodes.split(","): + self.ocommon.log_info_message(f"Adding {node} to known_hosts.", self.file_name) + keyscan_cmd = '''su - {0} -c "ssh-keyscan -H {1} >> ~/.ssh/known_hosts"'''.format(user, node) + keyscan_output, keyscan_error, keyscan_retcode = self.ocommon.execute_cmd(keyscan_cmd, None, None) + self.ocommon.check_os_err(keyscan_output, keyscan_error, keyscan_retcode, False) + self.performsshsetup(user,gihome,sshscr,new_nodes,version,password,i,expect,script_dir) + retcode=self.verifyssh(user,gihome,sshscr,new_nodes,version) + if retcode == 0: + break + else: + i = i + 1 + self.ocommon.log_info_message('''SSH setup verification failed. Trying again..''',self.file_name) + else: + cluster_nodes=self.ocommon.get_cluster_nodes() + + def verifyssh(self,user,gihome,sshscr,cls_nodes,version): + """ + This function setup the ssh between user as SKIP_SSH_SETUP flag is not set + """ + self.ocommon.log_info_message("Verifying SSH between nodes " + cls_nodes, self.file_name) + retcode1=0 + if int(version) == 19 or int(version) == 21: + nodes_list=cls_nodes.split(" ") + for node in nodes_list: + cmd='''su - {0} -c "ssh -o BatchMode=yes -o ConnectTimeout=5 {0}@{1} echo ok 2>&1"'''.format(user,node) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + if retcode != 0: + retcode1=255 + else: + cls_nodes = cls_nodes.replace(" ",",") + cmd='''su - {0} -c "{1}/{2} comp admprv -n {3} -o user_equiv -sshonly -verbose"'''.format(user,gihome,sshscr,cls_nodes) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + retcode1=retcode + + return retcode1 + + def performsshsetup(self,user,gihome,sshscr,cls_nodes,version,password,counter,expect,script_dir): + """ + This functions set the ssh between cluster nodes + """ + self.ocommon.set_mask_str(password.strip()) + self.ocommon.log_info_message('''SSH setup in progress. 
Count set to {0}'''.format(counter),self.file_name)
+        if int(version) == 19 or int(version) == 21:
+           sshscr="setupSSH.expect"
+           cluster_nodes = cls_nodes.replace(","," ")
+           sshcmd='''su - {0} -c "{1} {2}/{3} {0} \\"{4}/oui/prov/resources/scripts\\" \\"{5}\\" \\"{6}\\""'''.format(user,expect,script_dir,sshscr,gihome,cluster_nodes,'HIDDEN_STRING')
+           sshcmd_output, sshcmd_error, sshcmd_retcode = self.ocommon.execute_cmd(sshcmd, None, None)
+           self.ocommon.check_os_err(sshcmd_output, sshcmd_error, sshcmd_retcode, False)
+        else:
+           cmd='''su - {0} -c "echo \"{4}\" | {1}/{2} comp admprv -n {3} -o user_equiv -fixup"'''.format(user,gihome,sshscr,cls_nodes,'HIDDEN_STRING')
+           output,error,retcode=self.ocommon.execute_cmd(cmd,None,None)
+           self.ocommon.check_os_err(output,error,retcode,None)
+
+        self.ocommon.unset_mask_str()
+
+
+    def setupsshusekey(self,user,ohome,ctype):
+        """
+        This function sets up SSH for the given user when the SKIP_SSH_SETUP flag is not set.
+        It uses the existing keys to set up SSH.
+        """
+        # Populate the known_hosts file
+        i=1
+
+        cluster_nodes=""
+        new_nodes=self.ocommon.get_cluster_nodes()
+        existing_cls_node=self.ocommon.get_existing_clu_nodes(None)
+        giuser,gihome,gibase,oinv=self.ocommon.get_gi_params()
+        oraversion=self.ocommon.get_rsp_version("INSTALL",None)
+        version = oraversion.split(".", 1)[0].strip()
+        sshscr=self.ora_env_dict["SSHSCR"] if self.ocommon.check_key("SSHSCR",self.ora_env_dict) else "bin/cluvfy"
+        if user == 'grid':
+            sshscr="runcluvfy.sh"
+        else:
+            sshscr="bin/cluvfy"
+        file='''{0}/{1}'''.format(gihome,sshscr)
+        if not self.ocommon.check_file(file,"local",None,None):
+            sshscr="runcluvfy.sh"
+        # node=exiting_cls_node.split(" ")[0]
+        if existing_cls_node is not None:
+            cluster_nodes= existing_cls_node.replace(","," ") + " " + new_nodes
+        else:
+            cluster_nodes=new_nodes
+
+        for node1 in cluster_nodes.split(" "):
+         for node in cluster_nodes.split(" "):
+            i=1
+            cmd='''su - {0} -c "ssh -o StrictHostKeyChecking=no -x -l {0} {3} \\"ssh-keygen -R {1};ssh -o StrictHostKeyChecking=no -x -l {0} {1} \\\"/bin/sh -c true\\\"\\""''' .format(user,node,ohome,node1)
+            output,error,retcode=self.ocommon.execute_cmd(cmd,None,None)
+            self.ocommon.check_os_err(output,error,retcode,None)
+            if int(retcode) != 0:
+               while (i < 5):
+                  self.ocommon.log_info_message('''SSH setup failed for the cmd {0}. 
Trying again and count is {1}'''.format(cmd,i),self.file_name) + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,None) + if (retcode == 0): + break + else: + time.sleep(5) + i=i+1 + + retcode=self.verifyssh(user,gihome,sshscr,new_nodes,version) + + def setupsshdirs(self,user,ohome,ctype): + """ + This function setup the ssh directories + """ + sshdir='''/home/{0}/.ssh'''.format(user) + privkey=self.ora_env_dict["SSH_PRIVATE_KEY"] + pubkey=self.ora_env_dict["SSH_PUBLIC_KEY"] + group="oinstall" + cmd1='''mkdir -p {0}'''.format(sshdir) + cmd2='''chmod 700 {0}'''.format(sshdir) + cmd3='''cat {0} > {1}/id_rsa'''.format(privkey,sshdir) + cmd4='''cat {0} > {1}/id_rsa.pub'''.format(pubkey,sshdir) + cmd5='''chmod 400 {0}/id_rsa'''.format(sshdir) + cmd6='''chmod 644 {0}/id_rsa.pub'''.format(sshdir) + cmd7='''chown -R {0}:{1} {2}'''.format(user,group,sshdir) + cmd8='''cat {0} > {1}/authorized_keys'''.format(pubkey,sshdir) + cmd9='''chmod 600 {0}/authorized_keys'''.format(sshdir) + cmd10='''chown -R {0}:{1} {2}/authorized_keys'''.format(user,group,sshdir) + for cmd in cmd1,cmd2,cmd3,cmd4,cmd5,cmd6,cmd7,cmd8,cmd9,cmd10: + output,error,retcode=self.ocommon.execute_cmd(cmd,None,None) + self.ocommon.check_os_err(output,error,retcode,False) diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/setupSSH.expect b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/setupSSH.expect new file mode 100755 index 0000000000..73178da636 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/scripts/setupSSH.expect @@ -0,0 +1,47 @@ +#!/usr/bin/expect -f +# LICENSE UPL 1.0 +# +# Copyright (c) 1982-2018 Oracle and/or its affiliates. All rights reserved. +# +# Since: January, 2018 +# Author: sanjay.singh@oracle.com, paramdeep.saini@oracle.com +# Description: Setup SSH between nodes +# +# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER. +# + +set username [lindex $argv 0]; +set script_loc [lindex $argv 1]; +set cluster_nodes [lindex $argv 2]; +set ssh_pass [lindex $argv 3]; + +set timeout 120 + +# Procedure to setup ssh from server +proc sshproc { ssh_pass } { + expect { + # Send password at 'Password' prompt and tell expect to continue(i.e. exp_continue) + -re "\[P|p]assword:" { exp_send "$ssh_pass\r" + exp_continue } + # Tell expect stay in this 'expect' block and for each character that SCP prints while doing the copy + # reset the timeout counter back to 0. + -re . 
{ exp_continue } + timeout { return 1 } + eof { return 0 } + } +} + +# Execute sshUserSetup.sh Script +set ssh_cmd "$script_loc/sshUserSetup.sh -user $username -hosts \"${cluster_nodes}\" -logfile /tmp/${username}_SetupSSH.log -advanced -exverify -noPromptPassphrase -confirm" + +#set ssh_cmd "$script_loc/sshUserSetup.sh -user $username -hosts \"${cluster_nodes}\" -logfile /tmp/${username}_SetupSSH.log -advanced -noPromptPassphrase -confirm" + +eval spawn $ssh_cmd +set ssh_results [sshproc $ssh_pass] + +if { $ssh_results == 0 } { + exit 0 +} + +# Error attempting SSH, so exit with non-zero status +exit 1 diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/setup_rac_host.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/setup_rac_host.sh new file mode 100755 index 0000000000..ea5be618ce --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/setup_rac_host.sh @@ -0,0 +1,736 @@ +#!/bin/bash + +NODEDIRS=0 +SLIMENV=0 +IGNOREOSVERSION=0 +validate_environment_variables() { + local podman_compose_file="$1" + # shellcheck disable=SC2207,SC2016 + local env_variables=($(grep -oP '\${\K[^}]*' "$podman_compose_file" | sort -u)) + local missing_variables=() + + for var in "${env_variables[@]}"; do + if [[ -z "${!var}" ]]; then + missing_variables+=("$var") + fi + done + + if [ ${#missing_variables[@]} -eq 0 ]; then + echo "All required environment variables are present and exported." + return 0 + else + echo "The following required environment variables from podman-compose.yml(or may be wrong podman-compose.yml?) are missing or not exported:" + printf '%s\n' "${missing_variables[@]}" + return 1 + fi +} +# Function to set up environment variables +setup_nfs_variables() { + export HEALTHCHECK_INTERVAL=60s + export HEALTHCHECK_TIMEOUT=120s + export HEALTHCHECK_RETRIES=240 + export RACNODE1_CONTAINER_NAME=racnodep1 + export RACNODE1_HOST_NAME=racnodep1 + export RACNODE1_PUBLIC_IP=10.0.20.170 + export RACNODE1_CRS_PRIVATE_IP1=192.168.17.170 + export RACNODE1_CRS_PRIVATE_IP2=192.168.18.170 + export INSTALL_NODE=racnodep1 + export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0 + export CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" + export SCAN_NAME=racnodepc1-scan + export CRS_ASM_DISCOVERY_STRING="/oradata" + export CRS_ASM_DEVICE_LIST="/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img" + export RACNODE2_CONTAINER_NAME=racnodep2 + export RACNODE2_HOST_NAME=racnodep2 + export RACNODE2_PUBLIC_IP=10.0.20.171 + export RACNODE2_CRS_PRIVATE_IP1=192.168.17.171 + export RACNODE2_CRS_PRIVATE_IP2=192.168.18.171 + export DNS_CONTAINER_NAME=rac-dnsserver + export DNS_HOST_NAME=racdns + export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest" + export RAC_NODE_NAME_PREFIXD="racnoded" + export RAC_NODE_NAME_PREFIXP="racnodep" + export DNS_DOMAIN=example.info + export PUBLIC_NETWORK_NAME="rac_pub1_nw" + export PUBLIC_NETWORK_SUBNET="10.0.20.0/24" + export PRIVATE1_NETWORK_NAME="rac_priv1_nw" + export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24" + export PRIVATE2_NETWORK_NAME="rac_priv2_nw" + export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24" + export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc + export KEY_SECRET_FILE=/opt/.secrets/key.pem + export DNS_PUBLIC_IP=10.0.20.25 + export DNS_PRIVATE1_IP=192.168.17.25 + export DNS_PRIVATE2_IP=192.168.18.25 + export CMAN_CONTAINER_NAME=racnodepc1-cman + export CMAN_HOST_NAME=racnodepc1-cman + export 
CMAN_IMAGE_NAME="localhost/oracle/client-cman:21.3.0"
+  export CMAN_PUBLIC_IP=10.0.20.15
+  export CMAN_PUBLIC_HOSTNAME="racnodepc1-cman"
+  export DB_HOSTDETAILS="HOST=racnodepc1-scan:RULE_ACT=accept,HOST=racnodep1:IP=10.0.20.170"
+  export STORAGE_CONTAINER_NAME="racnode-storage"
+  export STORAGE_HOST_NAME="racnode-storage"
+  export STORAGE_IMAGE_NAME="localhost/oracle/rac-storage-server:latest"
+  export ORACLE_DBNAME="ORCLCDB"
+  export STORAGE_PUBLIC_IP=10.0.20.80
+  export NFS_STORAGE_VOLUME="/scratch/stage/rac-storage/$ORACLE_DBNAME"
+  export DB_SERVICE=service:soepdb
+
+  if [ -f /etc/selinux/config ]; then
+    # Check SELinux state
+    selinux_state=$(grep -E '^SELINUX=' /etc/selinux/config | cut -d= -f2)
+
+    if [[ "$selinux_state" == "enforcing" || "$selinux_state" == "permissive" || "$selinux_state" == "targeted" ]]; then
+        echo "SELinux is enabled with state: $selinux_state. Proceeding with installation."
+    else
+        echo "SELinux is either disabled or in an unknown state: $selinux_state. Skipping installation."
+        echo "INFO: NFS Environment variables setup completed successfully."
+        return 0
+    fi
+  else
+    echo "/etc/selinux/config not found. Skipping SELinux check."
+    echo "INFO: NFS Environment variables setup completed successfully."
+    return 0
+  fi
+
+
+# Create the rac-storage.te policy source file using a heredoc
+cat > /var/opt/rac-storage.te <<EOF
+module rac-storage 1.0;
+
+require {
+    type container_init_t;
+    type hugetlbfs_t;
+    type nfsd_fs_t;
+    type rpc_pipefs_t;
+    type default_t;
+    type kernel_t;
+    class filesystem mount;
+    class filesystem unmount;
+    class file { read write open };
+    class dir { read watch };
+    class bpf { map_create map_read map_write };
+    class system module_request;
+    class fifo_file { open read write };
+}
+
+#============= container_init_t ==============
+allow container_init_t hugetlbfs_t:filesystem mount;
+allow container_init_t nfsd_fs_t:filesystem mount;
+allow container_init_t rpc_pipefs_t:filesystem mount;
+allow container_init_t nfsd_fs_t:file { read write open };
+allow container_init_t nfsd_fs_t:dir { read watch };
+allow container_init_t rpc_pipefs_t:dir { read watch };
+allow container_init_t rpc_pipefs_t:fifo_file { open read write };
+allow container_init_t rpc_pipefs_t:filesystem unmount;
+allow container_init_t self:bpf map_create;
+allow container_init_t self:bpf { map_read map_write };
+allow container_init_t default_t:dir read;
+allow container_init_t kernel_t:system module_request;
+EOF
+
+  # Change directory to /var/opt
+  cd /var/opt || { echo "Failed to change directory to /var/opt. Exiting."; exit 1; }
+
+  # Make the policy module
+  make -f /usr/share/selinux/devel/Makefile rac-storage.pp || { echo "Failed to make rac-storage.pp. Exiting."; exit 1; }
+
+  # Install the policy module
+  semodule -i rac-storage.pp || { echo "Failed to install rac-storage.pp. Exiting."; exit 1; }
+
+  # List installed modules and grep for rac-storage
+  semodule -l | grep rac-storage
+
+  echo "INFO: NFS Environment variables setup completed successfully."
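+  # Note: if needed, the installed policy module can be backed out later with 'semodule -r rac-storage'.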
+ return 0 +} +setup_blockdevices_variables(){ + export HEALTHCHECK_INTERVAL=60s + export HEALTHCHECK_TIMEOUT=120s + export HEALTHCHECK_RETRIES=240 + export RACNODE1_CONTAINER_NAME=racnodep1 + export RACNODE1_HOST_NAME=racnodep1 + export RACNODE1_PUBLIC_IP=10.0.20.170 + export RACNODE1_CRS_PRIVATE_IP1=192.168.17.170 + export RACNODE1_CRS_PRIVATE_IP2=192.168.18.170 + export INSTALL_NODE=racnodep1 + export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0 + export CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" + export SCAN_NAME=racnodepc1-scan + export ASM_DEVICE1="/dev/asm-disk1" + export ASM_DEVICE2="/dev/asm-disk2" + export CRS_ASM_DEVICE_LIST="${ASM_DEVICE1},${ASM_DEVICE2}" + export ASM_DISK1="/dev/oracleoci/oraclevdd" + export ASM_DISK2="/dev/oracleoci/oraclevde" + export CRS_ASM_DISCOVERY_STRING="/dev/asm-disk*" + export RACNODE2_CONTAINER_NAME=racnodep2 + export RACNODE2_HOST_NAME=racnodep2 + export RACNODE2_PUBLIC_IP=10.0.20.171 + export RACNODE2_CRS_PRIVATE_IP1=192.168.17.171 + export RACNODE2_CRS_PRIVATE_IP2=192.168.18.171 + export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc + export KEY_SECRET_FILE=/opt/.secrets/key.pem + export DNS_CONTAINER_NAME=rac-dnsserver + export DNS_HOST_NAME=racdns + export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest" + export RAC_NODE_NAME_PREFIXD="racnoded" + export RAC_NODE_NAME_PREFIXP="racnodep" + export DNS_DOMAIN=example.info + export PUBLIC_NETWORK_NAME="rac_pub1_nw" + export PUBLIC_NETWORK_SUBNET="10.0.20.0/24" + export PRIVATE1_NETWORK_NAME="rac_priv1_nw" + export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24" + export PRIVATE2_NETWORK_NAME="rac_priv2_nw" + export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24" + export DNS_PUBLIC_IP=10.0.20.25 + export DNS_PRIVATE1_IP=192.168.17.25 + export DNS_PRIVATE2_IP=192.168.18.25 + export CMAN_CONTAINER_NAME=racnodepc1-cman + export CMAN_HOST_NAME=racnodepc1-cman + export CMAN_IMAGE_NAME="localhost/oracle/client-cman:21.3.0" + export CMAN_PUBLIC_IP=10.0.20.15 + export CMAN_PUBLIC_HOSTNAME="racnodepc1-cman" + export DB_HOSTDETAILS="HOST=racnodepc1-scan:RULE_ACT=accept,HOST=racnodep1:IP=10.0.20.170" + export DB_SERVICE=service:soepdb + echo "INFO: BlockDevices Environment variables setup completed successully." + return 0 +} + + +# Function to set up DNS Podman container +setup_dns_container() { + podman-compose up -d ${DNS_CONTAINER_NAME} + success_message_line="DNS Server IS READY TO USE" + last_lines="" + start_time=$(date +%s) + + # Monitor logs until success message is found or timeout occurs + while true; do + current_time=$(date +%s) + elapsed_time=$((current_time - start_time)) + + if [ $elapsed_time -ge 600 ]; then + # If 60 minutes elapsed, print a timeout message and exit + echo "ERROR: Success message not found in DNS Container logs after 10 minutes." >&2 + break + fi + + # Read the last 10 lines from the logs + last_lines=$(podman logs --tail 5 "${DNS_CONTAINER_NAME}" 2>&1) + + # Check if the success message is present in the output + if echo "$last_lines" | grep -q "$success_message_line"; then + echo "###########################################" + echo "INFO: DNS Container is setup successfully." 
+ echo "###########################################" + break + fi + + # Print the last 10 lines from the logs + echo "$last_lines" >&2 + + # Sleep for a short duration before checking logs again + sleep 15 + done + return 0 +} + +setup_rac_container() { + podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE1_CONTAINER_NAME} + podman-compose stop ${RACNODE1_CONTAINER_NAME} + + podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE2_CONTAINER_NAME} + podman-compose stop ${RACNODE2_CONTAINER_NAME} + + podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} + podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} + podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} + + podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} + podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} + podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} + + podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE1_PUBLIC_IP} ${RACNODE1_CONTAINER_NAME} + podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP1} ${RACNODE1_CONTAINER_NAME} + podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP2} ${RACNODE1_CONTAINER_NAME} + + podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE2_PUBLIC_IP} ${RACNODE2_CONTAINER_NAME} + podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP1} ${RACNODE2_CONTAINER_NAME} + podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP2} ${RACNODE2_CONTAINER_NAME} + + podman-compose start ${RACNODE1_CONTAINER_NAME} + podman-compose start ${RACNODE2_CONTAINER_NAME} + + RAC_LOG="/tmp/orod/oracle_rac_setup.log" + success_message_line="ORACLE RAC DATABASE IS READY TO USE" + last_lines="" + start_time=$(date +%s) + + # Monitor logs until success message is found or timeout occurs + while true; do + current_time=$(date +%s) + elapsed_time=$((current_time - start_time)) + + if [ $elapsed_time -ge 3600 ]; then + # If 60 minutes elapsed, print a timeout message and exit + echo "ERROR: Success message not found in the logs after 60 minutes." >&2 + break + fi + + # Read the last 10 lines from the logs + last_lines=$(podman exec ${RACNODE1_CONTAINER_NAME} /bin/bash -c "tail -n 10 $RAC_LOG" 2>&1) + + # Check if the success message is present in the output + if echo "$last_lines" | grep -q "$success_message_line"; then + echo "###############################################" + echo "INFO: Oracle RAC Containers setup successfully." 
+ echo "###############################################" + break + fi + + # Print the last 10 lines from the logs + echo "$last_lines" >&2 + + # Sleep for a short duration before checking logs again + sleep 15 + done + return 0 + +} + +setup_storage_container() { + export ORACLE_DBNAME=ORCLCDB + mkdir -p $NFS_STORAGE_VOLUME + rm -rf $NFS_STORAGE_VOLUME/asm_disk0* + podman rm -f ${STORAGE_CONTAINER_NAME} + podman-compose --podman-run-args="-t -i --systemd=always" up -d ${STORAGE_CONTAINER_NAME} + STOR_LOG="/tmp/storage_setup.log" + export_message_line1="Export list for racnode-storage:" + export_message_line2="/oradata *" + last_lines="" + start_time=$(date +%s) + # Monitor logs until export message is found or timeout occurs + while true; do + current_time=$(date +%s) + elapsed_time=$((current_time - start_time)) + + if [ $elapsed_time -ge 1800 ]; then + # If 30 minutes elapsed, print a timeout message and exit + echo "ERROR: Successful message not found in the storage container logs after 30 minutes." >&2 + break + fi + # Read the last 10 lines from the logs + last_lines=$(podman exec ${STORAGE_CONTAINER_NAME} tail -n 10 "$STOR_LOG" 2>&1) + # Check if both lines of the export message are present in the output + if echo "$last_lines" | grep -q "$export_message_line1" && echo "$last_lines" | grep -q "$export_message_line2"; then + echo "############################################################" + echo "INFO: NFS Storage Container exporting /oradata successfully." + echo "############################################################" + break + fi + # Print the last 10 lines from the logs + echo "$last_lines" >&2 + # Sleep for a short duration before checking logs again + sleep 15 + done + podman volume inspect racstorage &> /dev/null && podman volume rm racstorage + sleep 5 + podman volume create --driver local \ + --opt type=nfs \ + --opt o=addr=$STORAGE_PUBLIC_IP,rw,bg,hard,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 \ + --opt device=$STORAGE_PUBLIC_IP:/oradata \ + racstorage + return 0 +} + + +setup_cman_container() { + podman-compose up -d ${CMAN_CONTAINER_NAME} + success_message_line="CONNECTION MANAGER IS READY TO USE" + last_lines="" + start_time=$(date +%s) + + # Monitor logs until success message is found or timeout occurs + while true; do + current_time=$(date +%s) + elapsed_time=$((current_time - start_time)) + + if [ $elapsed_time -ge 600 ]; then + # If 60 minutes elapsed, print a timeout message and exit + echo "ERROR: Success message not found in CMAN Container logs after 10 minutes." >&2 + break + fi + + # Read the last 10 lines from the logs + last_lines=$(podman logs --tail 5 "${CMAN_CONTAINER_NAME}" 2>&1) + + # Check if the success message is present in the output + if echo "$last_lines" | grep -q "$success_message_line"; then + echo "###########################################" + echo "INFO: CMAN Container is setup successfully." 
+ echo "###########################################" + break + fi + + # Print the last 10 lines from the logs + echo "$last_lines" >&2 + + # Sleep for a short duration before checking logs again + sleep 15 + done + return 0 +} + +setup_rac_networks() { + podman network create --driver=bridge --subnet=${PUBLIC_NETWORK_SUBNET} ${PUBLIC_NETWORK_NAME} + podman network create --driver=bridge --subnet=${PRIVATE1_NETWORK_SUBNET} ${PRIVATE1_NETWORK_NAME} --disable-dns --internal + podman network create --driver=bridge --subnet=${PRIVATE2_NETWORK_SUBNET} ${PRIVATE2_NETWORK_NAME} --disable-dns --internal + echo "INFO: Oracle RAC Container Networks setup successfully" + return 0 +} + + +function DisplayUsage(){ + echo "Usage : + $0 [<-slimenv> <-nodedirs=dir1,dir2,...,dirn>] [-ignoreOSVersion] [-blockdevices-env|-cleanup|-dns|-networks|-nfs-env|-prepare-rac-env|-rac|-storage] [-help]" + return 0 +} + +# Function to check if a command is available +check_command() { + if ! command -v "$1" &>/dev/null; then + return 1 + fi +} + +# Function to install Podman +install_podman() { + if ! check_command podman; then + echo "INFO: Podman is not installed. Installing..." + sudo dnf install -y podman + else + echo "INFO: Podman is already installed." + fi + return 0 +} + +# Function to install Podman-Compose +install_podman_compose() { + if ! check_command podman-compose; then + echo "INFO: Podman-Compose is not installed. Installing..." + # Enable EPEL repository for Oracle Linux 8 + sudo dnf config-manager --enable ol8_developer_EPEL + # Install Podman-Compose + sudo dnf install -y podman-compose + else + echo "INFO: Podman-Compose is already installed." + fi + return 0 +} + +function setupSELinuxContext(){ + + dnf install selinux-policy-devel -y + [ -f /var/opt/rac-podman.te ] && cp /var/opt/rac-podman.te /var/opt/rac-podman.te.ORG + [ -f /var/opt/rac-podman.te ] && rm -rf /var/opt/rac-podman.te + cat > /var/opt/rac-podman.te < /dev/null; then + echo "INFO: Deleting existing secret $secret_name..." + # shellcheck disable=SC2086 + podman secret rm $secret_name + fi + + # Create the new secret + echo "INFO: Creating new secret $secret_name..." + # shellcheck disable=SC2086 + podman secret create $secret_name $file_path +} + +create_secrets() { + # Check if RAC_SECRET environment variable is defined + if [ -z "$RAC_SECRET" ]; then + echo "ERROR: RAC_SECRET environment variable is not defined." + return 1 + fi + mkdir -p /opt/.secrets/ + # shellcheck disable=SC2086 + echo $RAC_SECRET > /opt/.secrets/pwdfile.txt + # shellcheck disable=SC2164 + cd /opt/.secrets + openssl genrsa -out key.pem + openssl rsa -in key.pem -out key.pub -pubout + openssl pkeyutl -in pwdfile.txt -out pwdfile.enc -pubin -inkey key.pub -encrypt + rm -rf /opt/.secrets/pwdfile.txt + # Delete and create secrets + delete_and_create_secret "pwdsecret" "/opt/.secrets/pwdfile.enc" + delete_and_create_secret "keysecret" "/opt/.secrets/key.pem" + echo "INFO: Secrets created." + # shellcheck disable=SC2164 + cd - + return 0 +} + +check_system_resources() { + # Check swap space in GB + swap_space=$(free -g | grep Swap | awk '{print $2}') + if [ "$swap_space" -ge 16 ]; then + echo "INFO: Swap space is sufficient ($swap_space GB)." + else + echo "ERROR: Swap space is insufficient ($swap_space GB). Minimum 32 GB required." + return 1 + fi + + # Check physical memory (RAM) in GB + total_memory=$(free -g | grep Mem | awk '{print $2}') + if [ "$total_memory" -ge 16 ]; then + echo "INFO: Physical memory is sufficient ($total_memory GB)." 
+ else + echo "ERROR: Physical memory is insufficient ($total_memory GB). Minimum 32 GB required." + return 1 + fi + + # Both swap space and physical memory meet the requirements + return 0 +} + +setup_host_prepreq(){ + kernelVersionSupported=1 + # shellcheck disable=SC2317 + # shellcheck disable=SC2006 + OSVersion=`grep "Oracle Linux Server release 8" /etc/oracle-release` + OSstatus=$? + if [ ${OSstatus} -eq 0 ]; then + OSVersionSupported=1 + else + OSVersionSupported=0 + fi + + echo "INFO: Setting Podman env on OS [${OSVersion}]" + # shellcheck disable=SC2006,SC2086 + kernelVersion=`uname -r | cut -d. -f1,2` + # shellcheck disable=SC2006,SC2086 + majorKernelVersion=`echo ${kernelVersion} | cut -d. -f1` + # shellcheck disable=SC2006,SC2086 + minorKernelVersion=`echo ${kernelVersion} | cut -d. -f2` + + echo "Running on Kernel [${kernelVersion}]" +# shellcheck disable=SC2006,SC2086 + if [ ${majorKernelVersion} -lt 5 ]; then + kernelVersionSupported=0 + fi +# shellcheck disable=SC2086 + if [ $majorKernelVersion -eq 5 ]; then + # shellcheck disable=SC2086 + if [ ${minorKernelVersion} -lt 14 ]; then + kernelVersionSupported=0 + fi + fi +# shellcheck disable=SC2166 + if [ $OSVersionSupported -eq 0 -o $kernelVersionSupported -eq 0 ]; then + if [ ${IGNOREOSVERSION} == "0" ]; then + echo "ERROR: OSVersion=${OSVersion}.. KernelVersion=${kernelVersion}. Exiting." + return 1 + fi + fi + + echo "Setting kernel parameters in /etc/sysctl.conf" + sed -i '/fs.aio-max-nr=/d' /etc/sysctl.conf + sed -i '/fs.file-max=/d' /etc/sysctl.conf + sed -i '/net.core.rmem_max=/d' /etc/sysctl.conf + sed -i '/net.core.rmem_default=/d' /etc/sysctl.conf + sed -i '/net.core.wmem_max=/d' /etc/sysctl.conf + sed -i '/net.core.wmem_default=/d' /etc/sysctl.conf + sed -i '/vm.nr_hugepages=/d' /etc/sysctl.conf + + echo -e "fs.aio-max-nr=1048576\nfs.file-max=6815744\nnet.core.rmem_max=4194304\nnet.core.rmem_default=262144\nnet.core.wmem_max=1048576\nnet.core.wmem_default=262144\nvm.nr_hugepages=16384" >> /etc/sysctl.conf + + if [ ${SLIMENV} -eq 1 ]; then + echo "INFO: Slim environment specified" + if [ ${NODEDIRS} -eq 0 ]; then + echo "ERROR: Missing NodeDirs for SlimEnv. Exiting" + DisplayUsage + return 1 + fi + # shellcheck disable=SC2006,SC2001,SC2086 + nodeHomeDirs=`echo ${node_dirs} | sed -e 's/.*?=\(.*\)/\1/g'` + # shellcheck disable=SC2162 + IFS=',' read -a nodeHomeValues <<< "${nodeHomeDirs}" + for nodeHome in "${nodeHomeValues[@]}" + do + echo "INFO: Creating directory $nodeHome" + # shellcheck disable=SC2086 + mkdir -p $nodeHome + done + fi + + if [ ${OSVersionSupported} -eq 1 ]; then + echo "INFO: Starting chronyd service" + systemctl start chronyd + fi +# shellcheck disable=SC2002 + cat /sys/devices/system/clocksource/clocksource0/available_clocksource | grep tsc + # shellcheck disable=SC2181 + if [ $? -eq 0 ]; then + echo "INFO: Setting current clocksource" + echo "tsc">/sys/devices/system/clocksource/clocksource0/current_clocksource + cat /sys/devices/system/clocksource/clocksource0/current_clocksource + + sed -i -e 's/\(GRUB_CMDLINE_LINUX=.*\)"/\1 tsc"/g' ./grub + else + echo "INFO: clock source [tsc] not available on the system" + fi + + df -h /dev/shm + + # shellcheck disable=SC2006 + freeSHM=`df -h /dev/shm | tail -n +2 | awk '{ print $4 }'` + echo "INFO: Available shm = [${freeSHM}]" + # shellcheck disable=SC2086,SC2060,SC2006 + freeSHM=`echo ${freeSHM} | tr -d [:alpha:]` + # shellcheck disable=SC2129,SC2086 + if [ ${freeSHM} -lt 4 ]; then + echo "ERROR: Low free space [${freeSHM}] in /dev/shm. 
Need at least 4GB space. Exiting." + DisplayUsage + return 1 + fi + install_podman + install_podman_compose + # shellcheck disable=SC2006 + selinux_state=$(grep -E '^SELINUX=' /etc/selinux/config | cut -d= -f2) + if [[ "$selinux_state" == "enforcing" || "$selinux_state" == "permissive" || "$selinux_state" == "targeted" ]]; then + echo "INFO: SELinux Enabled. Setting up SELinux Context" + setupSELinuxContext + else + echo "INFO: SELinux Disabled." + fi + create_secrets || return 1 + check_system_resources || return 1 + echo "INFO: Finished setting up the pre-requisites for Podman-Host" + return 0 +} + +cleanup_env(){ + podman rm -f ${DNS_CONTAINER_NAME} + podman rm -f ${STORAGE_CONTAINER_NAME} + podman rm -f $RACNODE1_CONTAINER_NAME + podman rm -f $RACNODE2_CONTAINER_NAME + podman rm -f ${CMAN_CONTAINER_NAME} + podman network inspect $PUBLIC_NETWORK_NAME &> /dev/null && podman network rm $PUBLIC_NETWORK_NAME + podman network inspect $PRIVATE1_NETWORK_NAME &> /dev/null && podman network rm $PRIVATE1_NETWORK_NAME + podman network inspect $PRIVATE2_NETWORK_NAME &> /dev/null && podman network rm $PRIVATE2_NETWORK_NAME + podman volume inspect racstorage &> /dev/null && podman volume rm racstorage + echo "INFO: Oracle Container RAC Environment Cleanup Successfully" + return 0 +} + +while [ $# -gt 0 ]; do + case "$1" in + -slimenv) + SLIMENV=1 + ;; + -nodedirs=*) + NODEDIRS=1 + node_dirs="${1#*=}" + ;; + -ignoreOSVersion) + IGNOREOSVERSION=1 + ;; + -help|-h) + DisplayUsage + ;; + -nfs-env) + setup_nfs_variables || echo "ERROR: Oracle RAC Environment Variables for NFS devices setup has failed." + ;; + -blockdevices-env) + setup_blockdevices_variables || echo "ERROR: Oracle RAC Environment variables for Block devices setup has failed." + ;; + -dns) + validate_environment_variables podman-compose.yml || exit 1 + setup_dns_container || echo "ERROR: Oracle RAC DNS Container Setup has failed." + ;; + -rac) + validate_environment_variables podman-compose.yml || exit 1 + setup_rac_container || echo "ERROR: Oracle RAC Container Setup has failed." + ;; + -storage) + validate_environment_variables podman-compose.yml || exit 1 + setup_storage_container || echo "ERROR: Oracle RAC Storage Container Setup has failed." + ;; + -cman) + validate_environment_variables podman-compose.yml || exit 1 + setup_cman_container || echo "ERROR: Oracle RAC Connection Manager Container Setup has failed." + ;; + -cleanup) + validate_environment_variables podman-compose.yml || exit 1 + cleanup_env || echo "ERROR: Oracle RAC Environment Cleanup Setup has failed." + ;; + -networks) + validate_environment_variables podman-compose.yml || exit 1 + setup_rac_networks || echo "ERROR: Oracle RAC Container Networks setup has failed." + ;; + -prepare-rac-env) + setup_host_prepreq || echo "ERROR: Oracle RAC preparation setups have failed." 
+ ;; + *) + printf "***************************\n" + # shellcheck disable=SC2059 + printf "* Error: Invalid argument [$1] specified.*\n" + printf "***************************\n" + DisplayUsage + ;; + esac + shift +done \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/CLEANUP.md b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/CLEANUP.md new file mode 100644 index 0000000000..bcd25a5d76 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/CLEANUP.md @@ -0,0 +1,37 @@ +# Cleanup Oracle RAC Container Environment +Execute below commands to cleanup Oracle RAC Container Environment- +```bash +podman inspect rac-dnsserver &> /dev/null && podman rm -f rac-dnsserver +podman inspect racnode-storage &> /dev/null && podman rm -f racnode-storage +podman inspect racnodep1 &> /dev/null && podman rm -f racnodep1 +podman inspect racnodep2 &> /dev/null && podman rm -f racnodep2 +podman inspect racnodepc1-cman &> /dev/null && podman rm -f racnodepc1-cman +podman network inspect rac_pub1_nw &> /dev/null && podman network rm rac_pub1_nw +podman network inspect rac_priv1_nw &> /dev/null && podman network rm rac_priv1_nw +podman network inspect rac_priv2_nw &> /dev/null && podman network rm rac_priv2_nw +podman volume inspect racstorage &> /dev/null && podman volume rm racstorage +``` + +If you have setup using Block Devices, then cleanup ASM Disks- +```bash +dd if=/dev/zero of=/dev/oracleoci/oraclevdd bs=8k count=10000 +dd if=/dev/zero of=/dev/oracleoci/oraclevde bs=8k count=10000 +``` +If you have setup using Oracle Slim Image, then cleanup data folders- +```bash +rm -rf /scratch/rac/cluster01/node1/* +rm -rf /scratch/rac/cluster01/node2/* +``` + +If you have setup using User Defined Response files, then cleanup response files- +```bash +rm -rf /scratch/common_scripts/podman/rac/* +``` + +## License + +All scripts and files hosted in this repository which are required to build the container images are, unless otherwise noted, released under UPL 1.0 license. + +## Copyright + +Copyright (c) 2014-2024 Oracle and/or its affiliates. \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/CONNECTING.md b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/CONNECTING.md new file mode 100644 index 0000000000..f1f976720b --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/CONNECTING.md @@ -0,0 +1,191 @@ +# Connecting to an Oracle RAC Database +Follow this document to validate and connect to Oracle RAC Container Database. + +## Using this documentation +- [Connecting to an Oracle RAC Database](#connecting-to-an-oracle-rac-database) + - [Using this documentation](#using-this-documentation) + - [Validating Oracle RAC Containers](#validating-oracle-rac-containers) + - [Validating Oracle Grid Infrastructure](#validating-oracle-grid-infrastructure) + - [Validating Oracle RAC Database](#validating-oracle-rac-database) + - [Debugging Oracle RAC Containers](#debugging-oracle-rac-containers) + - [Client Connection](#client-connection) + - [License](#license) + - [Copyright](#copyright) + +## Validating Oracle RAC Containers +First Validate if Container is healthy or not by running- +```bash +podman ps -a + +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +598385416fd7 localhost/oracle/rac-dnsserver:latest /bin/sh -c exec $... 
55 minutes ago Up 55 minutes (healthy) rac-dnsserver +835e3d113898 localhost/oracle/rac-storage-server:latest 55 minutes ago Up 55 minutes (healthy) racnode-storage +9ba7bbee9095 localhost/oracle/database-rac:21.3.0 52 minutes ago Up 52 minutes (healthy) racnodep1 +ebbf520b0c95 localhost/oracle/database-rac:21.3.0 52 minutes ago Up 52 minutes (healthy) racnodep2 +36df843594d9 localhost/oracle/client-cman:21.3.0 /bin/sh -c exec $... 12 minutes ago Up 12 minutes (healthy) 0.0.0.0:1521->1521/tcp racnodepc1-cman +``` + +Look for `(healthy)` next to container names under `STATUS` section. + +To connect to the container execute following command: +```bash +podman exec -i -t racnodep1 /bin/bash +``` +## Validating Oracle Grid Infrastructure +Validate if Oracle Grid is up and running from within Container- +```bash +su - grid +#Verify the status of Oracle Clusterware stack: +[grid@racnodep1 ~]$ crsctl check cluster -all +************************************************************** +racnodep1: +CRS-4537: Cluster Ready Services is online +CRS-4529: Cluster Synchronization Services is online +CRS-4533: Event Manager is online +************************************************************** +racnodep2: +CRS-4537: Cluster Ready Services is online +CRS-4529: Cluster Synchronization Services is online +CRS-4533: Event Manager is online +************************************************************** + +[grid@racnodep1 u01]$ crsctl check crs +CRS-4638: Oracle High Availability Services is online +CRS-4537: Cluster Ready Services is online +CRS-4529: Cluster Synchronization Services is online +CRS-4533: Event Manager is online + +[grid@racnodep1 u01]$ crsctl stat res -t +-------------------------------------------------------------------------------- +Name Target State Server State details +-------------------------------------------------------------------------------- +Local Resources +-------------------------------------------------------------------------------- +ora.LISTENER.lsnr + ONLINE ONLINE racnodep1 STABLE + ONLINE ONLINE racnodep2 STABLE +ora.chad + ONLINE ONLINE racnodep1 STABLE + ONLINE ONLINE racnodep2 STABLE +ora.helper + OFFLINE OFFLINE racnodep1 STABLE + OFFLINE OFFLINE racnodep2 STABLE +ora.net1.network + ONLINE ONLINE racnodep1 STABLE + ONLINE ONLINE racnodep2 STABLE +ora.ons + ONLINE ONLINE racnodep1 STABLE + ONLINE ONLINE racnodep2 STABLE +-------------------------------------------------------------------------------- +Cluster Resources +-------------------------------------------------------------------------------- +ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup) + 1 ONLINE ONLINE racnodep1 STABLE + 2 ONLINE ONLINE racnodep2 STABLE +ora.ASMNET2LSNR_ASM.lsnr(ora.asmgroup) + 1 ONLINE ONLINE racnodep1 STABLE + 2 ONLINE ONLINE racnodep2 STABLE +ora.DATA.dg(ora.asmgroup) + 1 ONLINE ONLINE racnodep1 STABLE + 2 ONLINE ONLINE racnodep2 STABLE +ora.LISTENER_SCAN1.lsnr + 1 ONLINE ONLINE racnodep1 STABLE +ora.LISTENER_SCAN2.lsnr + 1 ONLINE ONLINE racnodep1 STABLE +ora.LISTENER_SCAN3.lsnr + 1 ONLINE ONLINE racnodep2 STABLE +ora.asm(ora.asmgroup) + 1 ONLINE ONLINE racnodep1 Started,STABLE + 2 ONLINE ONLINE racnodep2 Started,STABLE +ora.asmnet1.asmnetwork(ora.asmgroup) + 1 ONLINE ONLINE racnodep1 STABLE + 2 ONLINE ONLINE racnodep2 STABLE +ora.asmnet2.asmnetwork(ora.asmgroup) + 1 ONLINE ONLINE racnodep1 STABLE + 2 ONLINE ONLINE racnodep2 STABLE +ora.cdp1.cdp + 1 ONLINE ONLINE racnodep1 STABLE +ora.cdp2.cdp + 1 ONLINE ONLINE racnodep1 STABLE +ora.cdp3.cdp + 1 ONLINE ONLINE racnodep2 STABLE +ora.cvu + 1 
ONLINE ONLINE racnodep1 STABLE +ora.orclcdb.db + 1 ONLINE ONLINE racnodep1 Open,HOME=/u01/app/o + racle/product/23ai/db + home_1,STABLE + 2 ONLINE ONLINE racnodep2 Open,HOME=/u01/app/o + racle/product/23ai/db + home_1,STABLE +ora.orclcdb.orclpdb.pdb + 1 ONLINE ONLINE racnodep1 READ WRITE,STABLE + 2 ONLINE ONLINE racnodep2 READ WRITE,STABLE +ora.orclcdb.soepdb.svc + 1 ONLINE ONLINE racnodep1 STABLE + 2 ONLINE ONLINE racnodep2 STABLE +ora.racnodep1.vip + 1 ONLINE ONLINE racnodep1 STABLE +ora.racnodep2.vip + 1 ONLINE ONLINE racnodep2 STABLE +ora.rhpserver + 1 OFFLINE OFFLINE STABLE +ora.scan1.vip + 1 ONLINE ONLINE racnodep1 STABLE +ora.scan2.vip + 1 ONLINE ONLINE racnodep1 STABLE +ora.scan3.vip + 1 ONLINE ONLINE racnodep2 STABLE +-------------------------------------------------------------------------------- + +/u01/app/21c/grid/bin/olsnodes -n +racnodep1 1 +racnodep2 2 +``` +## Validating Oracle RAC Database +Validate Oracle RAC Database from within Container- +```bash +su - oracle + +#Confirm the status of Oracle Database instances: +[oracle@racnodep1 ~]$ srvctl status database -d ORCLCDB +Instance ORCLCDB1 is running on node racnodep1 +Instance ORCLCDB2 is running on node racnodep2 + +# Validate network configuration and connectivity: +[oracle@racnodep1 ~]$ srvctl config scan +SCAN name: racnodepc1-scan, Network: 1 +Subnet IPv4: 10.0.20.0/255.255.255.0/eth0, static +Subnet IPv6: +SCAN 1 IPv4 VIP: 10.0.20.237 +SCAN VIP is enabled. +SCAN 2 IPv4 VIP: 10.0.20.238 +SCAN VIP is enabled. +SCAN 3 IPv4 VIP: 10.0.20.236 +SCAN VIP is enabled. +``` + +## Debugging Oracle RAC Containers +If the install fails for any reason, log in to container using the above command and check `/tmp/orod/oracle_rac_setup.log`. You can also review the Grid Infrastructure logs located at `$GRID_BASE/diag/crs` and check for failure logs. If the failure occurred during the database creation then check the database logs. + + +## Client Connection +* If you are using the podman network created using MACVLAN driver, and you have configured DNS appropriately, then you can connect using the public Single Client Access (SCAN) listener directly from any external client. To connect with the SCAN, use the following connection string, where `` is the SCAN name for the database, and `` is the database system identifier: + + ```bash + system/@//:1521/ + ``` + +* If you are using a connection manager and exposed the port 1521 on the host, then connect from an external client using the following connection string, where `` is the host container, and `` is the database system identifier: + + ```bash + system/@//:1521/ + ``` +* If you are using bridge driver and not using connection manager, you need to connect application to the same bridge network which you are using for Oracle RAC. +## License + +All scripts and files hosted in this repository which are required to build the container images are, unless otherwise noted, released under UPL 1.0 license. + +## Copyright + +Copyright (c) 2014-2024 Oracle and/or its affiliates. 
\ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/DELETION.md b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/DELETION.md new file mode 100644 index 0000000000..52f7242169 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/DELETION.md @@ -0,0 +1,31 @@ +# Deleting a Node from Existing RAC on Container Cluster +First identify the node you want to remove from RAC Container Cluster, then login to container and execute below- +```bash +cd /opt/scripts/startup/scripts/ +python3 main.py --delracnode="del_rachome=true;del_gridnode=true" +``` +E.g In this example we will delete racnodep3 from a cluster of 3 nodes viz. racnodep1,racnodep2, racnodep3. +```bash +podman exec -it racnodep3 bash +cd /opt/scripts/startup/scripts/ +python3 main.py --delracnode="del_rachome=true;del_gridnode=true" +``` +Validate racnodep3 is deleted successfully from Oracle RAC on Container Cluster - +```bash +podman exec -it racnodep1 bash +[root@racnodep1 bin]# /u01/app/23.3.0/grid/bin/olsnodes -n +racnodep1 1 +racnodep2 2 +``` +Now racnodep3 container can be removed by running command- +```bash +podman rm -f racnodep3 +``` + +## License + +All scripts and files hosted in this repository which are required to build the container images are, unless otherwise noted, released under UPL 1.0 license. + +## Copyright + +Copyright (c) 2014-2024 Oracle and/or its affiliates. \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/ENVIRONMENTVARIABLES.md b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/ENVIRONMENTVARIABLES.md new file mode 100644 index 0000000000..723f0057f4 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/ENVIRONMENTVARIABLES.md @@ -0,0 +1,55 @@ +# Environment Variables Explained for Oracle RAC on Podman + +This section provides information about the environment variables that can be used when creating Oracle RAC on Containers. + +| Environment Variable | Mandatory/Optional | Usage | Description | +|--------------------------|---------------------|------------|--------------------------------------------------------------| +| DNS_SERVERS | Mandatory | All | Specify the comma-separated list of DNS server IP addresses where both Oracle RAC nodes are resolved. | +| OP_TYPE | Mandatory | All | Specify the operation type. It can accept setuprac/setupgrid/addgridnode/racaddnode/setupracstandby. | +| CRS_NODES | Mandatory | All | Specify the CRS nodes in the format pubhost:pubhost1,viphost:viphost1;pubhost:pubhost2,viphost:viphost2. You can add as many hosts separated by semicolon. publhost and viphost are separated by comma. | +| SCAN_NAME | Mandatory | All | Specify the SCAN name. | +| CRS_ASM_DEVICE_LIST | Mandatory | All | Specify the ASM disk lists. | +| PUBLIC_HOSTS_DOMAIN | Optional | All | Specify public domain where RAC Containers are resolving to. | +| CRS_ASM_DISCOVERY_STRING | Optional | All | Specify the discovery string for ASM. | +| ORACLE_SID | Optional | All | Default value set to ORCLCDB. | +| ORACLE_PDB | Optional | All | Default value set to ORCLPDB. | +| ORACLE_CHARACTERSET | Optional | All | Default value set to AL32UTF8. | +| PWD_KEY | Mandatory | All | Pass the podman secret name for the key used while generating podman secrets. Default set to keysecret. | +| DB_PWD_FILE | Mandatory | All | Pass the podman secret name for the Oracle RAC Database to be used while generating podman secrets. Default set to pwdsecret. 
| +| INIT_SGA_SIZE | Optional | All | Set this environment variable when you want to set the size of SGA for RAC containers. | +| INIT_PGA_SIZE | Optional | All | Set this environment variable when you want to set the size of PGA for RAC containers. | +| CRS_PRIVATE_IP1 | Mandatory | All | Set this environment variable when you want to set the private IP for the first private network for RAC container. | +| CRS_PRIVATE_IP2 | Mandatory | All | Set this environment variable when you want to set the private IP for the second private network for RAC container. | +| INSTALL_NODE | Mandatory | All | Set this environment variable to the new Oracle node where the actual RAC cluster installation will happen. e.g., racnodep1/racnodep3 etc. | +| EXISTING_CLS_NODE | Mandatory | Mandatory only during Node Addition to existing RAC Cluster | This is set during addition of node to Existing RAC Cluster. Set this environment variable to existing Oracle RAC node e.g., racnodep1, racnodep2. | +| DB_ASM_DEVICE_LIST | Optional | All | Comma-separated list of ASM disk names with their full paths. | +| RECO_ASM_DEVICE_LIST | Optional | All | Comma-separated list of ASM disk names with their full paths. | +| DB_DATA_FILE_DEST | Optional | All | Name of the diskgroup where database data files will be stored. | +| DB_RECOVERY_FILE_DEST | Optional | All | Name of the diskgroup where database recovery files (archivelogs) will be stored. | +| CMAN_HOST | Optional | All | Specify the host for Oracle Connection Manager (CMAN). Default value is set to racnodepc1-cman. | +| CMAN_PORT | Optional | All | Specify the port for Oracle Connection Manager (CMAN). Default port is set to 1521. | +| DB_UNIQUE_NAME | Mandatory | Standby (DG Setup) | Specify the unique name for the standby database. | +| PRIMARY_DB_SCAN_NAME | Mandatory | Standby (DG Setup) | Specify the SCAN name of the primary database. | +| CRS_ASM_DISKGROUP | Mandatory | Standby (DG Setup) | Specify the ASM diskgroup for the standby database. | +| PRIMARY_DB_UNIQUE_NAME | Mandatory | Standby (DG Setup) | Specify the unique name of the primary database. | +| PRIMARY_DB_NAME | Mandatory | Standby (DG Setup) | Specify the name of the primary database. | +| DB_BLOCK_CHECKSUM | Mandatory | Primary and Standby (DG Setup) | Specify the type of DB block checksum to use. | +| DB_SERVICE | Optional | All | Specify the database service. Format: service:soepdb. | +| GRID_HOME | Mandatory | Setup using Slim Image | Path to Oracle Grid Infrastructure home directory. Default value is `/u01/app/21c/grid`. | +| GRID_BASE | Mandatory | Setup using Slim Image | Path to the base directory of Oracle Grid Infrastructure. Default value is `/u01/app/grid`. | +| DB_HOME | Mandatory | Setup using Slim Image | Path to Oracle Database home directory. Default value is `/u01/app/oracle/product/21c/dbhome_1`. | +| DB_BASE | Mandatory | Setup using Slim Image | Path to the base directory of Oracle Database. Default value is `/u01/app/oracle`. | +| INVENTORY | Mandatory | Setup using Slim Image | Path to the Oracle Inventory directory. Default value is `/u01/app/oraInventory`. | +| STAGING_SOFTWARE_LOC | Mandatory | Setup using Slim Image | Location where the Oracle software zip files are staged. Default value is `/scratch/software/21c/goldimages/240308`. | +| GRID_SW_ZIP_FILE | Mandatory | Setup using Slim Image | Name of the Oracle Grid Infrastructure software zip file. Default value is `LINUX.X64_213000_grid_home.zip`. 
| +| DB_SW_ZIP_FILE | Mandatory | Setup using Slim Image | Name of the Oracle Database software zip file. Default value is `LINUX.X64_213000_db_home.zip`. | +| GRID_RESPONSE_FILE | Mandatory | Setup using User Defined Response Files | Path to the Oracle Grid Infrastructure response file. Default value is `/tmp/grid_21c.rsp`. | +| DBCA_RESPONSE_FILE | Mandatory | Setup using User Defined Response Files | Path to the Oracle Database Configuration Assistant (DBCA) response file. Default value is `/tmp/dbca_21c.rsp`. | + +## License + +All scripts and files hosted in this repository which are required to build the container images are, unless otherwise noted, released under UPL 1.0 license. + +## Copyright + +Copyright (c) 2014-2024 Oracle and/or its affiliates. diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/README_1.md b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/README_1.md new file mode 100644 index 0000000000..278834c2f6 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/README_1.md @@ -0,0 +1,1078 @@ +# Oracle Real Application Clusters in Linux Containers + +Learn about container deployment options for Oracle Real Application Clusters (Oracle RAC) Release 21c (21.3) + +## Overview of Running Oracle RAC in Containers + +Oracle Real Application Clusters (Oracle RAC) is an option to the award-winning Oracle Database Enterprise Edition. Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide highly scalable and available database solutions for all business applications. +Oracle RAC uses Oracle Clusterware as a portable cluster software that allows clustering of independent servers so that they cooperate as a single system and Oracle Automatic Storage Management (Oracle ASM) to provide simplified storage management that is consistent across all servers and storage platforms. +Oracle Clusterware and Oracle ASM are part of the Oracle Grid Infrastructure, which bundles both solutions in an easy to deploy software package. + +For more information on Oracle RAC Database 21c refer to the [Oracle Database documentation](http://docs.oracle.com/en/database/). 
+ +## Using this Image + +To create an Oracle RAC environment, complete these steps in order: + +- [Oracle Real Application Clusters in Linux Containers](#oracle-real-application-clusters-in-linux-containers) + - [Overview of Running Oracle RAC in Containers](#overview-of-running-oracle-rac-in-containers) + - [Using this Image](#using-this-image) + - [Section 1 : Prerequisites for running Oracle RAC in containers](#section-1--prerequisites-for-running-oracle-rac-in-containers) + - [Section 2: Building Oracle RAC Database Container Images](#section-2-building-oracle-rac-database-container-images) + - [Oracle RAC Container Image for Docker](#oracle-rac-container-image-for-docker) + - [Oracle RAC Container Image for Podman](#oracle-rac-container-image-for-podman) + - [Section 3: Network and Password Management](#section-3--network-and-password-management) + - [Section 4: Oracle RAC on Docker](#section-4-oracle-rac-on-docker) + - [Section 4.1 : Prerequisites for Running Oracle RAC on Docker](#section-41--prerequisites-for-running-oracle-rac-on-docker) + - [Section 4.2: Setup Oracle RAC Container on Docker](#section-42-setup-oracle-rac-container-on-docker) + - [Deploying Oracle RAC on Container with Block Devices on Docker](#deploying-oracle-rac-on-container-with-block-devices-on-docker) + - [Deploying Oracle RAC on Container With Oracle RAC Storage Container](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container) + - [Assign networks to Oracle RAC docker containers](#assign-networks-to-oracle-rac-docker-containers) + - [Start the first docker container](#start-the-first-docker-container) + - [Connect to the Oracle RAC docker container](#connect-to-the-oracle-rac-docker-container) + - [Section 4.3: Adding an Oracle RAC Node using a Docker Container](#section-43-adding-an-oracle-rac-node-using-a-docker-container) + - [Deploying Oracle RAC Additional Node on Container with Block Devices on Docker](#deploying-oracle-rac-additional-node-on-container-with-block-devices-on-docker) + - [Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Docker](#deploying-oracle-rac-additional-node-on-container-with-oracle-rac-storage-container-on-docker) + - [Assign Network to additional Oracle RAC docker container](#assign-network-to-additional-oracle-rac-docker-container) + - [Start Oracle RAC docker container](#start-oracle-rac-docker-container) + - [Connect to the Oracle RAC docker container](#connect-to-the-oracle-rac-docker-container) + - [Section 5: Oracle RAC on Podman](#section-5-oracle-rac-on-podman) + - [Section 5.1 : Prerequisites for Running Oracle RAC on Podman](#section-51--prerequisites-for-running-oracle-rac-on-podman) + - [Section 5.2: Setup RAC Containers on Podman](#section-52-setup-rac-containers-on-podman) + - [Deploying Oracle RAC Containers with Block Devices on Podman](#deploying-oracle-rac-containers-with-block-devices-on-podman) + - [Deploying Oracle RAC on Container With Oracle RAC Storage Container on Podman](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container-on-podman) + - [Assign networks to Oracle RAC podman containers](#assign-networks-to-oracle-rac-podman-containers) + - [Start the first podman container](#start-the-first-podman-container) + - [Connect to the Oracle RAC container](#connect-to-the-oracle-rac-podman-container) + - [Section 5.3: Adding a Oracle RAC Node using a container on Podman](#section-53-adding-a-oracle-rac-node-using-a-container-on-podman) + - [Deploying Oracle RAC Additional Node on Container 
with Block Devices on Podman](#deploying-oracle-rac-additional-node-on-container-with-block-devices-on-podman) + - [Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Podman](#deploying-oracle-rac-additional-node-on-container-with-oracle-rac-storage-container-on-podman) + - [Assign Network to additional Oracle RAC podman container](#assign-network-to-additional-oracle-rac-podman-container) + - [Start Oracle RAC podman container](#start-oracle-rac-podman-container) + - [Section 6: Connecting to an Oracle RAC Database](#section-6-connecting-to-an-oracle-rac-database) + - [Section 7: Environment Variables for the First Node](#section-7-environment-variables-for-the-first-node) + - [Section 8: Environment Variables for the Second and Subsequent Nodes](#section-8-environment-variables-for-the-second-and-subsequent-nodes) + - [Section 9: Building a Patched Oracle RAC Container Image](#section-9-building-a-patched-oracle-rac-container-image) + - [Section 10 : Sample Container Files for Older Releases](#section-10--sample-container-files-for-older-releases) + - [Docker](#docker-container-files) + - [Podman](#podman-container-files) + - [Section 11 : Support](#section-11--support) + - [Docker](#docker-support) + - [Podman](#podman-support) + - [Section 12 : License](#section-12--license) + - [Section 11 : Copyright](#section-11--copyright) + +## Section 1 : Prerequisites for running Oracle RAC in containers + +Before you proceed to section two, you must complete each of the steps listed in this section. + +To review the resource requirements for Oracle RAC, see Oracle Database 21c Release documentation [Oracle Grid Infrastructure Installation and Upgrade Guide](https://docs.oracle.com/en/database/oracle/oracle-database/21/cwlin/index.html) + +Complete each of the following prerequisites: + +1. Ensure that each container that you will deploy as part of your cluster meets the minimum hardware requirements for Oracle RAC and Oracle Grid Infrastructure software. +2. Ensure all data files, control files, redo log files, and the server parameter file (`SPFILE`) used by the Oracle RAC database reside on shared storage that is accessible by all the Oracle RAC database instances. An Oracle RAC database is a shared-everything database, so each Oracle RAC Node must have the same access. +3. Configure the following addresses manually in your DNS. + + - Public IP address for each container + - Private IP address for each container + - Virtual IP address for each container + - Three single client access name (SCAN) addresses for the cluster. +4. Block storage: If you are planning to use block devices for shared storage, then allocate block devices for OCR, voting and database files. +5. NFS storage: If you are planning to use NFS storage for OCR, Voting Disk and Database files, then configure NFS storage and export at least one NFS mount. You can also use `/docker-images/OracleDatabase/RAC/OracleRACStorageServer` container for shared file system on NFS. +6. Set`/etc/sysctl.conf`parameters: For Oracle RAC, you must set following parameters at host level in `/etc/sysctl.conf`: + + ```INI + fs.aio-max-nr = 1048576 + fs.file-max = 6815744 + net.core.rmem_max = 4194304 + net.core.rmem_default = 262144 + net.core.wmem_max = 1048576 + net.core.wmem_default = 262144 + net.core.rmem_default = 262144 + ``` + +7. List and reload parameters: After the `/etc/sysctl.conf` file is modified, run the following commands: + + ```bash + sysctl -a + sysctl -p + ``` + +8. 
To resolve VIPs and SCAN IPs, we are using a DNS container in this guide. Before proceeding to the next step, create a [DNS server container](../OracleDNSServer/README.md).
+If you have a pre-configured DNS server in your environment, then you can replace the `-e DNS_SERVERS=172.16.1.25`, `--dns=172.16.1.25`, `-e DOMAIN=example.com`, and `--dns-search=example.com` parameters in **Section 2: Building Oracle RAC Database Container Images** with the `DOMAIN_NAME` and `DNS_SERVER` values based on your environment.
+You must ensure that the `podman-docker` package is installed on your OL8 Podman host to run the commands using the `docker` utility.
+
+9. If you are running Oracle RAC on Podman, make sure that the `podman-docker` rpm is installed so that Podman commands can be run using the `docker` utility.
+10. The Oracle RAC `Dockerfile` does not contain any Oracle software binaries. Download the following software from the [Oracle Technology Network](https://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html) and stage it under the `/docker-images/OracleDatabase/RAC/OracleRealApplicationCluster/containerfiles/` folder.
+
+    - Oracle Database 21c Grid Infrastructure (21.3) for Linux x86-64
+    - Oracle Database 21c (21.3) for Linux x86-64
+
+    - If you are deploying Oracle RAC on Podman, then complete the following steps; otherwise, skip to the next section.
+    - Because Oracle RAC on Podman is supported on Release 21c (21.7) or later, you must download the grid release update (RU) from [support.oracle.com](https://support.oracle.com/portal/). In this case, we downloaded RU `34155589`.
+
+    - Download the following one-off patches for release 21.7 from [support.oracle.com](https://support.oracle.com/portal/):
+      - `34339952`
+      - `32869666`
+
+**Notes**
+
+- If you are planning to use a `DNSServer` container for SCAN IP and VIP resolution, then configure the DNSServer. For testing purposes only, use the Oracle `DNSServer` image to deploy a container providing DNS resolutions. Please check [OracleDNSServer](../OracleDNSServer/README.md) for details.
+- The `OracleRACStorageServer` container image can be used for testing purposes only. Please check [OracleRACStorageServer](../OracleRACStorageServer/README.md) for details.
+- To run Oracle RAC using Podman on multiple hosts, refer to [Podman macvlan network](https://docs.podman.io/en/latest/markdown/podman-network-create.1.html).
+To run Oracle RAC using Oracle Container Runtime for Docker on multiple hosts, refer to [Docker macvlan network](https://docs.docker.com/network/macvlan/).
+- If the Docker or Podman bridge network is not available outside your host, you can use the Oracle Connection Manager [CMAN image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleConnectionManager) to access the Oracle RAC Database from outside the host.
+
+## Section 2: Building Oracle RAC Database Container Images
+
+**IMPORTANT:** This section assumes that you have gone through all the prerequisites in Section 1 and completed all the steps, based on your environment. Do not uncompress the binaries and patches.
+
+To assist in building the images, you can use the [`buildContainerImage.sh`](https://github.com/oracle/docker-images/blob/master/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/buildContainerImage.sh) script. See the following for instructions and usage.
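+
+If you want to review the script's supported options before you build, you can print its usage text first. The `-h` flag shown here is an assumption; check your copy of the script if the option differs:
+
+```bash
+# Run from the folder that contains buildContainerImage.sh
+./buildContainerImage.sh -h
+```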
+
+### Oracle RAC Container Image for Docker
+If you are planning to deploy the Oracle RAC container image on Podman, skip to the section [Oracle RAC Container Image for Podman](#oracle-rac-container-image-for-podman).
+
+```bash
+./buildContainerImage.sh -v <version> -o '--build-arg BASE_OL_IMAGE=oraclelinux:7' -i
+
+# Example: Building Oracle RAC Docker Image
+./buildContainerImage.sh -v 21.3.0 -o '--build-arg BASE_OL_IMAGE=oraclelinux:7' -i
+```
+***Note***
+- `IGNORE_PREREQ` defaults to `false` when building the full image. If you want the installation prerequisite checks to be skipped during the grid and dbca steps (that is, `-ignorePrereq` passed at install time), set it to `true` while building the container image.
+
+### Oracle RAC Container Image for Podman
+If you are planning to deploy the Oracle RAC container image on Docker, skip to the section [Oracle RAC Container Image for Docker](#oracle-rac-container-image-for-docker).
+
+  ```bash
+  ./buildContainerImage.sh -v <version> -o '--build-arg BASE_OL_IMAGE=oraclelinux:8' -i
+
+  # Example: Building Oracle RAC Full image
+  ./buildContainerImage.sh -v 21.3.0 -o '--build-arg BASE_OL_IMAGE=oraclelinux:8' -i
+  ```
+- After the `21.3.0` Oracle RAC container image is built, start building a patched image with the downloaded 21.7 RU and one-offs. To build the patched image, refer to [Example of how to create a patched database image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch).
+
+**Notes**
+
+- The resulting images will contain the Oracle Grid Infrastructure binaries and Oracle RAC Database binaries.
+- If you are behind a proxy wall, then you must set the `https_proxy` environment variable based on your environment before building the image.
+
+## Section 3: Network and Password Management
+
+1. Before you start the installation, you must plan your private and public network. You can create a network bridge on every container host so containers running within that host can communicate with each other.
+For example, create `rac_pub1_nw` for the public network (`172.16.1.0/24`) and `rac_priv1_nw` (`192.168.17.0/24`) for a private network. You can use any network subnet for testing. In this document we reference the public network on `172.16.1.0/24` and the private network on `192.168.17.0/24`.
+
+   ```bash
+   # docker network create --driver=bridge --subnet=172.16.1.0/24 rac_pub1_nw
+   # docker network create --driver=bridge --subnet=192.168.17.0/24 rac_priv1_nw
+   ```
+
+- To run Oracle RAC using Oracle Container Runtime for Docker on multiple hosts, you will need to create a [Docker macvlan network](https://docs.docker.com/network/macvlan/) using the following commands:
+
+   ```bash
+   # docker network create -d macvlan --subnet=172.16.1.0/24 --gateway=172.16.1.1 -o parent=eth0 rac_pub1_nw
+   # docker network create -d macvlan --subnet=192.168.17.0/24 --gateway=192.168.17.1 -o parent=eth1 rac_priv1_nw
+   ```
+
+2. Specify the secret volume for resetting the grid, oracle, and database user password during node creation or node addition. The volume can be a shared volume among all the containers. For example:
+
+   ```bash
+   # mkdir /opt/.secrets/
+   ```
+- If your environment uses Docker, run `openssl rand -hex 64 -out /opt/.secrets/pwd.key`. For Podman, run `openssl rand -hex 64 -out /opt/.secrets/pwd.key`.
+- Edit the `/opt/.secrets/common_os_pwdfile` and seed the password for the grid, oracle, and database users. For this deployment scenario, it will be a common password for the grid, oracle, and database users.
Run the command: + + ```bash + # openssl enc -aes-256-cbc -salt -in /opt/.secrets/common_os_pwdfile -out /opt/.secrets/common_os_pwdfile.enc -pass file:/opt/.secrets/pwd.key + # rm -f /opt/.secrets/common_os_pwdfile + ``` +3. Create `rac_host_file` on both Podman and Docker hosts: + + ```bash + # mkdir /opt/containers/ + # touch /opt/containers/rac_host_file + ``` + +**Notes** + +- To run Oracle RAC using Podman on multiple hosts, refer [Podman macvlan network](https://docs.podman.io/en/latest/markdown/podman-network-create.1.html). +To run Oracle RAC using Oracle Container Runtime for Docker on multiple hosts, refer [Docker macvlan network](https://docs.docker.com/network/macvlan/). +- If the Docker or Podman bridge network is not available outside your host, you can use the Oracle Connection Manager [CMAN image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleConnectionManager) to access the Oracle RAC Database from outside the host. +- If you want to specify a different password for each of the user accounts, then create three different files, encrypt them under `/opt/.secrets`, and pass the file name to the container using the environment variable. Environment variables can be ORACLE_PWD_FILE for the oracle user, GRID_PWD_FILE for the grid user, and DB_PWD_FILE for the database password. +- If you want to use a common password for the oracle, grid, and database users, then you can assign a password file name to COMMON_OS_PWD_FILE environment variable. + +## Section 4: Oracle RAC on Docker + +If you are deploying Oracle RAC On Podman, skip to the [Section 5: Oracle RAC on Podman](#section-5-oracle-rac-on-podman). + +**Note** Oracle RAC is supported for production use on Docker starting with Oracle Database 21c (21.3). On earlier releases, Oracle RAC on Docker is supported for development and and test environments. To deploy Oracle RAC on Docker, use the pre-built images available on the Oracle Container Registry. Execute the following steps in a given order to deploy RAC on Docker: + +To create an Oracle RAC environment on Docker, complete each of these steps in order. + +### Section 4.1 : Prerequisites for Running Oracle RAC on Docker + +To run Oracle RAC on Docker, you must install and configure [Oracle Container Runtime for Docker](https://docs.oracle.com/cd/E52668_01/E87205/html/index.html) on Oracle Linux 7. You must have sufficient space on docker file system (`/var/lib/docker`), configured with the Docker OverlayFS storage driver option `overlay2`. + +**IMPORTANT:** Completing prerequisite steps is a requirement for successful configuration. + +Complete each prerequisite step in order, customized for your environment. + +1. Verify that you have enough memory and CPU resources available for all containers. For this `README.md`, we used the following configuration: + + - 2 Docker hosts + - CPU Cores: 1 Socket with 4 cores, with 2 threads for each core Intel® Xeon® Platinum 8167M CPU at 2.00 GHz + - RAM: 60GB + - Swap memory: 32 GB + - Oracle Linux 7.9 or later with the Unbreakable Enterprise Kernel 6: 5.4.17-2102.200.13.el7uek.x86_64. + +2. Oracle RAC must run certain processes in real-time mode. To run processes inside a container in real-time mode, you must make changes to the Docker configuration files. For details, see the [`dockerd` documentation](https://docs.docker.com/engine/reference/commandline/dockerd/#examples). Edit the Docker Daemon based on Docker version: + + - Check the Docker version. 
In the following output, the Oracle `docker-engine` version is 19.3. + + ```bash + rpm -qa | grep docker + docker-cli-19.03.11.ol-9.el7.x86_64 + docker-engine-19.03.11.ol-9.el7.x86_64 + ``` + + - If Oracle `docker-engine` version is greater than or equal to 19.3: Edit `/usr/lib/systemd/system/docker.service` and add additional parameters in the `[Service]` section for the `dockerd` daemon: + + ```bash + ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --cpu-rt-runtime=950000 + ``` + + - If Oracle docker-engine version is less than 19.3: Edit `/etc/sysconfig/docker` and add following + + ```bash + OPTIONS='--selinux-enabled --cpu-rt-runtime=950000' + ``` + +3. After you have modified the `dockerd` daemon, reload the daemon with the changes you have made: + + ```bash + systemctl daemon-reload + systemctl stop docker + systemctl start docker + ``` + +### Section 4.2: Setup Oracle RAC Container on Docker + +This section provides step by step procedure to deploy Oracle RAC on container with block devices and storage container. To understand the details of environment variable, refer For the details of environment variables [Section 7: Environment Variables for the First Node](#section-7-environment-variables-for-the-first-node) + +Refer the [Section 3: Network and Password Management](#section-3--network-and-password-management) and setup the network on a container host based on your Oracle RAC environment. If you have already done the setup, ignore and proceed further. + +#### Deploying Oracle RAC on Container with Block Devices on Docker + +If you are using an NFS volume, skip to the section [Deploying Oracle RAC on Container With Oracle RAC Storage Container](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container). + +Make sure the ASM devices do not have any existing file system. To clear any other file system from the devices, use the following command: + + ```bash + # dd if=/dev/zero of=/dev/xvde bs=8k count=100000 + ``` + +Repeat for each shared block device. In the preceding example, `/dev/xvde` is a shared Xen virtual block device. + +Now create the Oracle RAC container using the image. You can use the following example to create a container: + + ```bash + # docker create -t -i \ + --hostname racnode1 \ + --volume /boot:/boot:ro \ + --volume /dev/shm \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --device=/dev/xvde:/dev/asm_disk1 \ + --device=/dev/xvdf:/dev/asm_disk2 \ + --privileged=false \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + -e DNS_SERVERS="172.16.1.25" \ + -e NODE_VIP=172.16.1.160 \ + -e VIP_HOSTNAME=racnode1-vip \ + -e PRIV_IP=192.168.17.150 \ + -e PRIV_HOSTNAME=racnode1-priv \ + -e PUBLIC_IP=172.16.1.150 \ + -e PUBLIC_HOSTNAME=racnode1 \ + -e SCAN_NAME=racnode-scan \ + -e OP_TYPE=INSTALL \ + -e DOMAIN=example.com \ + -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \ + -e ASM_DISCOVERY_DIR=/dev \ + -e CMAN_HOSTNAME=racnode-cman1 \ + -e CMAN_IP=172.16.1.15 \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + --restart=always --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ + --cpu-rt-runtime=95000 --ulimit rtprio=99 \ + --name racnode1 \ + oracle/database-rac:21.3.0 + ``` + +**Note:** Change environment variables such as `NODE_IP`, `PRIV_IP`, `PUBLIC_IP`, `ASM_DEVICE_LIST`, `PWD_FILE`, and `PWD_KEY` based on your environment. 
Also, ensure you use the correct device names on each host. + +#### Deploying Oracle RAC on Container With Oracle RAC Storage Container + +If you are using block devices, skip to the section [Deploying Oracle RAC on Container with Block Devices on Docker](#deploying-oracle-rac-on-container-with-block-devices-on-docker) + +Now create the Oracle RAC container using the image. You can use the following example to create a container: + + ```bash + # docker create -t -i \ + --hostname racnode1 \ + --volume /boot:/boot:ro \ + --volume /dev/shm \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --privileged=false \ + --volume racstorage:/oradata \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + -e DNS_SERVERS="172.16.1.25" \ + -e NODE_VIP=172.16.1.160 \ + -e VIP_HOSTNAME=racnode1-vip \ + -e PRIV_IP=192.168.17.150 \ + -e PRIV_HOSTNAME=racnode1-priv \ + -e PUBLIC_IP=172.16.1.150 \ + -e PUBLIC_HOSTNAME=racnode1 \ + -e SCAN_NAME=racnode-scan \ + -e OP_TYPE=INSTALL \ + -e DOMAIN=example.com \ + -e ASM_DISCOVERY_DIR=/oradata \ + -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ + -e CMAN_HOSTNAME=racnode-cman1 \ + -e CMAN_IP=172.16.1.15 \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + --restart=always \ + --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ + --cpu-rt-runtime=95000 \ + --ulimit rtprio=99 \ + --name racnode1 \ + oracle/database-rac:21.3.0 + ``` + +**Notes:** + +- Change environment variables such as `NODE_IP`, `PRIV_IP`, `PUBLIC_IP`, `ASM_DEVICE_LIST`, `PWD_FILE`, and `PWD_KEY` based on your environment. Also, ensure you use the correct device names on each host. +- You must have created the `racstorage` volume before the creation of the Oracle RAC Container. For details, please refer [OracleRACStorageServer](../OracleRACStorageServer/README.md). +- For details about the available environment variables, refer the [Section 7](#section-7-environment-variables-for-the-first-node). + +#### Assign networks to Oracle RAC docker containers + +You need to assign the Docker networks created in section 1 to containers. Execute the following commands: + + ```bash + # docker network disconnect bridge racnode1 + # docker network connect rac_pub1_nw --ip 172.16.1.150 racnode1 + # docker network connect rac_priv1_nw --ip 192.168.17.150 racnode1 + ``` + +#### Start the first docker container + +To start the first container, run the following command: + + ```bash + # docker start racnode1 + ``` + +It can take at least 40 minutes or longer to create the first node of the cluster. To check the logs, use the following command from another terminal session: + + ```bash + # docker logs -f racnode1 + ``` + +You should see the database creation success message at the end: + + ```bash + #################################### + ORACLE RAC DATABASE IS READY TO USE! + #################################### + ``` + +#### Connect to the Oracle RAC docker container + +To connect to the container execute the following command: + +```bash +# docker exec -i -t racnode1 /bin/bash +``` + +If the install fails for any reason, log in to the container using the preceding command and check `/tmp/orod.log`. You can also review the Grid Infrastructure logs located at `$GRID_BASE/diag/crs` and check for failure logs. 
If the failure occurred during the database creation then check the database logs. + +### Section 4.3: Adding an Oracle RAC Node using a Docker Container + +Before proceeding to the next step, ensure Oracle Grid Infrastructure is running and the Oracle RAC Database is open as per instructions in [Section 4.2: Setup Oracle RAC on Docker](#section-42-setup-oracle-rac-container-on-docker). Otherwise, the node addition process will fail. + +Refer the [Section 3: Network and Password Management](#section-3--network-and-password-management) and setup the network on a container host based on your Oracle RAC environment. If you have already done the setup, ignore and proceed further. + +To understand the details of environment variable, refer For the details of environment variables [Section 8](#section-8-environment-variables-for-the-second-and-subsequent-nodes) + +Reset the password on the existing Oracle RAC node for SSH setup between an existing node in the cluster and the new node. Password must be the same on all the nodes for the `grid` and `oracle` users. Execute the following command on an existing node of the cluster. + +```bash +docker exec -i -t -u root racnode1 /bin/bash +sh /opt/scripts/startup/resetOSPassword.sh --help +sh /opt/scripts/startup/resetOSPassword.sh --op_type reset_grid_oracle --pwd_file common_os_pwdfile.enc --secret_volume /run/secrets --pwd_key_file pwd.key +``` + +**Note:** If you do not have a common secret volume among Oracle RAC containers, populate the password file with the same password that you have used on the new node, encrypt the file, and execute `resetOSPassword.sh` on the existing node of the cluster. + +#### Deploying Oracle RAC Additional Node on Container with Block Devices on Docker + +If you are using an NFS volume, skip to the section [Deploying Oracle RAC on Container with Oracle RAC Storage Container on Docker](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container) + +To create additional nodes, use the following command: + +```bash +# docker create -t -i \ + --hostname racnode2 \ + --volume /dev/shm \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /boot:/boot:ro \ + --dns-search=example.com \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --device=/dev/xvde:/dev/asm_disk1 \ + --device=/dev/zvdf:/dev/asm_disk2 \ + --privileged=false \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + -e DNS_SERVERS="172.16.1.25" \ + -e EXISTING_CLS_NODES=racnode1 \ + -e NODE_VIP=172.16.1.161 \ + -e VIP_HOSTNAME=racnode2-vip \ + -e PRIV_IP=192.168.17.151 \ + -e PRIV_HOSTNAME=racnode2-priv \ + -e PUBLIC_IP=172.16.1.151 \ + -e PUBLIC_HOSTNAME=racnode2 \ + -e DOMAIN=example.com \ + -e SCAN_NAME=racnode-scan \ + -e ASM_DISCOVERY_DIR=/dev \ + -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \ + -e ORACLE_SID=ORCLCDB \ + -e OP_TYPE=ADDNODE \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ + --cpu-rt-runtime=95000 \ + --ulimit rtprio=99 \ + --restart=always \ + --name racnode2 \ + oracle/database-rac:21.3.0 +``` + +For details of all environment variables and parameters, refer to [Section 7](#section-7-environment-variables-for-the-first-node). 
+ +#### Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Docker + +If you are using physical block devices for shared storage, skip to [Deploying Oracle RAC on Container with Block Devices on Docker](#deploying-oracle-rac-on-container-with-block-devices-on-docker) + +Use the existing `racstorage:/oradata` volume when creating the additional container using the image. + +For example: + +```bash +# docker create -t -i \ + --hostname racnode2 \ + --volume /dev/shm \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /boot:/boot:ro \ + --dns-search=example.com \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --privileged=false \ + --volume racstorage:/oradata \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + -e DNS_SERVERS="172.16.1.25" \ + -e EXISTING_CLS_NODES=racnode1 \ + -e NODE_VIP=172.16.1.161 \ + -e VIP_HOSTNAME=racnode2-vip \ + -e PRIV_IP=192.168.17.151 \ + -e PRIV_HOSTNAME=racnode2-priv \ + -e PUBLIC_IP=172.16.1.151 \ + -e PUBLIC_HOSTNAME=racnode2 \ + -e DOMAIN=example.com \ + -e SCAN_NAME=racnode-scan \ + -e ASM_DISCOVERY_DIR=/oradata \ + -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ + -e ORACLE_SID=ORCLCDB \ + -e OP_TYPE=ADDNODE \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ + --cpu-rt-runtime=95000 \ + --ulimit rtprio=99 \ + --restart=always \ + --name racnode2 \ + oracle/database-rac:21.3.0 +``` + +**Notes:** + +- You must have created **racstorage** volume before the creation of the Oracle RAC container. +- You can change env variables such as IPs and ORACLE_PWD based on your env. For details about the env variables, refer the section 8. + +#### Assign Network to additional Oracle RAC docker container + +Connect the private and public networks you created earlier to the container: + +```bash +# docker network disconnect bridge racnode2 +# docker network connect rac_pub1_nw --ip 172.16.1.151 racnode2 +# docker network connect rac_priv1_nw --ip 192.168.17.151 racnode2 +``` + +#### Start Oracle RAC docker container + +Start the container + +```bash +# docker start racnode2 +``` + +To check the database logs, tail the logs using the following command: + +```bash +# docker logs -f racnode2 +``` + +You should see the database creation success message at the end. + +```text +#################################### +ORACLE RAC DATABASE IS READY TO USE! +#################################### +``` + +#### Connect to the Oracle RAC container on Additional Node + +To connect to the container execute the following command: + +```bash +# docker exec -i -t racnode2 /bin/bash +``` + +If the node addition fails, log in to the container using the preceding command and review `/tmp/orod.log`. You can also review the Grid Infrastructure logs i.e. `$GRID_BASE/diag/crs` and check for failure logs. If the node creation has failed during the database creation process, then check DB logs. + +## Section 5: Oracle RAC on Podman + +If you are deploying Oracle RAC On Docker, skip to [Section 4: Oracle RAC on Docker](#section-4-oracle-rac-on-docker) + +**Note** Oracle RAC is supported for production use on Podman starting with Oracle Database 19c (19.16), and Oracle Database 21c (21.7). 
You can deploy Oracle RAC on Podman using the pre-built images available on the Oracle Container Registry. Execute the following steps in the given order to deploy Oracle RAC on Podman:
+
+To create an Oracle RAC environment on Podman, complete each of these steps in order.
+
+### Section 5.1 : Prerequisites for Running Oracle RAC on Podman
+
+You must install and configure [Podman release 4.0.2](https://docs.oracle.com/en/operating-systems/oracle-linux/Podman/) or later on Oracle Linux 8.5 or later to run Oracle RAC on Podman.
+
+**IMPORTANT:** Completing prerequisite steps is a requirement for successful configuration.
+
+Complete each prerequisite step in order, customized for your environment.
+
+1. Verify that you have enough memory and CPU resources available for all containers. In this `README.md` for Podman, we used the following configuration:
+
+   - 2 Podman hosts
+   - CPU Cores: 1 Socket with 4 cores, with 2 threads for each core Intel® Xeon® Platinum 8167M CPU at 2.00 GHz
+   - RAM: 60 GB
+   - Swap memory: 32 GB
+   - Oracle Linux 8.5 (Linux-x86-64) with the Unbreakable Enterprise Kernel 6: `5.4.17-2136.300.7.el8uek.x86_64`.
+
+2. Oracle RAC must run certain processes in real-time mode. To run processes inside a container in real-time mode, populate the real-time CPU budget on machine restart by creating a `oneshot` systemd service as follows:
+
+   - Create a file `/etc/systemd/system/Podman-rac-cgroup.service`
+   - Append the following lines:
+
+   ```INI
+   [Unit]
+   Description=Populate Cgroups with real time chunk on machine restart
+   After=multi-user.target
+   [Service]
+   Type=oneshot
+   ExecStart=/bin/bash -c "/bin/echo 950000 > /sys/fs/cgroup/cpu,cpuacct/machine.slice/cpu.rt_runtime_us && /bin/systemctl restart Podman-restart.service"
+   StandardOutput=journal
+   CPUAccounting=yes
+   Slice=machine.slice
+   [Install]
+   WantedBy=multi-user.target
+   ```
+
+   - After creating the file `/etc/systemd/system/Podman-rac-cgroup.service` with the lines appended in the preceding step, reload the systemd configuration and enable the services:
+
+   ```bash
+   systemctl daemon-reload
+   systemctl enable Podman-rac-cgroup.service
+   systemctl enable Podman-restart.service
+   systemctl start Podman-rac-cgroup.service
+   ```
+
+3. If SELinux is enabled on the Podman host, then you must create an SELinux policy for Oracle RAC on Podman. For details about this procedure, see "How to Configure Podman for SELinux Mode" in the publication [Oracle Real Application Clusters Installation Guide for Podman Oracle Linux x86-64](https://docs.oracle.com/en/database/oracle/oracle-database/21/racpd/target-configuration-oracle-rac-podman.html#GUID-59138DF8-3781-4033-A38F-E0466884D008).
+
+### Section 5.2: Setup RAC Containers on Podman
+This section provides a step-by-step procedure to deploy Oracle RAC containers using either block devices or a storage container. For details about the environment variables, refer to [Section 7: Environment Variables for the First Node](#section-7-environment-variables-for-the-first-node).
+
+Refer to [Section 3: Network and Password Management](#section-3--network-and-password-management) and set up the network on the container host based on your Oracle RAC environment. If you have already completed this setup, ignore it and proceed further.
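+
+Before creating any containers, you can optionally confirm that the bridge networks created in Section 3 are present on this Podman host. This is only a sanity check, assuming the network names used earlier in this guide (`rac_pub1_nw` and `rac_priv1_nw`):
+
+```bash
+# Print a line for each RAC network that already exists on this host
+podman network inspect rac_pub1_nw &> /dev/null && echo "rac_pub1_nw exists"
+podman network inspect rac_priv1_nw &> /dev/null && echo "rac_priv1_nw exists"
+```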
+ +#### Deploying Oracle RAC Containers with Block Devices on Podman + +If you are using an NFS volume, skip to the section [Deploying Oracle RAC on Container With Oracle RAC Storage Container on Podman](#deploying-oracle-rac-on-container-with-oracle-rac-storage-container-on-podman). + +Make sure the ASM devices do not have any existing file system. To clear any other file system from the devices, use the following command: + + ```bash + # dd if=/dev/zero of=/dev/xvde bs=8k count=100000 + ``` + +Repeat for each shared block device. In the preceding example, `/dev/xvde` is a shared Xen virtual block device. + +Now create the Oracle RAC container using the image. For the details of environment variables, refer to section 7. You can use the following example to create a container: + + ```bash + # podman create -t -i \ + --hostname racnode1 \ + --volume /boot:/boot:ro \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --device=/dev/xvde:/dev/asm_disk1 \ + --device=/dev/xvdf:/dev/asm_disk2 \ + --privileged=false \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + --cap-add=AUDIT_WRITE \ + --cap-add=AUDIT_CONTROL \ + -e DNS_SERVERS="172.16.1.25" \ + -e NODE_VIP=172.16.1.160 \ + -e VIP_HOSTNAME=racnode1-vip \ + -e PRIV_IP=192.168.17.150 \ + -e PRIV_HOSTNAME=racnode1-priv \ + -e PUBLIC_IP=172.16.1.150 \ + -e PUBLIC_HOSTNAME=racnode1 \ + -e SCAN_NAME=racnode-scan \ + -e OP_TYPE=INSTALL \ + -e DOMAIN=example.com \ + -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \ + -e ASM_DISCOVERY_DIR=/dev \ + -e CMAN_HOSTNAME=racnode-cman1 \ + -e CMAN_IP=172.16.1.15 \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + --restart=always \ + --systemd=always \ + --cpu-rt-runtime=95000 \ + --ulimit rtprio=99 \ + --name racnode1 \ + localhost/oracle/database-rac:21.3.0-21.7.0 + ``` + +**Note:** Change environment variables such as `NODE_IP`, `PRIV_IP`, `PUBLIC_IP`, `ASM_DEVICE_LIST`, `PWD_FILE`, and `PWD_KEY` based on your environment. Also, ensure you use the correct device names on each host. + +#### Deploying Oracle RAC on Container With Oracle RAC Storage Container on Podman + +If you are using block devices, skip to the section [Deploying RAC Containers with Block Devices on Podman](#deploying-oracle-rac-containers-with-block-devices-on-podman) +Now create the Oracle RAC container using the image. 
You can use the following example to create a container: + + ```bash + # podman create -t -i \ + --hostname racnode1 \ + --volume /boot:/boot:ro \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --privileged=false \ + --volume racstorage:/oradata \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + --cap-add=AUDIT_WRITE \ + --cap-add=AUDIT_CONTROL \ + -e DNS_SERVERS="172.16.1.25" \ + -e NODE_VIP=172.16.1.160 \ + -e VIP_HOSTNAME=racnode1-vip \ + -e PRIV_IP=192.168.17.150 \ + -e PRIV_HOSTNAME=racnode1-priv \ + -e PUBLIC_IP=172.16.1.150 \ + -e PUBLIC_HOSTNAME=racnode1 \ + -e SCAN_NAME=racnode-scan \ + -e OP_TYPE=INSTALL \ + -e DOMAIN=example.com \ + -e ASM_DISCOVERY_DIR=/oradata \ + -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ + -e CMAN_HOSTNAME=racnode-cman1 \ + -e CMAN_IP=172.16.1.15 \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + --restart=always \ + --systemd=always \ + --cpu-rt-runtime=95000 \ + --ulimit rtprio=99 \ + --name racnode1 \ + localhost/oracle/database-rac:21.3.0-21.7.0 + ``` + +**Notes:** + +- Change environment variables such as `NODE_IP`, `PRIV_IP`, `PUBLIC_IP`, `ASM_DEVICE_LIST`, `PWD_FILE`, and `PWD_KEY` based on your environment. Also, ensure you use the correct device names on each host. +- You must have created the `racstorage` volume before the creation of the Oracle RAC Container. For details about the available environment variables, refer the [Section 7](#section-7-environment-variables-for-the-first-node). + +#### Assign networks to Oracle RAC podman containers + +You need to assign the Podman networks created in section 1 to containers. Execute the following commands: + + ```bash + # podman network disconnect bridge racnode1 + # podman network connect rac_pub1_nw --ip 172.16.1.150 racnode1 + # podman network connect rac_priv1_nw --ip 192.168.17.150 racnode1 + ``` + +#### Start the first podman container + +To start the first container, run the following command: + + ```bash + # podman start racnode1 + ``` + +It can take at least 40 minutes or longer to create the first node of the cluster. To check the database logs, tail the logs using the following command: + +```bash +podman exec racnode1 /bin/bash -c "tail -f /tmp/orod.log" +``` + +You should see the database creation success message at the end. + +```text +#################################### +ORACLE RAC DATABASE IS READY TO USE! +#################################### +``` + +#### Connect to the Oracle RAC podman container + +To connect to the container execute the following command: + +```bash +# podman exec -i -t racnode1 /bin/bash +``` + +If the install fails for any reason, log in to the container using the preceding command and check `/tmp/orod.log`. You can also review the Grid Infrastructure logs located at `$GRID_BASE/diag/crs` and check for failure logs. If the failure occurred during the database creation then check the database logs. + +### Section 5.3: Adding a Oracle RAC Node using a container on Podman + +Before proceeding to the next step, ensure Oracle Grid Infrastructure is running and the Oracle RAC Database is open as per instructions in [Section 5.2: Setup RAC Containers on Podman](#section-52-setup-rac-containers-on-podman). Otherwise, the node addition process will fail. 
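+
+For example, you can verify the cluster and database state from the first node before adding a new one. The following check assumes the container name `racnode1`, the `grid` and `oracle` users, and the default database name `ORCLCDB` used in the preceding examples:
+
+```bash
+# Check that Oracle Clusterware (CRS) is up on the existing node
+podman exec racnode1 /bin/bash -c "su - grid -c 'crsctl check crs'"
+# Check that the Oracle RAC database is running and open
+podman exec racnode1 /bin/bash -c "su - oracle -c 'srvctl status database -d ORCLCDB'"
+```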
+
+Refer to [Section 3: Network and Password Management](#section-3--network-and-password-management) and set up the network on the container host based on your Oracle RAC environment. If you have already completed the network setup, proceed to the next step.
+
+For details about the environment variables, refer to [Section 8](#section-8-environment-variables-for-the-second-and-subsequent-nodes).
+
+Reset the password on the existing Oracle RAC node for SSH setup between an existing node in the cluster and the new node. The password must be the same on all nodes for the `grid` and `oracle` users. Run the following commands on an existing node of the cluster:
+
+```bash
+podman exec -i -t -u root racnode1 /bin/bash
+sh /opt/scripts/startup/resetOSPassword.sh --help
+sh /opt/scripts/startup/resetOSPassword.sh --op_type reset_grid_oracle --pwd_file common_os_pwdfile.enc --secret_volume /run/secrets --pwd_key_file pwd.key
+```
+
+**Note:** If you do not have a common secret volume among the Oracle RAC containers, populate the password file with the same password that you have used on the new node, encrypt the file, and execute `resetOSPassword.sh` on the existing node of the cluster.
+
+#### Deploying Oracle RAC Additional Node on Container with Block Devices on Podman
+
+If you are using an NFS volume, skip to the section [Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Podman](#deploying-oracle-rac-additional-node-on-container-with-oracle-rac-storage-container-on-podman).
+
+To create additional nodes, use the following command:
+
+```bash
+# podman create -t -i \
+  --hostname racnode2 \
+  --tmpfs /dev/shm:rw,exec,size=4G \
+  --volume /boot:/boot:ro \
+  --volume /opt/containers/rac_host_file:/etc/hosts \
+  --volume /opt/.secrets:/run/secrets:ro \
+  --dns=172.16.1.25 \
+  --dns-search=example.com \
+  --device=/dev/xvde:/dev/asm_disk1 \
+  --device=/dev/xvdf:/dev/asm_disk2 \
+  --privileged=false \
+  --cap-add=SYS_NICE \
+  --cap-add=SYS_RESOURCE \
+  --cap-add=NET_ADMIN \
+  --cap-add=AUDIT_CONTROL \
+  --cap-add=AUDIT_WRITE \
+  -e DNS_SERVERS="172.16.1.25" \
+  -e EXISTING_CLS_NODES=racnode1 \
+  -e NODE_VIP=172.16.1.161 \
+  -e VIP_HOSTNAME=racnode2-vip \
+  -e PRIV_IP=192.168.17.151 \
+  -e PRIV_HOSTNAME=racnode2-priv \
+  -e PUBLIC_IP=172.16.1.151 \
+  -e PUBLIC_HOSTNAME=racnode2 \
+  -e DOMAIN=example.com \
+  -e SCAN_NAME=racnode-scan \
+  -e ASM_DISCOVERY_DIR=/dev \
+  -e ASM_DEVICE_LIST=/dev/asm_disk1,/dev/asm_disk2 \
+  -e ORACLE_SID=ORCLCDB \
+  -e OP_TYPE=ADDNODE \
+  -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \
+  -e PWD_KEY=pwd.key \
+  --systemd=always \
+  --cpu-rt-runtime=95000 \
+  --ulimit rtprio=99 \
+  --restart=always \
+  --name racnode2 \
+  localhost/oracle/database-rac:21.3.0-21.7.0
+```
+
+For details of all environment variables and parameters, refer to [Section 8](#section-8-environment-variables-for-the-second-and-subsequent-nodes).
+
+#### Deploying Oracle RAC Additional Node on Container with Oracle RAC Storage Container on Podman
+
+If you are using physical block devices for shared storage, skip to [Deploying Oracle RAC Additional Node on Container with Block Devices on Podman](#deploying-oracle-rac-additional-node-on-container-with-block-devices-on-podman).
+
+Use the existing `racstorage:/oradata` volume when creating the additional container using the image.
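+
+Before creating the additional container, you can optionally confirm that the shared volume is still available. This check assumes the volume name `racstorage` created for the first node:
+
+```bash
+# Confirm that the NFS volume used by the first node still exists
+podman volume ls --filter name=racstorage
+podman volume inspect racstorage
+```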
+ +For example: + +```bash +# podman create -t -i \ + --hostname racnode2 \ + --tmpfs /dev/shm:rw,exec,size=4G \ + --volume /boot:/boot:ro \ + --dns-search=example.com \ + --volume /opt/containers/rac_host_file:/etc/hosts \ + --volume /opt/.secrets:/run/secrets:ro \ + --dns=172.16.1.25 \ + --dns-search=example.com \ + --privileged=false \ + --volume racstorage:/oradata \ + --cap-add=SYS_NICE \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + --cap-add=AUDIT_WRITE \ + --cap-add=AUDIT_CONTROL \ + -e DNS_SERVERS="172.16.1.25" \ + -e EXISTING_CLS_NODES=racnode1 \ + -e NODE_VIP=172.16.1.161 \ + -e VIP_HOSTNAME=racnode2-vip \ + -e PRIV_IP=192.168.17.151 \ + -e PRIV_HOSTNAME=racnode2-priv \ + -e PUBLIC_IP=172.16.1.151 \ + -e PUBLIC_HOSTNAME=racnode2 \ + -e DOMAIN=example.com \ + -e SCAN_NAME=racnode-scan \ + -e ASM_DISCOVERY_DIR=/oradata \ + -e ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ + -e ORACLE_SID=ORCLCDB \ + -e OP_TYPE=ADDNODE \ + -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \ + -e PWD_KEY=pwd.key \ + --systemd=always \ + --cpu-rt-runtime=95000 \ + --ulimit rtprio=99 \ + --restart=always \ + --name racnode2 \ + localhost/oracle/database-rac:21.3.0-21.7.0 +``` + +**Notes:** + +- You must have created **racstorage** volume before the creation of the Oracle RAC container. +- You can change env variables such as IPs and ORACLE_PWD based on your env. For details about the env variables, refer the [Section 8](#section-8-environment-variables-for-the-second-and-subsequent-nodes). + +#### Assign Network to additional Oracle RAC podman container + +Connect the private and public networks you created earlier to the container: + +```bash +# podman network disconnect bridge racnode2 +# podman network connect rac_pub1_nw --ip 172.16.1.151 racnode2 +# podman network connect rac_priv1_nw --ip 192.168.17.151 racnode2 +``` + +#### Start Oracle RAC podman container + +Start the container + +```bash +# podman start racnode2 +``` + +To check the database logs, tail the logs using the following command: + +```bash +podman exec racnode2 /bin/bash -c "tail -f /tmp/orod.log" +``` + +You should see the database creation success message at the end. + +```text +#################################### +ORACLE RAC DATABASE IS READY TO USE! +#################################### +``` + +## Section 6: Connecting to an Oracle RAC Database + +**IMPORTANT:** This section assumes that you have successfully created an Oracle RAC cluster using the preceding sections. + +If you are using a connection manager and exposed the port 1521 on the host, then connect from an external client using the following connection string, where `` is the host container, and `` is the database system identifier: + +```bash +system/@//:1521/ +``` + +If you are using the bridge created using MACVLAN driver, and you have configured DNS appropriately, then you can connect using the public Single Client Access (SCAN) listener directly from any external client. To connect with the SCAN, use the following connection string, where `` is the SCAN name for the database, and `` is the database system identifier: + +```bash +system/@//:1521/ +``` + +## Section 7: Environment Variables for the First Node + +This section provides information about the environment variables that can be used when creating the first node of a cluster. + +```bash +OP_TYPE=###Specify the Operation TYPE. 
It can accept 2 values INSTALL OR ADDNODE#### +NODE_VIP=####Specify the Node VIP### +VIP_HOSTNAME=###Specify the VIP hostname### +PRIV_IP=###Specify the Private IP### +PRIV_HOSTNAME=###Specify the Private Hostname### +PUBLIC_IP=###Specify the public IP### +PUBLIC_HOSTNAME=###Specify the public hostname### +SCAN_NAME=###Specify the scan name### +ASM_DEVICE_LIST=###Specify the ASM Disk lists. +SCAN_IP=###Specify this if you do not have DNS server### +DOMAIN=###Default value set to example.com### +PASSWORD=###OS password will be generated by openssl### +CLUSTER_NAME=###Default value set to racnode-c#### +ORACLE_SID=###Default value set to ORCLCDB### +ORACLE_PDB=###Default value set to ORCLPDB### +ORACLE_PWD=###Default value set to generated by openssl random password### +ORACLE_CHARACTERSET=###Default value set AL32UTF8### +DEFAULT_GATEWAY=###Default gateway. You need this env variable if containers will be running on multiple hosts.#### +CMAN_HOSTNAME=###Connection Manager Host Name### +CMAN_IP=###Connection manager Host IP### +ASM_DISCOVERY_DIR=####ASM disk location insdie the container. By default it is /dev###### +COMMON_OS_PWD_FILE=###Pass the file name to setup grid and oracle user password. If you specify ORACLE_PWD_FILE, GRID_PWD_FILE, and DB_PWD_FILE then you do not need to specify this env variable### +ORACLE_PWD_FILE=###Pass the file name to set the password for oracle user.### +GRID_PWD_FILE=###Pass the file name to set the password for grid user.### +DB_PWD_FILE=###Pass the file name to set the password for DB user i.e. sys.### +REMOVE_OS_PWD_FILES=###Set this env variable to true to remove pwd key file and password file after resetting password.### +CONTAINER_DB_FLAG=###Default value is set to true to create container database. Set this to false if you do not want to create container database.### +``` + +## Section 8: Environment Variables for the Second and Subsequent Nodes + +This section provides the details about the environment variables that can be used for all additional nodes added to an existing cluster. + +```bash +OP_TYPE=###Specify the Operation TYPE. It can accept 2 values INSTALL OR ADDNODE### +EXISTING_CLS_NODES=###Specify the Existing Node of the cluster which you want to join. If you have 2 nodes in the cluster and you are trying to add the third node then specify existing 2 nodes of the clusters and separate them by comma.#### +NODE_VIP=###Specify the Node VIP### +VIP_HOSTNAME=###Specify the VIP hostname### +PRIV_IP=###Specify the Private IP### +PRIV_HOSTNAME=###Specify the Private Hostname### +PUBLIC_IP=###Specify the public IP### +PUBLIC_HOSTNAME=###Specify the public hostname### +SCAN_NAME=###Specify the scan name### +SCAN_IP=###Specify this if you do not have DNS server### +ASM_DEVICE_LIST=###Specify the ASM Disk lists. +DOMAIN=###Default value set to example.com### +ORACLE_SID=###Default value set to ORCLCDB### +DEFAULT_GATEWAY=###Default gateway. You need this env variable if containers will be running on multiple hosts.#### +CMAN_HOSTNAME=###Connection Manager Host Name### +CMAN_IP=###Connection manager Host IP### +ASM_DISCOVERY_DIR=####ASM disk location inside the container. By default it is /dev###### +COMMON_OS_PWD_FILE=###You need to pass the file name to setup grid and oracle user password. 
If you specify ORACLE_PWD_FILE, GRID_PWD_FILE, and DB_PWD_FILE then you do not need to specify this env variable### +ORACLE_PWD_FILE=###You need to pass the file name to set the password for oracle user.### +GRID_PWD_FILE=###You need to pass the file name to set the password for grid user.### +DB_PWD_FILE=###You need to pass the file name to set the password for DB user i.e. sys.### +REMOVE_OS_PWD_FILES=###You need to set this to true to remove pwd key file and password file after resetting password.### +``` + +## Section 9: Building a Patched Oracle RAC Container Image + +If you want to build a patched image based on a base 21.3.0 container image, then refer to the GitHub page [Example of how to create a patched database image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch). + +## Section 10 : Sample Container Files for Older Releases + +### Docker Container Files + +This project offers sample container files for Oracle Grid Infrastructure and Oracle Real Application Clusters for dev and test: + +* Oracle Database 19c Oracle Grid Infrastructure (19.3) for Linux x86-64 +* Oracle Database 19c (19.3) for Linux x86-64 +* Oracle Database 18c Oracle Grid Infrastructure (18.3) for Linux x86-64 +* Oracle Database 18c (18.3) for Linux x86-64 +* Oracle Database 12c Release 2 Oracle Grid Infrastructure (12.2.0.1.0) for Linux x86-64 +* Oracle Database 12c Release 2 (12.2.0.1.0) Enterprise Edition for Linux x86-64 + + **Notes:** + +* Note that the Oracle RAC on Docker Container releases are supported only for test and development environments, but not for production environments. +* If you are planning to build and deploy Oracle RAC 18.3.0, you need to download Oracle 18.3.0 Grid Infrastructure and Oracle Database 18.3.0 Database. You also need to download Patch# p28322130_183000OCWRU_Linux-x86-64.zip from [Oracle Technology Network](https://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/docker-4418413.html). +Stage it under containerfiles/18.3.0 folder. +* If you are planning to build and deploy Oracle RAC 12.2.0.1, you need to download Oracle 12.2.0.1 Grid Infrastructure and Oracle Database 12.2.0.1 Database. You also need to download Patch# p27383741_122010_Linux-x86-64.zip from [Oracle Technology Network](https://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/docker-4418413.html). +Stage it under containerfiles/12.2.0.1 folder. + +### Podman Container Files + +This project offers sample container files for Oracle Grid Infrastructure and Oracle Real Application Clusters for dev and test: + +* Oracle Database 19c Oracle Grid Infrastructure (19.3) for Linux x86-64 +* Oracle Database 19c (19.3) for Linux x86-64 + + **Notes:** + +* Because Oracle RAC on Podman is supported on 19c from 19.16 or later, you must download the grid release update (RU) from [support.oracle.com](https://support.oracle.com/portal/). In this case, we downloaded RU `34130714`. +* Download following one-offs for 19.16 from [support.oracle.com](https://support.oracle.com/portal/) + * `34339952` + * `32869666` +* Before starting the next step, you must edit `docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles/19.3.0/Dockerfile`, change `oraclelinux:7-slim` to `oraclelinux:8`, and save the file. +* You must add `CV_ASSUME_DISTID=OEL8` inside the `Dockerfile` as an env variable. 
+ +* Once the `19.3.0` Oracle RAC on Podman image is built, start building patched image with the download 19.16 RU and one-offs. To build the patch the image, refer [Example of how to create a patched database image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch). +* Make changes in `/opt/containers/envfile` as per 19c `Dockerfile`. You need to change all the contents based on 19c such as `GRID_HOME`, `ORACLE_HOME` and `ADDNODE_RSP` which you have used in `Dockerfile` while building the image. + +## Section 11 : Support + +### Docker Support + +At the time of this release, Oracle RAC on Docker is supported only on Oracle Linux 7. To see current details, refer the [Real Application Clusters Installation Guide for Docker Containers Oracle Linux x86-64](https://docs.oracle.com/en/database/oracle/oracle-database/21/racdk/oracle-rac-on-docker.html). + +### Podman Support + +At the time of this release, Oracle RAC on Podman is supported for Oracle Linux 8.5 later. To see current Linux support certifications, refer [Oracle RAC on Podman Documentation](https://docs.oracle.com/en/database/oracle/oracle-database/21/install-and-upgrade.html) + +## Section 12 : License + +To download and run Oracle Grid and Database, regardless of whether inside or outside a container, you must download the binaries from the Oracle website and accept the license indicated on that page. + +All scripts and files hosted in this repository which are required to build the container images are, unless otherwise noted, released under UPL 1.0 license. + +## Section 11 : Copyright + +Copyright (c) 2014-2022 Oracle and/or its affiliates. diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/developers/ENVVARIABLESCOMPOSE.md b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/developers/ENVVARIABLESCOMPOSE.md new file mode 100644 index 0000000000..d468616e79 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/developers/ENVVARIABLESCOMPOSE.md @@ -0,0 +1,60 @@ +# Environment Variables Explained for Oracle RAC on Podman Compose + +This section provides information about the environment variables that can be used when creating 2 Node RAC cluster. + +| Variable Name | Description | +|----------------------------|-----------------------------------------------------------------------------| +| DNS_PUBLIC_IP | Default set to 10.0.20.25. Set this env variable when you want to set DNS container public ip address where both Oracle RAC nodes are resolved. | +| DNS_CONTAINER_NAME | Default set to rac-dnsserver. Set this env variable when you want to set name for dns container. | +| DNS_HOST_NAME | Default set to racdns. Set this env variable when you want to set dns container host name. | +| DNS_IMAGE_NAME | Default set to "localhost/oracle/rac-dnsserver:latest". Set this env variable when you want to set dns image name. | +| RAC_NODE_NAME_PREFIXP | Default set to racnodep. Set this env variable when you want to set different prefix being used for DNS podman container resolutions. | +| DNS_DOMAIN | Default set to example.info. Set this env variable when you want to set dns domain. | +| PUBLIC_NETWORK_NAME | Default set to rac_pub1_nw. Set this env variable when you want to set public podman network name for RAC. | +| PUBLIC_NETWORK_SUBNET | Default set to 10.0.20.0/24. Set this env variable when you want to set public network subnet. | +| PRIVATE1_NETWORK_NAME | Default set to rac_priv1_nw. 
Set this env variable when you want to specify first private network name. | +| PRIVATE1_NETWORK_SUBNET | Default set to 192.168.17.0/24. Set this env variable when you want to set first private network subnet. | +| PRIVATE2_NETWORK_NAME | Default set to rac_priv2_nw. Set this env variable when you want to set second private network name. | +| PRIVATE2_NETWORK_SUBNET | Default set to 192.168.18.0/24. Set this env variable when you want to set second private network subnet. | +| RACNODE1_CONTAINER_NAME | Default set to racnodep1. Set this env variable when you want to set container name for first RAC container. | +| RACNODE1_HOST_NAME | Default set to racnodep1. Set this env variable when you want to set host name for first RAC container. | +| RACNODE1_PUBLIC_IP | Default set to 10.0.20.170. Set this env variable when you want to set public ip first RAC container. | +| RACNODE1_CRS_PRIVATE_IP1 | Default set to 192.168.17.170. Set this env variable when you want to set private ip for the first private network for first RAC container. | +| RACNODE1_CRS_PRIVATE_IP2 | Default set to 192.168.18.170. Set this env variable when you want to set private ip for the second private network for first RAC container. | +| INSTALL_NODE | Default set to racnodep1. Set this env variable to any of RAC container, but this will remain same across the RAC Cluster for both nodes where actual RAC cluster installation will happen. | +| RAC_IMAGE_NAME | Default set to localhost/oracle/database-rac:21.0.0. Set this env variable when you want to specify RAC Image name. | +| CRS_NODES | Default set to "pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip". Set this env variable to value in format as used here for all the nodes part of RAC Cluster Setup. | +| SCAN_NAME | Default set to racnodepc1-scan. Set this env variable when you want to specify resolvable scan name from DNS. | +| CRS_ASM_DISCOVERY_STRING | Default set to /oradata with NFS Storage devices. Default set to /dev/asm-disk* for BlockDevices. This specifies the discovery string for ASM. Do not change this unless you have modified podman-compose.yml to find different discovery string. | +| CRS_ASM_DEVICE_LIST | Default set to /oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img and is used with NFS Storage Devices. Do not change this. | +| ASM_DISK1 | Default set to /dev/oracleoci/oraclevdd. Set this env variable when you want to specify first asm disk in block devices setup. | +| ASM_DISK2 | Default set to /dev/oracleoci/oraclevde. Set this env variable when you want to specify second asm disk in block devices setup. | +| RACNODE2_CONTAINER_NAME | Default set to racnodep2. Set this env variable when you want to set container name for second RAC container. | +| RACNODE2_HOST_NAME | Default set to racnodep2. Set this env variable when you want to set host name for second RAC container. | +| RACNODE2_PUBLIC_IP | Default set to 10.0.20.171. Set this env variable when you want to set public ip for second RAC container. | +| RACNODE2_CRS_PRIVATE_IP1 | Default set to 192.168.17.171. Set this env variable when you want to set first private ip for second RAC container. | +| RACNODE2_CRS_PRIVATE_IP2 | Default set to 192.168.18.171. Set this env variable when you want to set second private ip for second RAC container. | +| PWD_SECRET_FILE | Default set to /opt/.secrets/pwdfile.enc. Do not change this. | +| KEY_SECRET_FILE | Default set to /opt/.secrets/key.pem. 
Do not change this. | +| CMAN_CONTAINER_NAME | Default set to racnodepc1-cman. Set this env variable when you want to set connection manager container name. | +| CMAN_HOST_NAME | Default set to racnodepc1-cman. Set this env variable when you want to set hostname for connection manager container. | +| CMAN_IMAGE_NAME | Default set to "localhost/oracle/client-cman:21.0.0". Set this env variable when you want to set connection manager image name. | +| CMAN_PUBLIC_IP | Default set to 10.0.20.15. Set this env variable when you want to set public ip for connection manager container. | +| CMAN_PUBLIC_HOSTNAME | Default set to racnodepc1-cman. Set this env variable when you want to set public hostname for connection manager container. | +| DB_HOSTDETAILS | Default set to HOST=racnodepc1-scan:RULE_ACT=accept,HOST=racnodep1:IP=10.0.20.170. Set this env variable when you want to set details for DB host to be set up with connection manager container. | +| STORAGE_CONTAINER_NAME | Default set to racnode-storage. Set this env variable when you want to set container name storage container. | +| STORAGE_HOST_NAME | Default set to racnode-storage. Set this env variable when you want to set hostname for storage container. | +| STORAGE_IMAGE_NAME | Default set to "localhost/oracle/rac-storage-server:latest". Set this env variable when you want to set storage image name. | +| ORACLE_DBNAME | Default set to ORCLCDB. Set this env variable when you want to set RAC DB Name. | +| STORAGE_PRIVATE_IP | Default set to 192.168.17.80. Set this env variable when you want to set private ip for storage container. | +| NFS_STORAGE_VOLUME | Default set to /scratch/stage/rac-storage/$ORACLE_DBNAME. Set this env variable when you want to specify path used by NFS storage container. Must be at least 50 GB of space. | +| DB_SERVICE | Default set to service:soepdb. Set this env variable when you want to specify database service to be created in this format of . | +| EXISTING_CLS_NODE | Default set to "racnodep1,racnodep2" and used only during node addition. | + +## License + +All scripts and files hosted in this repository which are required to build the container images are, unless otherwise noted, released under UPL 1.0 license. + +## Copyright + +Copyright (c) 2014-2024 Oracle and/or its affiliates. \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/developers/OTHERS.md b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/developers/OTHERS.md new file mode 100644 index 0000000000..af0348230d --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/developers/OTHERS.md @@ -0,0 +1,254 @@ +# Oracle Real Application Clusters in Linux Containers for Developers + +Learn about container deployment options for Oracle Real Application Clusters (Oracle RAC) Release 21c (v21.3). + +## Overview of Running Oracle RAC in Containers + +Oracle Real Application Clusters (Oracle RAC) is an option for the award-winning Oracle Database Enterprise Edition. Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide highly scalable and available database solutions for all business applications. 
+ +Oracle RAC uses Oracle Clusterware as a portable cluster software that allows clustering of independent servers so that they cooperate as a single system, and Oracle Automatic Storage Management (Oracle ASM) to provide simplified storage management that is consistent across all servers and storage platforms. +Oracle Clusterware and Oracle ASM are part of the Oracle Grid Infrastructure, which bundles both solutions in an easy-to-deploy software package. + +For more information on Oracle RAC Database 21c, refer to the [Oracle Database documentation](http://docs.oracle.com/en/database/). + +This guide helps you install Oracle RAC on Containers on Host Machines as explained in detail below. With the current release, you prepare the host machine, build or use pre-built Oracle RAC Container Images v21.3, and set up Oracle RAC on Single or Multiple Host machines with Oracle ASM. +In this installation guide, we use [Podman](https://docs.podman.io/en/v3.0/) to create Oracle RAC Containers and manage them. + +## Using this Documentation +To create an Oracle RAC environment, follow these steps: + +- [Oracle Real Application Clusters in Linux Containers for Developers](#oracle-real-application-clusters-in-linux-containers-for-developers) + - [Overview of Running Oracle RAC in Containers](#overview-of-running-oracle-rac-in-containers) + - [Using this Documentation](#using-this-documentation) + - [Preparation Steps for Running Oracle RAC in Containers](#preparation-steps-for-running-oracle-rac-database-in-containers) + - [Getting Oracle RAC Database Container Images](#getting-oracle-rac-database-container-images) + - [Building Oracle RAC Database Container Image](#building-oracle-rac-database-container-image) + - [Building Oracle RAC Database Container Slim Image](#building-oracle-rac-database-container-slim-image) + - [Network Management](#network-management) + - [Password Management](#password-management) + - [Oracle RAC on Containers Deployment Scenarios](#oracle-rac-on-containers-deployment-scenarios) + - [Oracle RAC Containers on Podman](#oracle-rac-containers-on-podman) + - [Setup Using Oracle RAC Image](#1-setup-using-oracle-rac-container-image) + - [Setup Using Oracle RAC Slim Image](#2-setup-using-oracle-rac-container-slim-image) + - [Connecting to an Oracle RAC Database](#connecting-to-an-oracle-rac-database) + - [Deletion of Node from Oracle RAC Cluster](#deletion-of-node-from-oracle-rac-cluster) + - [Building a Patched Oracle RAC Container Image](#building-a-patched-oracle-rac-container-image) + - [Sample Container Files for Older Releases](#sample-container-files-for-older-releases) + - [Cleanup](#cleanup) + - [Support](#support) + - [License](#license) + - [Copyright](#copyright) + +## Preparation Steps for Running Oracle RAC Database in Containers + +Before you proceed to the next section, you must complete each of the steps listed in this section and complete the following prerequisites. 
+ +* Refer to the following sections in the publication [Oracle Real Application Clusters Installation Guide](https://docs.oracle.com/cd/F39414_01/racpd/oracle-real-application-clusters-installation-guide-podman-oracle-linux-x86-64.pdf) for Podman Oracle Linux x86-64 to complete the preparation steps for Oracle RAC on Container deployment: + + * Overview of Oracle RAC on Podman + * Host Preparation for Oracle RAC on Podman + * Podman Host Server Configuration + * Podman Containers and Oracle RAC Nodes + * Provisioning the Podman Host Server + * Podman Host Preparation + * Preparing for Podman Container Installation + * Installing Podman Engine + * Allocating Linux Resources for Oracle Grid Infrastructure Deployment + * How to Configure Podman for SELinux Mode +* Install `git` from dnf or yum repository and clone the git repo. We clone this repo on a path called `` and refer here. +* If you are planning to use NFS storage for OCR, Voting Disk, and Database files, then configure NFS storage and export at least one NFS mount. You can also use the `/docker-images/OracleDatabase/RAC/OracleRACStorageServer` container for the shared file system on NFS. Refer [OracleRACStorageServer](../OracleRACStorageServer/README.md). + +* If SELinux is enabled on the Podman host, you must create an SELinux policy for Oracle RAC on Podman. For details about this procedure, see `How to Configure Podman for SELinux Mode` in the publication [Oracle Real Application Clusters Installation Guide for Podman Oracle Linux x86-64](https://docs.oracle.com/en/database/oracle/oracle-database/21/racpd/target-configuration-oracle-rac-podman.html#GUID-59138DF8-3781-4033-A38F-E0466884D008). +Also, when you are performing the installation using any files from a Podman host machine where SELinux is enabled, make sure they are labeled correctly with `container_file_t` context. You can use `ls -lZ ` to see the security context set on files. + +* To resolve VIPs and SCAN IPs, in this guide we use a DNS container. Before proceeding to the next step, create a [DNS server container](../OracleDNSServer/README.md). +If you have a preconfigured DNS server in your environment, then you can replace `-e DNS_SERVERS=10.0.20.25`, `--dns=10.0.20.25`, `-e DOMAIN=example.info`, and `--dns-search=example.info` parameters in the examples in this guide with the `DOMAIN_NAME` and `DNS_SERVER` based on your environment. + +* The Oracle RAC `Containerfile` does not contain any Oracle software binaries. Download the following software from the [Oracle Technology Network](https://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html), if you are planning to build Oracle RAC Container Images from the next section. +However, if you are using pre-built RAC images from the Oracle Container Registry, you can skip this step. + - Oracle Grid Infrastructure 21c (21.3) for Linux x86-64 + - Oracle Database 21c (21.3) for Linux x86-64 + +**Notes** + +- **For testing purposes only**, use the Oracle `DNSServer` Image to deploy a container providing DNS resolution. Refer [OracleDNSServer](../OracleDNSServer/README.md) for details. +- `OracleRACStorageServer` container image can be used **only for testing purposes**. Refer [OracleRACStorageServer](../OracleRACStorageServer/README.md) for details. 
+- If the Podman bridge network is not available outside your host, you can use the Oracle Connection Manager [CMAN image](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleConnectionManager) to access the Oracle RAC Database from outside the host. + +## Getting Oracle RAC Database Container Images + +Oracle RAC is supported for production use on Podman starting with Oracle Database 19c (19.16), and Oracle Database 21c (21.7). You can also deploy Oracle RAC on Podman using the pre-built images available on the Oracle Container Registry. +Refer to [this documentation](https://docs.oracle.com/en/operating-systems/oracle-linux/docker/docker-UsingDockerRegistries.html#docker-registry) for details on using the Oracle Container Registry. + +Example of pulling an Oracle RAC Image from the Oracle Container Registry: +```bash +# For Oracle RAC Container Image +podman pull container-registry.oracle.com/database/rac:21.16 +podman tag container-registry.oracle.com/database/rac:21.16 localhost/oracle/database-rac:21.3.0 +# For Oracle RAC Container Slim Image +podman pull container-registry.oracle.com/database/rac:21.16-slim +podman tag container-registry.oracle.com/database/rac:21.16-slim localhost/oracle/database-rac:21.3.0-slim +``` + +If you are using pre-built Oracle RAC images from the Oracle Container Registry, then you can skip the section that follows where we build the container images. + +If you want to build the latest Oracle RAC Image from this Github repository, instead of a pre-built image, then then follow these instructions. + +**IMPORTANT :** This section assumes that you have completed all of the prerequisites in [Preparation Steps for running Oracle RAC Database in containers](#preparation-steps-for-running-oracle-rac-database-in-containers) and completed all the steps, based on your environment. Ensure that you do not uncompress the binaries and patches manually before building the Oracle RAC Image. + +To assist in building the images, you can use the [`buildContainerImage.sh`](./containerfiles/buildContainerImage.sh) script. See the following for instructions and usage. + +### Building Oracle RAC Database Container Image + +In this document, Oracle RAC Database Container Image refers to an Oracle RAC Database Container Image with Oracle Grid Infrastructure and Oracle Database software binaries installed during Oracle RAC Podman image creation. The resulting images will contain the Oracle Grid Infrastructure and Oracle RAC Database software binaries. +Before you begin, you must download grid and database binaries and stage them under `/docker-images/OracleDatabase/RAC/OracleRealApplicationCluster/containerfiles/`. + +```bash + ./buildContainerImage.sh -v +``` +Example: Building Oracle RAC image for v 21.3.0- +```bash + ./buildContainerImage.sh -v 21.3.0 +``` + +### Building Oracle RAC Database Container Slim Image +In this document, an Oracle RAC container slim image refers to a container image that does not include installing Oracle Grid Infrastructure and Oracle Database during the Oracle RAC image creation. To build an Oracle RAC slim image that doesn't contain the Oracle RAC Database and Grid infrastructure software, run the following command: +```bash + ./buildContainerImage.sh -v -i -o '--build-arg SLIMMING=true' +``` + Example: Building Oracle Slim Image for v 21.3.0- + ```bash + ./buildContainerImage.sh -v 21.3.0 -i -o '--build-arg SLIMMING=true' + ``` + To build an Oracle RAC slim image, you must use `--build-arg SLIMMING=true`. 
+ To change the base image for building Oracle RAC images, you must use `--build-arg BASE_OL_IMAGE=oraclelinux:9`. + +**Notes** +- Usage of `./buildContainerImage.sh`- + ```text + -v: version to build + -i: ignore the MD5 checksums + -t: user-defined image name and tag (e.g., image_name:tag). Default is set to `oracle/database-rac:` for RAC Image and `oracle/database-rac:-slim` for RAC slim image. + -o: passes on container build option (e.g., --build-arg SLIMMIMG=true for slim,--build-arg BASE_OL_IMAGE=oraclelinux:9 to change base image). The default is "--build-arg SLIMMING=false" + ``` +- Ensure that you have enough space in `/var/lib/containers` while building the Oracle RAC image. Also, if required use `export TMPDIR=` for Podman to refer to any other folder as the temporary podman cache location instead of the default '/tmp' location. +- After the `21.3.0` Oracle RAC container image is built, to apply the 21c RU and build the 21c patched image, refer to [Example of how to create a patched database image](./samples/applypatch/README.md). +- If you are behind a proxy wall, then you must set the `https_proxy` or `http_proxy` environment variable based on your environment before building the image. +- In the slim image case, the resulting images will not contain the Oracle Grid Infrastructure binaries and Oracle RAC Database binaries. + +## Network Management + +Before you start the installation, you must plan your private and public network. Refer to section `Podman Host Preparation` in the publication [Oracle Real Application Clusters Installation Guide](https://docs.oracle.com/cd/F39414_01/racpd/oracle-real-application-clusters-installation-guide-podman-oracle-linux-x86-64.pdf) for Podman Oracle Linux x86-64. +You can create a `network bridge` on every container host so containers running within that host can communicate with each other. For example: create `rac_pub1_nw` for the public network (`10.0.20.0/24`) and `rac_priv1_nw` (`192.168.17.0/24`) for a private network. +You can use any network subnet for testing. In this document we define the public network on `10.0.20.0/24` and the private network on `192.168.17.0/24`. + +```bash + podman network create --driver=bridge --subnet=10.0.20.0/24 rac_pub1_nw + podman network create --driver=bridge --subnet=192.168.17.0/24 rac_priv1_nw --disable-dns --internal + podman network create --driver=bridge --subnet=192.168.18.0/24 rac_priv2_nw --disable-dns --internal + +``` + +- To run Oracle RAC using Oracle Container Runtime for Docker on multiple hosts, you must create one of the following: + +a. [Podman macvlan network](https://docs.podman.io/en/latest/markdown/podman-network-create.1.html) using the following commands: + +```bash + podman network create -d macvlan --subnet=10.0.20.0/24 --gateway=10.0.20.1 -o parent=ens5 rac_pub1_nw + podman network create -d macvlan --subnet=192.168.17.0/24 --gateway=192.168.17.1 -o parent=ens6 rac_priv1_nw --disable-dns --internal + podman network create -d macvlan --subnet=192.168.18.0/24 --gateway=192.168.18.1 -o parent=ens7 rac_priv2_nw --disable-dns --internal +``` + + +b. 
[Podman ipvlan network](https://docs.docker.com/network/drivers/ipvlan/) using the following commands: +```bash + podman network create -d ipvlan --subnet=10.0.20.0/24 -o parent=ens5 rac_pub1_nw + podman network create -d ipvlan --subnet=192.168.17.0/24 -o parent=ens6 rac_priv1_nw --disable-dns --internal + podman network create -d ipvlan --subnet=192.168.18.0/24 -o parent=ens7 rac_priv2_nw --disable-dns --internal + ``` + +## Password Management +- Specify the secret volume for resetting the grid, oracle, and database user password during node creation or node addition. The volume can be a shared volume among all the containers. For example: + +```bash +mkdir /opt/.secrets/ +``` +- Generate a password file - Edit the `/opt/.secrets/pwdfile.txt` and seed the password for the grid, oracle, and database users. For this deployment scenario, it will be a common password for the grid, oracle, and database users. Run the command: + +```bash +cd /opt/.secrets +openssl genrsa -out key.pem +openssl rsa -in key.pem -out key.pub -pubout +openssl pkeyutl -in pwdfile.txt -out pwdfile.enc -pubin -inkey key.pub -encrypt +rm -rf /opt/.secrets/pwdfile.txt +``` +- Oracle recommends using Podman secrets inside the containers. To create Podman secrets, run the following command: + +```bash +podman secret create pwdsecret /opt/.secrets/pwdfile.enc +podman secret create keysecret /opt/.secrets/key.pem + +podman secret ls +ID NAME DRIVER CREATED UPDATED +7eb7f573905283c808bdabaff keysecret file 13 hours ago 13 hours ago +e3ac963fd736d8bc01dcd44dd pwdsecret file 13 hours ago 13 hours ago + +podman secret inspect +``` +Notes: +- In this example we use `pwdsecret` as the common password for SSH setup between containers for the oracle, grid, and Oracle RAC database users. Also, `keysecret` is used to extract secrets inside the Oracle RAC Containers. + +## Oracle RAC on Containers Deployment Scenarios +Oracle RAC can be deployed with various scenarios, such as using podman vs podman-compose, NFS vs Block Devices, Oracle RAC Container Image vs Slim Image, with User Defined Response files, and so on. All are covered in detail in the instructions that follow. + +### Oracle RAC Containers on Podman +#### [1. Setup Using Oracle RAC Container Image](./rac-container/racimage/README.md) +#### [2. Setup Using Oracle RAC Container Slim Image](./rac-container/racslimimage/README.md) + +### Oracle RAC Containers on Podman Compose +#### [1. Setup Using Oracle RAC Container Image](../samples/rac-compose/racimage/README.md) +#### [2. Setup Using Oracle RAC Container Slim Image](../samples/rac-compose/racslimimage/README.md) + +## Connecting to an Oracle RAC Database + +**IMPORTANT:** This section assumes that you have successfully created an Oracle RAC cluster using the preceding sections. +Refer to the [README](./CONNECTING.md) for instructions on how to connect to the Oracle RAC Database. + +## Deletion of Node from Oracle RAC Cluster +Refer to [README](./DELETION.md) for instructions on how to delete a Node from Existing Oracle RAC Container Cluster. + +## Building a Patched Oracle RAC Container Image + +If you want to build a patched image based on a base 21.3.0 container image, then refer to the GitHub page [Example of how to create a patched database image](./samples/applypatch/README.md). 
+ +## Sample Container Files for Older Releases + +This project offers example container files for Oracle Grid Infrastructure and Oracle Real Application Clusters for dev and test: + +* Oracle Database 21c Oracle Grid Infrastructure (21.3) for Linux x86-64 +* Oracle Database 21c (21.3) for Linux x86-64 +* Oracle Database 19c Oracle Grid Infrastructure (19.3) for Linux x86-64 +* Oracle Database 19c (19.3) for Linux x86-64 +* Oracle Database 18c Oracle Grid Infrastructure (18.3) for Linux x86-64 +* Oracle Database 18c (18.3) for Linux x86-64 +* Oracle Database 12c Release 2 Oracle Grid Infrastructure (12.2.0.1.0) for Linux x86-64 +* Oracle Database 12c Release 2 (12.2.0.1.0) Enterprise Edition for Linux x86-64 + +To install older releases of Oracle RAC on Podman or Oracle RAC on Docker, refer to the [README.md](./README_1.md) + +## Cleanup +Refer to [README](./CLEANUP.md) for instructions on how to connect to an Oracle RAC Database Container Environment. + +## Support + +At the time of this release, Oracle RAC on Podman is supported for Oracle Linux 9.3 or later. To see the current Linux support certifications, refer to [Oracle RAC on Podman Documentation](https://docs.oracle.com/en/database/oracle/oracle-database/21/install-and-upgrade.html) + +## License + +To download and run Oracle Grid Infrastructure and Oracle Database, regardless of whether inside or outside a container, you must download the binaries from the Oracle website and accept the license indicated on that page. + +All scripts and files hosted in this repository that are required to build the container images are, unless otherwise noted, released under a UPL 1.0 license. + +## Copyright + +Copyright (c) 2014-2024 Oracle and/or its affiliates. diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/developers/README.md b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/developers/README.md new file mode 100644 index 0000000000..f85bb91eae --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/developers/README.md @@ -0,0 +1,300 @@ +# Oracle Real Application Clusters in Linux Containers for Developers + +Learn about container deployment options for Oracle Real Application Clusters (Oracle RAC) Release 21c (v26.0) + +## Overview of Running Oracle RAC in Containers + +Oracle Real Application Clusters (Oracle RAC) is an option for the award-winning Oracle Database Enterprise Edition. Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide highly scalable and available database solutions for all business applications. +Oracle RAC uses Oracle Clusterware as a portable cluster software that allows clustering of independent servers so that they cooperate as a single system and Oracle Automatic Storage Management (Oracle ASM) to provide simplified storage management that is consistent across all servers and storage platforms. +Oracle Clusterware and Oracle ASM are part of the Oracle Grid Infrastructure, which bundles both solutions in an easy-to-deploy software package. For more information on Oracle RAC Database 21c refer to the [Oracle Database documentation](http://docs.oracle.com/en/database/). + +This guide helps you install Oracle RAC on Containers on Host Machines as explained in detail below. With the current release, you prepare the host machine, build or use pre-built Oracle RAC Container Images v26.0, and setup Oracle RAC on Single or Multiple Host machines with Oracle ASM. 
+In this installation guide, we use [Podman](https://docs.podman.io/en/v3.0/) to create Oracle RAC Containers and manage them. + +## Using this Documentation +To create an Oracle RAC environment, follow these steps: + +- [Oracle Real Application Clusters in Linux Containers for Developers](#oracle-real-application-clusters-in-linux-containers-for-developers) + - [Overview of Running Oracle RAC in Containers](#overview-of-running-oracle-rac-in-containers) + - [Using this Documentation](#using-this-documentation) + - [Before you begin](#before-you-begin) + - [QuickStart](#quickstart) + - [Getting Oracle RAC Database Container Images](#getting-oracle-rac-database-container-images) + - [Networking in Oracle RAC Podman Container Environment](#networking-in-oracle-rac-podman-container-environment) + - [Deploy Oracle RAC 2 Node Environment with NFS Storage Container](#deploy-oracle-rac-2-node-environment-with-nfs-storage-container) + - [Deploy Oracle RAC 2 Node Environment with BlockDevices](#deploy-oracle-rac-2-node-environment-with-blockdevices) + - [Validating Oracle RAC Environment](#validating-oracle-rac-environment) + - [Connecting to an Oracle RAC Database](#connecting-to-an-oracle-rac-database) + - [Environment Variables Explained for above 2 Node RAC on Podman Compose](#environment-variables-explained-for-above-2-node-rac-on-podman-compose) + - [Cleanup](#cleanup) + - [Support](#support) + - [License](#license) + - [Copyright](#copyright) + +## Before you begin +- Before proceeding further, the below prerequisites related to the Oracle RAC (Real Application Cluster) Podman host Environment need to be setup as a preparation steps for the Podman host machine for Oracle RAC Containers. For more details related to the preparation of the host machine, refer to [Preparation Steps for running Oracle RAC Database in containers](../../README.md#preparation-steps-for-running-oracle-rac-database-in-containers). +We have pre-created script `setup_rac_host.sh` which will prepare the podman host with the following pre-requisites- + - Validate Host machine for supported Os version(OL >9.3), Kernel(>UEKR7), Memory(>32GB), Swap(>32GB), shm(>4GB) etc. + - Update /etc/sysctl.conf + - Setup node directories for Slim Image + - Setup chronyd service + - Setup tsc clock (if available). + - Install Podman + - Install Podman Compose + - Setup and Load SELinux modules + - Create Oracle RAC Podman secrets + +**Note :** All below steps or commands in this QuickStart needs to be run as a `sudo` or `root` user. +* In this quickstart, our working directory is `/docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/containerfiles` from where all commands are executed. +* Set `secret-password` of your choice below, which is going to be used as a password for the Oracle RAC Container environment. + Execute below command- + ```bash + export RAC_SECRET= + ``` + +- To prepare podman host machine using a pre-created script, copy the file `setup_rac_host.sh` from [/docker-images/OracleDatabase/RAC/ +OracleRealApplicationClusters/containerfiles/setup_rac_host.sh](../containerfiles/setup_rac_host.sh) and execute below - + ```bash + ./setup_rac_host.sh -prepare-rac-env + ``` + Logs- + ```bash + INFO: Finished setting up the pre-requisites for Podman-Host + ``` + +## Getting Oracle RAC Database Container Images + +Oracle RAC is supported for production use on Podman starting with Oracle Database 19c (19.16), and Oracle Database 21c (21.7). 
You can also deploy Oracle RAC on Podman using the pre-built images available on the Oracle Container Registry. +Refer [this documentation](https://docs.oracle.com/en/operating-systems/oracle-linux/docker/docker-UsingDockerRegistries.html#docker-registry) for details on using Oracle Container Registry and [Getting Oracle RAC Database Container Images](../../README.md#getting-oracle-rac-database-container-images) + +Example of pulling an Oracle RAC Image from the Oracle Container Registry: +```bash +# For Oracle RAC Container Image- +podman pull phx.ocir.io/intsanjaysingh/oracle/database-rac:21.3.0 +podman tag phx.ocir.io/intsanjaysingh/oracle/database-rac:21.3.0 localhost/oracle/database-rac:21.3.0 +``` + +**Notes** +- Use the Oracle `DNSServer` Image to deploy a container providing DNS resolutions. Refer [OracleDNSServer](../../../OracleDNSServer/README.md) +- `OracleRACStorageServer` container image can be used for deploy Oracle RAC with NFS Storage. Refer [OracleRACStorageServer](../../../OracleRACStorageServer/README.md) for details. +- If the Podman bridge network is not available outside your host, you can use the Oracle Connection Manager [CMAN image](../../../OracleConnectionManager/README.md) to access the Oracle RAC Database from outside the host. + +- When Podman Images are ready like the below example used in this quickstart developer guide, you can proceed to the next steps- + ```bash + podman images + localhost/oracle/client-cman 21.3.0 7b095637d7b6 About a minute ago 2.08 GB + localhost/oracle/database-rac 21.3.0 dcda5cf71b23 12 hours ago 9.33 GB + localhost/oracle/rac-storage-server latest d233b08a8aed 12 hours ago 443 MB + localhost/oracle/rac-dnsserver latest 7d2301d7ea53 13 hours ago 279 MB + ``` + + +## QuickStart +To become familiar with Oracle RAC on Containers, Oracle recommends that you first start with this QuickStart. + +After you become familiar with Oracle RAC on Containers, you can explore more advanced setups, deployments, features, and so on, as explained in detail in the [Oracle Real Application Clusters](../../../OracleRealApplicationClusters/README.md) + +* To resolve VIPs and SCAN IPs, in this guide we use a DNS container. Before proceeding to the next step, create a [DNS server container](../OracleDNSServer/README.md). +If you have a preconfigured DNS server in your environment, then you can replace `-e DNS_SERVERS=10.0.20.25`, `--dns=10.0.20.25`, `-e DOMAIN=example.info` and `--dns-search=example.info` parameters in the examples in this guide with the `DOMAIN_NAME` and `DNS_SERVER` based on your environment. + +## Networking in Oracle RAC Podman Container Environment +- In this Quick Start, we will create below subnets for Oracle RAC Podman Container Environment- + + | Network Name | Subnet CIDR | Description | + |----------------|--------------|--------------------------------------| + | rac_pub1_nw | 10.0.20.0/24 | Public network for Oracle RAC Podman Container Environment | + | rac_priv1_nw | 192.168.17.0/24 | First private network for Oracle RAC Podman Container Environment | + | rac_priv2_nw | 192.168.18.0/24 | Second private network for Oracle RAC Podman Container Environment | + +## Deploy Oracle RAC 2 Node Environment with NFS Storage Container +- Copy `podman-compose.yml` file from this [/docker-images/OracleDatabase/RAC/ +OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/nfsdevices/podman-compose.yml](../samples/rac-compose/racimage/withoutresponsefiles/nfsdevices/podman-compose.yml) in your working directory. 
+- Execute the below command from your working directory to export the required environment variables required by the compose file in this quickstart- + ```bash + source ./setup_rac_host.sh -nfs-env + ``` + Logs - + ```bash + INFO: NFS Environment variables setup completed successfully. + ``` + Note: In this example, `DB_SERVICE` is set to as default as an example. If you want to change to a different name, set like below - + ```bash + export DB_SERVICE=service: + ``` + + Note: + - In this example, we have used the below path for NFS Storage Volume. This path must have a minimum 100GB of free space. If you want to change it, export by changing it as per your environment before proceeding further - + ```bash + export ORACLE_DBNAME=ORCLCDB + export NFS_STORAGE_VOLUME="/scratch/stage/rac-storage/$ORACLE_DBNAME" + ``` + - If SELinux host is enabled on the machine then execute the following- + ```bash + semanage fcontext -a -t container_file_t /scratch/stage/rac-storage/$ORACLE_DBNAME + restorecon -v /scratch/stage/rac-storage/$ORACLE_DBNAME + ``` +- Execute below to create Podman Networks specific to RAC in this quickstart- + ```bash + ./setup_rac_host.sh -networks + ``` + Logs - + ```bash + INFO: Oracle RAC Container Networks setup successfully + ``` +- Execute below to deploy DNS Containers- + ```bash + ./setup_rac_host.sh -dns + ``` + Logs - + ```bash + ########################################## + INFO: DNS Container is setup successfully. + ########################################## + ``` +- Execute below to deploy Storage Containers- + + ```bash + ./setup_rac_host.sh -storage + ``` + Logs- + ```bash + ############################################################ + INFO: NFS Storage Container exporting /oradata successfully. + ############################################################ + racstorage + ``` +- Execute below to deploy Oracle RAC Containers- + ```bash + ./setup_rac_host.sh -rac + ``` + Logs- + ```bash + ############################################### + INFO: Oracle RAC Containers setup successfully. + ############################################### + ``` +- Optional: If the Podman bridge network is not available outside your host, you can use the Oracle Connection Manager to access the Oracle RAC Database from outside the host. Execute below if you want to deploy CMAN Container as well- + ```bash + ./setup_rac_host.sh -cman + ``` + Logs- + ```bash + ########################################### + INFO: CMAN Container is setup successfully. + ########################################### + ``` +- If you want to cleanup the RAC Container environment, then execute below- + ```bash + ./setup_rac_host.sh -cleanup + ``` + This will cleanup Oracle RAC Containers, Oracle Storage Volume, Oracle RAC Podman Networks, etc. + + Logs- + ```bash + INFO: Oracle Container RAC Environment Cleanup Successfully + ``` + +## Deploy Oracle RAC 2 Node Environment with BlockDevices + +- Copy `podman-compose.yml` file from [/docker-images/OracleDatabase/RAC/ +OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/blockdevices/podman-compose.yml](../samples/rac-compose/racimage/withoutresponsefiles/blockdevices/podman-compose.yml) in your working directory. +- Execute the below command to export the required environment variables required by the compose file in this quickstart- + ```bash + source ./setup_rac_host.sh -blockdevices-env + ``` + Logs- + ```bash + INFO: BlockDevices Environment variables setup completed successfully. 
+ ``` + Note: In this example, DB_SERVICE is set to service:soepdb. If you want to change to a different name, set it like `export DB_SERVICE=service:` + + Note: In this example, we have used the below asm disks. If you want to change it, export by changing it as per your environment before proceeding further - + ```bash + export ASM_DISK1="/dev/oracleoci/oraclevdd" + export ASM_DISK2="/dev/oracleoci/oraclevde" + ``` +- Execute below to create Podman Networks specific to RAC in this quickstart- + ```bash + ./setup_rac_host.sh -networks + ``` + Logs- + ```bash + INFO: Oracle RAC Container Networks setup successfully + ``` + +- Execute below to deploy DNS Containers- + ```bash + ./setup_rac_host.sh -dns + ``` + Logs- + ```bash + ########################################## + INFO: DNS Container is setup successfully. + ########################################## + ``` +- Execute below to deploy Oracle RAC Containers- + ```bash + ./setup_rac_host.sh -rac + ``` + Logs- + ```bash + ############################################### + INFO: Oracle RAC Containers setup successfully. + ############################################### + ``` +- Optional: If the Podman bridge network is not available outside your host, you can use the Oracle Connection Manager to access the Oracle RAC Database from outside the host. Execute below if you want to deploy CMAN Container as well- + ```bash + ./setup_rac_host.sh -cman + ``` + Logs- + ```bash + ########################################### + INFO: CMAN Container is setup successfully. + ########################################### + ``` +- If you want to Cleanup the RAC Container environment , then execute the below- + ```bash + ./setup_rac_host.sh -cleanup + ``` + This will cleanup Oracle RAC Containers, Oracle RAC Podman Networks, etc. + Logs- + ```bash + INFO: Oracle Container RAC Environment Cleanup Successfully + ``` + +## Validating Oracle RAC Environment +You can validate if the environment is healthy by running the below command- +```bash +podman ps -a + +58642afb20eb localhost/oracle/rac-dnsserver:latest /bin/sh -c exec $... 23 hours ago Up 23 hours (healthy) rac-dnsserver +a192f4e9092a localhost/oracle/database-rac:21.3.0 10 hours ago Up 10 hours (healthy) racnodep1 +745679457df5 localhost/oracle/database-rac:21.3.0 10 hours ago Up 10 hours (healthy) racnodep2 +``` +Note: +- Look for `(healthy)` next to container names under the `STATUS` section. + +## Environment Variables Explained for above 2 Node RAC on Podman Compose +Refer to [Environment Variables Explained for Oracle RAC on Podman Compose](./ENVVARIABLESCOMPOSE.md) for the explanation of all the environment variables related to Oracle RAC on Podman Compose. Change or Set these environment variables as per your environment. + +## Connecting to an Oracle RAC Database + +**IMPORTANT:** This section assumes that you have successfully created an Oracle RAC cluster using the preceding sections. +Refer to the [README](../CONNECTING.md) for instructions on how to connect to the Oracle RAC Database. + +## Cleanup +Refer to [README](../CLEANUP.md) for instructions on how to cleanup an Oracle RAC Database Container Environment. + +## Support + +At the time of this release, Oracle RAC on Podman is supported for Oracle Linux 9.3 later. 
To see current Linux support certifications, refer [Oracle RAC on Podman Documentation](https://docs.oracle.com/en/database/oracle/oracle-database/21/install-and-upgrade.html) + +## License + +To download and run Oracle Grid and Database, regardless of whether inside or outside a container, you must download the binaries from the Oracle website and accept the license indicated on that page. + +All scripts and files hosted in this repository that are required to build the container images are, unless otherwise noted, released under a UPL 1.0 license. + +## Copyright + +Copyright (c) 2014-2024 Oracle and/or its affiliates. diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/README.md b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/README.md new file mode 100644 index 0000000000..6195d7634f --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/README.md @@ -0,0 +1,737 @@ +# Oracle RAC on Podman using Oracle RAC Image +=============================================================== + +Refer to the following instructions to set up Oracle RAC on Podman using an Oracle RAC Image for various scenarios. + +- [Oracle RAC on Podman using Oracle RAC Image](#oracle-rac-on-podman-using-oracle-rac-image) + - [Section 1: Prerequisites for Setting up Oracle RAC on Container using Oracle RAC Image](#section-1-prerequisites-for-setting-up-oracle-rac-on-containers-using-oracle-rac-image) + - [Section 2: Deploying Two-node Oracle RAC on Podman using Oracle RAC Image](#section-2-deploying-two-node-oracle-rac-on-podman-using-oracle-rac-image) + - [Section 2.1: Deploying Two-Node Oracle RAC on Podman Using Oracle RAC image Without Using Response Files](#section-21-deploying-two-node-oracle-rac-on-podman-using-an-oracle-rac-image-without-using-response-files) + - [Section 2.1.1: Deploying With Block Devices](#section-211-deploying-with-block-devices) + - [Section 2.1.2: Deploying with NFS Storage Devices](#section-212-deploying-with-nfs-storage-devices) + - [Section 2.2: Deploying Two-node Oracle RAC on Podman Using Oracle RAC Image with User-defined response files](#section-22-deploying-two-node-oracle-rac-setup-on-podman-using-oracle-rac-image-using-user-defined-response-files) + - [Section 2.2.1: Deploying With block devices](#section-221-deploying-with-blockdevices) + - [Section 2.2.2: Deploying with NFS storage devices](#section-222-deploying-with-nfs-storage-devices) + - [Section 3: Attach the Network to Containers](#section-3-attach-the-network-to-containers) + - [Attach the network to racnodep1](#attach-the-network-to-racnodep1) + - [Attach the network to racnodep2](#attach-the-network-to-racnodep2) + - [Section 4: Start the Containers](#section-4-start-the-containers) + - [Section 5: Validate the Oracle RAC Environment](#section-5-validate-the-oracle-rac-environment) + - [Section 6: Connecting to Oracle RAC environment](#section-6-connecting-to-oracle-rac-environment) + - [Section 7: Example of Node Addition to Oracle RAC Containers Based on Oracle RAC Image with block devices](#section-7-example-of-node-addition-to-oracle-rac-containers-based-on-oracle-rac-image-with-block-devices) + - [Section 7.1: Example of node addition to Oracle RAC containers based on Oracle RAC image without Response File](#section-71-example-of-node-addition-to-oracle-rac-containers-based-on-oracle-rac-image-without-response-file) + - [Section 8: Example of Node Addition to Oracle RAC containers Based on Oracle RAC 
Image with NFS Storage Devices](#section-8-example-of-node-addition-to-oracle-rac-containers-based-on-oracle-rac-image-with-nfs-storage-devices)
+  - [Section 8.1: Example of node addition to Oracle RAC containers based on Oracle RAC Image without Response File](#section-81-example-of-node-addition-to-oracle-rac-containers-based-on-oracle-rac-image-without-response-file)
+  - [Environment Variables for Oracle RAC on Containers](#environment-variables-for-oracle-rac-on-containers)
+  - [Cleanup](#cleanup)
+  - [Support](#support)
+  - [License](#license)
+  - [Copyright](#copyright)
+
+## Oracle RAC Setup on Podman using Oracle RAC Image
+
+You can deploy multi-node Oracle RAC using Oracle RAC images either on block devices or on NFS storage devices. You can also choose to deploy the images either by using Response Files that you define, or without using response files. All of these scenarios are demonstrated in detail in this document.
+
+## Section 1: Prerequisites for Setting up Oracle RAC on containers using Oracle RAC image
+**IMPORTANT:** Complete all of the steps specified in this section (customized for your environment) before you proceed to the next section. Completing prerequisite steps is a requirement for successful configuration.
+
+
+* Complete the [Preparation Steps for running Oracle RAC Database in containers](../../../README.md#preparation-steps-for-running-oracle-rac-database-in-containers)
+* If you are planning to use Oracle Connection Manager, then create an Oracle Connection Manager container image. See the [Oracle RAC Oracle Connection Manager README.MD](../../../../OracleConnectionManager/README.md)
+* Ensure the Oracle RAC Image is present. You can either pull and use the Oracle RAC Image from the Oracle Container Registry, or you can create the Oracle RAC Container image by following [Building Oracle RAC Database Container Images](../../../README.md)
+```bash
+# podman images|grep database-rac
+localhost/oracle/database-rac   21.3.0   41239091d2ac   16 minutes ago   9.27 GB
+```
+* Configure the [Network](../../../README.md#network-management).
+* Configure the [Password Management](../../../README.md#password-management).
+
+## Section 2: Deploying Two-node Oracle RAC on Podman Using Oracle RAC Image
+
+Use the instructions that follow to set up Oracle RAC on Podman using an Oracle RAC image for various scenarios, such as deploying with user-defined files or deploying without user-defined files. Oracle RAC setup can also be done either on block devices or on NFS storage devices.
+
+### Section 2.1: Deploying Two-node Oracle RAC on Podman using an Oracle RAC image without using response files
+
+To set up Oracle RAC on Podman using an Oracle RAC Image without providing response files, complete these steps.
+
+#### Section 2.1.1: Deploying With Block Devices
+##### Section 2.1.1.1: Prerequisites for setting up Oracle RAC with block devices
+
+Ensure that you have created at least one Block Device with at least 50 GB of storage space that can be accessed by two Oracle RAC Nodes, and can be shared between them. You can create more block devices in accordance with your requirements and pass those devices and the corresponding environment variables to the `podman create` command, as well as in the Oracle Grid Infrastructure (grid) response files if you use response files.
+**Note:** You can skip this step if you are planning to use NFS storage devices.
+
+Ensure that the ASM devices do not have any existing file system.
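+A quick way to verify this is to inspect each shared device for existing signatures before handing it to ASM. The following is a hedged example that assumes the standard `lsblk` and `blkid` utilities are available on the Podman host, and reuses the shared device path from this guide:
+
+```bash
+# List filesystem signatures on the shared device (the FSTYPE column should be empty)
+lsblk -f /dev/oracleoci/oraclevdd
+# blkid prints nothing when no filesystem or partition signature is present
+blkid /dev/oracleoci/oraclevdd
+```
+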
To clear any other file system from the devices, use the following command: + + ```bash +dd if=/dev/zero of=/dev/oracleoci/oraclevdd bs=8k count=10000 +``` + +Repeat this command on each shared block device. In this example command, `/dev/oracleoci/oraclevdd` is a shared KVM virtual block device. + +##### Section 2.1.1.2: Create Oracle RAC Containers + +Create the Oracle RAC containers using the Oracle RAC image. For details about environment variables, see [Environment Variables Explained](../../../docs/ENVIRONMENTVARIABLES.md) + +You can use the following example to create a container on host `racnodep1`: + +```bash +podman create -t -i \ +--hostname racnodep1 \ +--dns-search "example.info" \ +--dns 10.0.20.25 \ +--shm-size 4G \ +--cpuset-cpus 0-1 \ +--memory 16G \ +--memory-swap 32G \ +--sysctl kernel.shmall=2097152 \ +--sysctl "kernel.sem=250 32000 100 128" \ +--sysctl kernel.shmmax=8589934592 \ +--sysctl kernel.shmmni=4096 \ +--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \ +--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \ +--cap-add=SYS_RESOURCE \ +--cap-add=NET_ADMIN \ +--cap-add=SYS_NICE \ +--cap-add=AUDIT_WRITE \ +--cap-add=AUDIT_CONTROL \ +--cap-add=NET_RAW \ +--secret pwdsecret \ +--secret keysecret \ +--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \ +-e DNS_SERVERS="10.0.20.25" \ +-e DB_SERVICE=service:soepdb \ +-e CRS_PRIVATE_IP1=192.168.17.170 \ +-e CRS_PRIVATE_IP2=192.168.18.170 \ +-e CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" \ +-e SCAN_NAME=racnodepc1-scan \ +-e INIT_SGA_SIZE=3G \ +-e INIT_PGA_SIZE=2G \ +-e INSTALL_NODE=racnodep1 \ +-e DB_PWD_FILE=pwdsecret \ +-e PWD_KEY=keysecret \ +--device=/dev/oracleoci/oraclevdd:/dev/asm-disk1 \ +--device=/dev/oracleoci/oraclevde:/dev/asm-disk2 \ +-e CRS_ASM_DEVICE_LIST=/dev/asm-disk1,/dev/asm-disk2 \ +-e OP_TYPE=setuprac \ +--restart=always \ +--ulimit rtprio=99 \ +--systemd=always \ +--name racnodep1 \ +localhost/oracle/database-rac:21.3.0 +``` +To create another container on host `racnodep2`:, use the following command: + +```bash +podman create -t -i \ +--hostname racnodep2 \ +--dns-search "example.info" \ +--dns 10.0.20.25 \ +--shm-size 4G \ +--cpuset-cpus 0-1 \ +--memory 16G \ +--memory-swap 32G \ +--sysctl kernel.shmall=2097152 \ +--sysctl "kernel.sem=250 32000 100 128" \ +--sysctl kernel.shmmax=8589934592 \ +--sysctl kernel.shmmni=4096 \ +--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \ +--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \ +--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \ +--cap-add=SYS_RESOURCE \ +--cap-add=NET_ADMIN \ +--cap-add=SYS_NICE \ +--cap-add=AUDIT_WRITE \ +--cap-add=AUDIT_CONTROL \ +--cap-add=NET_RAW \ +--secret pwdsecret \ +--secret keysecret \ +-e DNS_SERVERS="10.0.20.25" \ +-e DB_SERVICE=service:soepdb \ +-e CRS_PRIVATE_IP1=192.168.17.171 \ +-e CRS_PRIVATE_IP2=192.168.18.171 \ +-e CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" \ +-e SCAN_NAME=racnodepc1-scan \ +-e INIT_SGA_SIZE=3G \ +-e INIT_PGA_SIZE=2G \ +-e INSTALL_NODE=racnodep1 \ +-e DB_PWD_FILE=pwdsecret \ +-e PWD_KEY=keysecret \ +--device=/dev/oracleoci/oraclevdd:/dev/asm-disk1 \ +--device=/dev/oracleoci/oraclevde:/dev/asm-disk2 \ +-e CRS_ASM_DEVICE_LIST=/dev/asm-disk1,/dev/asm-disk2 \ +-e OP_TYPE=setuprac \ +--restart=always \ +--ulimit rtprio=99 \ +--systemd=always \ +--name racnodep2 \ +localhost/oracle/database-rac:21.3.0 +``` +#### Section 2.1.2: Deploying with NFS Storage Devices + +##### 
Section 2.1.2.1: Prerequisites for setting up Oracle RAC with NFS storage devices + +* Create a NFS Volume to be used for ASM Devices for Oracle RAC. See [Configuring NFS for Storage for Oracle RAC on Podman](https://review.us.oracle.com/review2/Review.html#reviewId=467473;scope=document;status=open,fixed;documentId=4229197) for more details. **Note:** You can skip this step if you are planning to use block devices for storage. + +* Make sure the ASM NFS Storage devices do not have any existing file system. + +##### Section 2.1.2.2: Create Oracle RAC Containers +Create the Oracle RAC containers using the image. For details about environment variables, see [Environment Variables Explained](#environment-variables-for-oracle-rac-on-containers). You can use the following example to create a container on host `racnodep1`: + +```bash +podman create -t -i \ +--hostname racnodep1 \ +--dns-search "example.info" \ +--dns 10.0.20.25 \ +--shm-size 4G \ +--cpuset-cpus 0-1 \ +--memory 16G \ +--memory-swap 32G \ +--sysctl kernel.shmall=2097152 \ +--sysctl "kernel.sem=250 32000 100 128" \ +--sysctl kernel.shmmax=8589934592 \ +--sysctl kernel.shmmni=4096 \ +--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \ +--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \ +--cap-add=SYS_RESOURCE \ +--cap-add=NET_ADMIN \ +--cap-add=SYS_NICE \ +--cap-add=AUDIT_WRITE \ +--cap-add=AUDIT_CONTROL \ +--cap-add=NET_RAW \ +--secret pwdsecret \ +--secret keysecret \ +--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \ +-e DNS_SERVERS="10.0.20.25" \ +-e DB_SERVICE=service:soepdb \ +-e CRS_PRIVATE_IP1=192.168.17.170 \ +-e CRS_PRIVATE_IP2=192.168.18.170 \ +-e CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" \ +-e SCAN_NAME=racnodepc1-scan \ +-e INIT_SGA_SIZE=3G \ +-e INIT_PGA_SIZE=2G \ +-e INSTALL_NODE=racnodep1 \ +-e DB_PWD_FILE=pwdsecret \ +-e PWD_KEY=keysecret \ +--volume racstorage:/oradata \ +-e CRS_ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ +-e CRS_ASM_DISCOVERY_STRING="/oradata/asm_disk*" \ +-e OP_TYPE=setuprac \ +-e ASM_ON_NAS=True \ +--restart=always \ +--ulimit rtprio=99 \ +--systemd=always \ +--name racnodep1 \ +localhost/oracle/database-rac:21.3.0 +``` +To create another container on host `racnodep2`, use the following command: + +```bash +podman create -t -i \ +--hostname racnodep2 \ +--dns-search "example.info" \ +--dns 10.0.20.25 \ +--shm-size 4G \ +--cpuset-cpus 0-1 \ +--memory 16G \ +--memory-swap 32G \ +--sysctl kernel.shmall=2097152 \ +--sysctl "kernel.sem=250 32000 100 128" \ +--sysctl kernel.shmmax=8589934592 \ +--sysctl kernel.shmmni=4096 \ +--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \ +--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \ +--cap-add=SYS_RESOURCE \ +--cap-add=NET_ADMIN \ +--cap-add=SYS_NICE \ +--cap-add=AUDIT_WRITE \ +--cap-add=AUDIT_CONTROL \ +--cap-add=NET_RAW \ +--secret pwdsecret \ +--secret keysecret \ +--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \ +-e DNS_SERVERS="10.0.20.25" \ +-e DB_SERVICE=service:soepdb \ +-e CRS_PRIVATE_IP1=192.168.17.171 \ +-e CRS_PRIVATE_IP2=192.168.18.171 \ +-e CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" \ +-e SCAN_NAME=racnodepc1-scan \ +-e INIT_SGA_SIZE=3G \ +-e INIT_PGA_SIZE=2G \ +-e INSTALL_NODE=racnodep1 \ +-e DB_PWD_FILE=pwdsecret \ +-e PWD_KEY=keysecret \ +--volume racstorage:/oradata \ +-e 
CRS_ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \
+-e CRS_ASM_DISCOVERY_STRING="/oradata/asm_disk*" \
+-e OP_TYPE=setuprac \
+-e ASM_ON_NAS=True \
+--restart=always \
+--ulimit rtprio=99 \
+--systemd=always \
+--name racnodep2 \
+localhost/oracle/database-rac:21.3.0
+```
+### Section 2.2: Deploying Two-Node Oracle RAC Setup on Podman using Oracle RAC Image Using User Defined Response files
+
+Follow the below instructions to set up Oracle RAC on Podman using the Oracle RAC Image with user-defined response files.
+
+#### Section 2.2.1: Deploying With BlockDevices
+
+##### Prerequisites for setting up Oracle RAC with User-Defined files
+- On the shared folder between both RAC nodes, create a file named [grid_setup_new_21c.rsp](withresponsefiles/blockdevices/grid_setup_new_21c.rsp). In this example, we copy the file to `/scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp`.
+- Also, prepare a database response file similar to this [dbca_21c.rsp](withresponsefiles/dbca_21c.rsp).
+- If SELinux is enabled on the host machine, then execute the following as well -
+  ```bash
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+  restorecon -v /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  restorecon -v /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  ```
+  Note: Passwords defined in the response files are overwritten by the passwords defined through `podman secret`, for security reasons, so that passwords are not exposed as plain text.
+You can skip this step if you are not planning to use **User Defined Response Files for RAC**.
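+
+The container create commands in the following steps reference the Podman secrets `pwdsecret` (the encrypted password file) and `keysecret` (the encryption key), which are normally created as part of the Password Management prerequisite in Section 1. If you have not created them yet, the following is a minimal, hypothetical sketch only; the file names and locations shown here are illustrative assumptions, so follow the Password Management section for the authoritative procedure:
+
+```bash
+# Illustrative example only: generate an encryption key and encrypt a password file
+mkdir -p /opt/.secrets
+openssl rand -out /opt/.secrets/pwd.key -hex 64
+echo "<password-of-your-choice>" > /opt/.secrets/pwdfile.txt
+openssl enc -aes-256-cbc -salt -in /opt/.secrets/pwdfile.txt \
+  -out /opt/.secrets/pwdfile.enc -pass file:/opt/.secrets/pwd.key
+rm -f /opt/.secrets/pwdfile.txt
+
+# Create the Podman secrets consumed through --secret, DB_PWD_FILE, and PWD_KEY
+podman secret create pwdsecret /opt/.secrets/pwdfile.enc
+podman secret create keysecret /opt/.secrets/pwd.key
+```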
+ +Create first Oracle RAC Container - + +```bash +podman create -t -i \ +--hostname racnodep1 \ +--dns-search "example.info" \ +--dns 10.0.20.25 \ +--shm-size 4G \ +--volume /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp:/tmp/grid_21c.rsp \ +--volume /scratch/common_scripts/podman/rac/dbca_21c.rsp:/tmp/dbca_21c.rsp \ +--cpuset-cpus 0-1 \ +--memory 16G \ +--memory-swap 32G \ +--sysctl kernel.shmall=2097152 \ +--sysctl "kernel.sem=250 32000 100 128" \ +--sysctl kernel.shmmax=8589934592 \ +--sysctl kernel.shmmni=4096 \ +--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \ +--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \ +--cap-add=SYS_RESOURCE \ +--cap-add=NET_ADMIN \ +--cap-add=SYS_NICE \ +--cap-add=AUDIT_WRITE \ +--cap-add=AUDIT_CONTROL \ +--cap-add=NET_RAW \ +--secret pwdsecret \ +--secret keysecret \ +--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \ +-e DNS_SERVERS="10.0.20.25" \ +-e DB_SERVICE=service:soepdb \ +-e GRID_RESPONSE_FILE=/tmp/grid_21c.rsp \ +-e DBCA_RESPONSE_FILE=/tmp/dbca_21c.rsp \ +-e CRS_PRIVATE_IP1=192.168.17.170 \ +-e CRS_PRIVATE_IP2=192.168.18.170 \ +-e CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" \ +-e SCAN_NAME=racnodepc1-scan \ +-e INIT_SGA_SIZE=3G \ +-e INIT_PGA_SIZE=2G \ +-e INSTALL_NODE=racnodep1 \ +-e DB_PWD_FILE=pwdsecret \ +-e PWD_KEY=keysecret \ +--device=/dev/oracleoci/oraclevdd:/dev/asm-disk1 \ +--device=/dev/oracleoci/oraclevde:/dev/asm-disk2 \ +-e CRS_ASM_DEVICE_LIST=/dev/asm-disk1,/dev/asm-disk2 \ +-e OP_TYPE=setuprac \ +--restart=always \ +--ulimit rtprio=99 \ +--systemd=always \ +--name racnodep1 \ +localhost/oracle/database-rac:21.3.0 +``` + +Create another Oracle RAC container +```bash +podman create -t -i \ +--hostname racnodep2 \ +--dns-search "example.info" \ +--dns 10.0.20.25 \ +--shm-size 4G \ +--volume /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp:/tmp/grid_21c.rsp \ +--volume /scratch/common_scripts/podman/rac/dbca_21c.rsp:/tmp/dbca_21c.rsp \ +--cpuset-cpus 0-1 \ +--memory 16G \ +--memory-swap 32G \ +--sysctl kernel.shmall=2097152 \ +--sysctl "kernel.sem=250 32000 100 128" \ +--sysctl kernel.shmmax=8589934592 \ +--sysctl kernel.shmmni=4096 \ +--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \ +--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \ +--cap-add=SYS_RESOURCE \ +--cap-add=NET_ADMIN \ +--cap-add=SYS_NICE \ +--cap-add=AUDIT_WRITE \ +--cap-add=AUDIT_CONTROL \ +--cap-add=NET_RAW \ +--secret pwdsecret \ +--secret keysecret \ +--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \ +-e DNS_SERVERS="10.0.20.25" \ +-e DB_SERVICE=service:soepdb \ +-e GRID_RESPONSE_FILE=/tmp/grid_21c.rsp \ +-e DBCA_RESPONSE_FILE=/tmp/dbca_21c.rsp \ +-e CRS_PRIVATE_IP1=192.168.17.171 \ +-e CRS_PRIVATE_IP2=192.168.18.171 \ +-e CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" \ +-e SCAN_NAME=racnodepc1-scan \ +-e INIT_SGA_SIZE=3G \ +-e INIT_PGA_SIZE=2G \ +-e INSTALL_NODE=racnodep1 \ +-e DB_PWD_FILE=pwdsecret \ +-e PWD_KEY=keysecret \ +--device=/dev/oracleoci/oraclevdd:/dev/asm-disk1 \ +--device=/dev/oracleoci/oraclevde:/dev/asm-disk2 \ +-e CRS_ASM_DEVICE_LIST=/dev/asm-disk1,/dev/asm-disk2 \ +-e OP_TYPE=setuprac \ +--restart=always \ +--ulimit rtprio=99 \ +--systemd=always \ +--name racnodep2 \ +localhost/oracle/database-rac:21.3.0 +``` +#### Section 2.2.2: Deploying with NFS storage devices + +##### Prerequisites for setting up Oracle RAC with User-Defined Files +- Create a NFS Volume to be used for ASM Devices 
for Oracle RAC. See [Configuring NFS for Storage for Oracle RAC on Podman](https://review.us.oracle.com/review2/Review.html#reviewId=467473;scope=document;status=open,fixed;documentId=4229197) for more details. **Note:** You can skip this step if you are planning to use block devices for storage.
+
+- Make sure the ASM NFS Storage devices do not have any existing file system.
+- On the shared folder between both Oracle RAC nodes, create a file named [grid_setup_new_21c.rsp](withresponsefiles/nfsdevices/grid_setup_new_21c.rsp). In this example, we copy the file to `/scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp`.
+- Also, prepare a database response file similar to this [dbca_21c.rsp](withresponsefiles/dbca_21c.rsp).
+- If SELinux is enabled on the host machine, then also run the following-
+  ```bash
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+  restorecon -v /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  restorecon -v /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  ```
+Note: You can skip this step if you are not planning to deploy with user-defined Response Files for Oracle RAC.
+
+Create the first Oracle RAC Container. In this example, the hostname is `racnodep1`:
+
+```bash
+ podman create -t -i \
+ --hostname racnodep1 \
+ --dns-search "example.info" \
+ --dns 10.0.20.25 \
+ --shm-size 4G \
+ --volume /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp:/tmp/grid_21c.rsp \
+ --volume /scratch/common_scripts/podman/rac/dbca_21c.rsp:/tmp/dbca_21c.rsp \
+ --health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
+ --cpuset-cpus 0-1 \
+ --memory 16G \
+ --memory-swap 32G \
+ --sysctl kernel.shmall=2097152 \
+ --sysctl "kernel.sem=250 32000 100 128" \
+ --sysctl kernel.shmmax=8589934592 \
+ --sysctl kernel.shmmni=4096 \
+ --sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
+ --sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
+ --cap-add=SYS_RESOURCE \
+ --cap-add=NET_ADMIN \
+ --cap-add=SYS_NICE \
+ --cap-add=AUDIT_WRITE \
+ --cap-add=AUDIT_CONTROL \
+ --cap-add=NET_RAW \
+ --secret pwdsecret \
+ --secret keysecret \
+ -e DNS_SERVERS="10.0.20.25" \
+ -e DB_SERVICE=service:soepdb \
+ -e GRID_RESPONSE_FILE=/tmp/grid_21c.rsp \
+ -e DBCA_RESPONSE_FILE=/tmp/dbca_21c.rsp \
+ -e CRS_PRIVATE_IP1=192.168.17.170 \
+ -e CRS_PRIVATE_IP2=192.168.18.170 \
+ -e CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" \
+ -e SCAN_NAME=racnodepc1-scan \
+ -e INIT_SGA_SIZE=3G \
+ -e INIT_PGA_SIZE=2G \
+ -e INSTALL_NODE=racnodep1 \
+ -e DB_PWD_FILE=pwdsecret \
+ -e PWD_KEY=keysecret \
+ --volume racstorage:/oradata \
+ -e CRS_ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \
+ -e CRS_ASM_DISCOVERY_STRING="/oradata/asm_disk*" \
+ -e OP_TYPE=setuprac \
+ -e ASM_ON_NAS=True \
+ --restart=always \
+ --ulimit rtprio=99 \
+ --systemd=always \
+ --name racnodep1 \
+ localhost/oracle/database-rac:21.3.0
+```
+
+Create another Oracle RAC container.
In this example, the hostname is `racnodep2` +```bash +podman create -t -i \ + --hostname racnodep2 \ + --dns-search "example.info" \ + --dns 10.0.20.25 \ + --shm-size 4G \ + --volume /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp:/tmp/grid_21c.rsp \ + --volume /scratch/common_scripts/podman/rac/dbca_21c.rsp:/tmp/dbca_21c.rsp \ + --health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \ + --cpuset-cpus 0-1 \ + --memory 16G \ + --memory-swap 32G \ + --sysctl kernel.shmall=2097152 \ + --sysctl "kernel.sem=250 32000 100 128" \ + --sysctl kernel.shmmax=8589934592 \ + --sysctl kernel.shmmni=4096 \ + --sysctl 'net.ipv4.conf.eth1.rp_filter=2' \ + --sysctl 'net.ipv4.conf.eth2.rp_filter=2' \ + --cap-add=SYS_RESOURCE \ + --cap-add=NET_ADMIN \ + --cap-add=SYS_NICE \ + --cap-add=AUDIT_WRITE \ + --cap-add=AUDIT_CONTROL \ + --cap-add=NET_RAW \ + --secret pwdsecret \ + --secret keysecret \ + -e DNS_SERVERS="10.0.20.25" \ + -e DB_SERVICE=service:soepdb \ + -e GRID_RESPONSE_FILE=/tmp/grid_21c.rsp \ + -e DBCA_RESPONSE_FILE=/tmp/dbca_21c.rsp \ + -e CRS_PRIVATE_IP1=192.168.17.171 \ + -e CRS_PRIVATE_IP2=192.168.18.171 \ + -e CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" \ + -e SCAN_NAME=racnodepc1-scan \ + -e INIT_SGA_SIZE=3G \ + -e INIT_PGA_SIZE=2G \ + -e INSTALL_NODE=racnodep1 \ + -e DB_PWD_FILE=pwdsecret \ + -e PWD_KEY=keysecret \ + --volume racstorage:/oradata \ + -e CRS_ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \ + -e CRS_ASM_DISCOVERY_STRING="/oradata/asm_disk*" \ + -e OP_TYPE=setuprac \ + -e ASM_ON_NAS=True \ + --restart=always \ + --ulimit rtprio=99 \ + --systemd=always \ + --name racnodep2 \ + localhost/oracle/database-rac:21.3.0 +``` +**Note:** +- To use this example, change the environment variables based on your environment. See [Environment Variables for Oracle RAC on Containers](#environment-variables-for-oracle-rac-on-containers) for more details. +- In the example that follows, we use a podman bridge network with one public and two private networks. For this reason,`--sysctl 'net.ipv4.conf.eth1.rp_filter=2' --sysctl 'net.ipv4.conf.eth2.rp_filter=2` is required when we use two private networks. If your use case is different, then this syctl configuration for the Podman Bridge can be ignored. +- If you are planning to place database files such as datafiles and archivelogs on different diskgroups, then you must pass these parameters: `DB_ASM_DEVICE_LIST`, `RECO_ASM_DEVICE_LIST`,`DB_DATA_FILE_DEST`, `DB_RECOVERY_FILE_DEST`. For more information, see [Section 8: Environment Variables for Oracle RAC on Containers](#environment-variables-for-oracle-rac-on-containers). + +## Section 3: Attach the Network to Containers + +You must assign the podman networks created based on the preceding examples. 
Complete the following tasks: + +### Attach the network to racnodep1 + +```bash +podman network disconnect podman racnodep1 +podman network connect rac_pub1_nw --ip 10.0.20.170 racnodep1 +podman network connect rac_priv1_nw --ip 192.168.17.170 racnodep1 +podman network connect rac_priv2_nw --ip 192.168.18.170 racnodep1 +``` + +### Attach the network to racnodep2 + +```bash +podman network disconnect podman racnodep2 +podman network connect rac_pub1_nw --ip 10.0.20.171 racnodep2 +podman network connect rac_priv1_nw --ip 192.168.17.171 racnodep2 +podman network connect rac_priv2_nw --ip 192.168.18.171 racnodep2 +``` + +## Section 4: Start the containers + +You must start the container. Run the following commands: + +```bash +podman start racnodep1 +podman start racnodep2 +``` + +It can take approximately 20 minutes or longer to create and set up a two-node Oracle RAC primary. To check the logs, use the following command from another terminal session: + +```bash +podman exec racnodep1 /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log" +``` + +When the database configuration is complete, you should see a message similar to the following: + +```bash +#################################### +ORACLE RAC DATABASE IS READY TO USE! +#################################### +``` + +## Section 5: Validate the Oracle RAC Environment +To validate if the environment is healthy, run the following command: +```bash +podman ps -a + +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +f1345fd4047b localhost/oracle/rac-dnsserver:latest /bin/sh -c exec $... 8 hours ago Up 8 hours (healthy) rac-dnsserver +2f42e49758d1 localhost/oracle/database-rac:21.3.0 46 minutes ago Up 37 minutes (healthy) racnodep1 +a27fceea9fe6 localhost/oracle/database-rac:21.3.0 46 minutes ago Up 37 minutes (healthy) racnodep2 +``` +**Note:** +- Look for `(healthy)` next to container names under the `STATUS` section. + +## Section 6: Connecting to Oracle RAC Environment + +**IMPORTANT:** Before you connnect to the environment, you must first successfully create an Oracle RAC cluster as described in the preceding sections. +See [README](../../CONNECTING.md) for instructions on how to connect to the Oracle RAC Database. + +## Section 7: Example of Node Addition to Oracle RAC Containers Based on Oracle RAC Image with Block Devices + +### Section 7.1: Example of node addition to Oracle RAC Containers based on Oracle RAC Image without Response File +The following is an example of how to add an additional node to the existing Oracle RAC two-node cluster using the Oracle RAC image and without user-defined files. + +Create additional Oracle RAC Container. 
In this example, we create the container on host `racnodep3`: +```bash +podman create -t -i \ +--hostname racnodep3 \ +--dns-search "example.info" \ +--dns 10.0.20.25 \ +--shm-size 4G \ +--cpuset-cpus 0-1 \ +--memory 16G \ +--memory-swap 32G \ +--sysctl kernel.shmall=2097152 \ +--sysctl "kernel.sem=250 32000 100 128" \ +--sysctl kernel.shmmax=8589934592 \ +--sysctl kernel.shmmni=4096 \ +--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \ +--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \ +--cap-add=SYS_RESOURCE \ +--cap-add=NET_ADMIN \ +--cap-add=SYS_NICE \ +--cap-add=AUDIT_WRITE \ +--cap-add=AUDIT_CONTROL \ +--cap-add=NET_RAW \ +--secret pwdsecret \ +--secret keysecret \ +--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \ +-e DNS_SERVERS="10.0.20.25" \ +-e DB_SERVICE=service:soepdb \ +-e CRS_PRIVATE_IP1=192.168.17.172 \ +-e CRS_PRIVATE_IP2=192.168.18.172 \ +-e CRS_NODES="\"pubhost:racnodep3,viphost:racnodep3-vip\"" \ +-e SCAN_NAME=racnodepc1-scan \ +-e INIT_SGA_SIZE=3G \ +-e INIT_PGA_SIZE=2G \ +-e DB_PWD_FILE=pwdsecret \ +-e PWD_KEY=keysecret \ +--device=/dev/oracleoci/oraclevdd:/dev/asm-disk1 \ +--device=/dev/oracleoci/oraclevde:/dev/asm-disk2 \ +-e CRS_ASM_DEVICE_LIST=/dev/asm-disk1,/dev/asm-disk2 \ +-e OP_TYPE=racaddnode \ +-e EXISTING_CLS_NODE="racnodep1,racnodep2" \ +-e INSTALL_NODE=racnodep3 \ +--restart=always \ +--ulimit rtprio=99 \ +--systemd=always \ +--name racnodep3 \ +localhost/oracle/database-rac:21.3.0 + +podman network disconnect podman racnodep3 +podman network connect rac_pub1_nw --ip 10.0.20.172 racnodep3 +podman network connect rac_priv1_nw --ip 192.168.17.172 racnodep3 +podman network connect rac_priv2_nw --ip 192.168.18.172 racnodep3 +podman start racnodep3 +podman exec racnodep3 /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log" +``` +When the Oracle RAC container has completed being set up, you should see a message similar to the following: +```bash +======================================================== +Oracle Database ORCLCDB3 is up and running on racnodep3. +======================================================== +``` + +## Section 8: Example of Node Addition to Oracle RAC Containers Based on Oracle RAC Image with NFS Storage Devices + +### Section 8.1: Example of node addition to Oracle RAC Containers based on Oracle RAC Image without Response File +In the following example, we add an additional node to the existing Oracle RAC two-node cluster using the Oracle RAC image without user-defined files. + +Create additional Oracle RAC Container. 
In this example, the hostname is `racnodep3`
+
+```bash
+podman create -t -i \
+--hostname racnodep3 \
+--dns-search "example.info" \
+--dns 10.0.20.25 \
+--shm-size 4G \
+--volume /scratch/rac/cluster01/node3:/u01 \
+--volume /scratch:/scratch \
+--cpuset-cpus 0-1 \
+--memory 16G \
+--memory-swap 32G \
+--sysctl kernel.shmall=2097152 \
+--sysctl "kernel.sem=250 32000 100 128" \
+--sysctl kernel.shmmax=8589934592 \
+--sysctl kernel.shmmni=4096 \
+--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
+--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
+--cap-add=SYS_RESOURCE \
+--cap-add=NET_ADMIN \
+--cap-add=SYS_NICE \
+--cap-add=AUDIT_WRITE \
+--cap-add=AUDIT_CONTROL \
+--cap-add=NET_RAW \
+--secret pwdsecret \
+--secret keysecret \
+--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
+-e DNS_SERVERS="10.0.20.25" \
+-e DB_SERVICE=service:soepdb \
+-e CRS_PRIVATE_IP1=192.168.17.172 \
+-e CRS_PRIVATE_IP2=192.168.18.172 \
+-e CRS_NODES="\"pubhost:racnodep3,viphost:racnodep3-vip\"" \
+-e SCAN_NAME=racnodepc1-scan \
+-e INIT_SGA_SIZE=3G \
+-e INIT_PGA_SIZE=2G \
+-e PASSWORD_FILE=pwdfile \
+--volume racstorage:/oradata \
+-e CRS_ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img \
+-e CRS_ASM_DISCOVERY_STRING="/oradata/asm_disk*" \
+-e ASM_ON_NAS=True \
+-e OP_TYPE=racaddnode \
+-e EXISTING_CLS_NODE="racnodep1,racnodep2" \
+-e INSTALL_NODE=racnodep3 \
+--restart=always \
+--ulimit rtprio=99 \
+--systemd=always \
+--name racnodep3 \
+localhost/oracle/database-rac:21.3.0
+
+podman network disconnect podman racnodep3
+podman network connect rac_pub1_nw --ip 10.0.20.172 racnodep3
+podman network connect rac_priv1_nw --ip 192.168.17.172 racnodep3
+podman network connect rac_priv2_nw --ip 192.168.18.172 racnodep3
+podman start racnodep3
+podman exec racnodep3 /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log"
+
+========================================================
+Oracle Database ORCLCDB3 is up and running on racnodep3.
+========================================================
+```
+
+## Environment Variables for Oracle RAC on Containers
+For an explanation of all of the environment variables used with Oracle RAC on Podman, see [Environment Variables Explained for Oracle RAC on Podman](../../../docs/ENVIRONMENTVARIABLES.md). Change or set these environment variables as required for the configuration of your environment.
+
+## Cleanup
+For instructions on how to clean up an Oracle RAC Database Container Environment, see the [README](../../../docs/CLEANUP.md).
+
+## Support
+
+At the time of this release, Oracle RAC on Podman is supported for Oracle Linux 8.10 or later. To review the current Linux support certifications, see [Oracle RAC on Podman Documentation](https://docs.oracle.com/en/database/oracle/oracle-database/21/install-and-upgrade.html)
+
+## License
+
+To download and run Oracle Grid and Database, regardless of whether inside or outside a container, you must download the binaries from the Oracle website and accept the license indicated on that page.
+
+All scripts and files hosted in this repository that are required to build the container images are, unless otherwise noted, released under a UPL 1.0 license.
+
+## Copyright
+
+Copyright (c) 2014-2024 Oracle and/or its affiliates.
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/blockdevices/addition/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/blockdevices/addition/podman-compose.yml new file mode 100644 index 0000000000..6ca4172d63 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/blockdevices/addition/podman-compose.yml @@ -0,0 +1,76 @@ +--- +version: "3" +networks: + rac_pub1_nw: + name: ${PUBLIC_NETWORK_NAME} + external: true + rac_priv1_nw: + name: ${PRIVATE1_NETWORK_NAME} + external: true + rac_priv2_nw: + name: ${PRIVATE2_NETWORK_NAME} + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +services: + racnodep3: + container_name: ${RACNODE3_CONTAINER_NAME} + hostname: ${RACNODE3_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - ${GRID_RESPONSE_FILE}:/tmp/grid_21ai.rsp + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + PRIVATE_IP1_LIST: ${RACNODE3_PRIVATE_IP1_LIST} + PRIVATE_IP2_LIST: ${RACNODE3_PRIVATE_IP2_LIST} + DEFAULT_GATEWAY: ${DEFAULT_GATEWAY} + INSTALL_NODE: ${INSTALL_NODE} + OP_TYPE: racaddnode + EXISTING_CLS_NODE: ${EXISTING_CLS_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + PROFILE_FLAG: "true" + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_DIR: ${CRS_ASM_DISCOVERY_DIR} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + GRID_RESPONSE_FILE: /tmp/grid_21ai.rsp + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD-SHELL", "if [ `cat /tmp/orod/oracle_rac_setup.log | grep -c 'ORACLE RAC DATABASE IS READY TO USE'` -ge 1 ]; then exit 0; else exit 1; fi"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/blockdevices/grid_setup_new_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/blockdevices/grid_setup_new_21c.rsp new file mode 100644 index 0000000000..c7ffe19d4a --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/blockdevices/grid_setup_new_21c.rsp @@ -0,0 +1,64 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 +INVENTORY_LOCATION=/u01/app/oraInventory +oracle.install.option=CRS_CONFIG +ORACLE_BASE=/u01/app/grid +oracle.install.asm.OSDBA=dba +oracle.install.asm.OSOPER= +oracle.install.asm.OSASM=asmadmin +oracle.install.crs.config.scanType=LOCAL_SCAN +oracle.install.crs.config.SCANClientDataFile= +oracle.install.crs.config.gpnp.scanName=racnodepc1-scan +oracle.install.crs.config.gpnp.scanPort=1521 +oracle.install.crs.config.ClusterConfiguration=STANDALONE 
+oracle.install.crs.config.configureAsExtendedCluster=false +oracle.install.crs.config.memberClusterManifestFile= +oracle.install.crs.config.clusterName=rac01cluster +oracle.install.crs.config.gpnp.configureGNS= +oracle.install.crs.config.autoConfigureClusterNodeVIP=false +oracle.install.crs.config.gpnp.gnsOption= +oracle.install.crs.config.gpnp.gnsClientDataFile= +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= +oracle.install.crs.config.sites= +oracle.install.crs.config.clusterNodes=racnodep1:racnodep1-vip:HUB,racnodep2:racnodep2-vip:HUB +oracle.install.crs.config.networkInterfaceList=eth0:10.0.20.0:1,eth1:192.168.17.0:5,eth2:192.168.18.0:5 +oracle.install.asm.configureGIMRDataDG=false +oracle.install.crs.config.storageOption= +oracle.install.crs.config.useIPMI=false +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +oracle.install.asm.storageOption=ASM +oracle.install.asmOnNAS.ocrLocation= +oracle.install.asmOnNAS.configureGIMRDataDG=false +oracle.install.asmOnNAS.gimrLocation= +oracle.install.asm.SYSASMPassword=ORacle__21c +oracle.install.asm.diskGroup.name=DATA +oracle.install.asm.diskGroup.redundancy=EXTERNAL +oracle.install.asm.diskGroup.AUSize=4 +oracle.install.asm.diskGroup.FailureGroups= +oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/asm-disk1,,/dev/asm-disk2, +oracle.install.asm.diskGroup.disks=/dev/asm-disk1,/dev/asm-disk2 +oracle.install.asm.diskGroup.quorumFailureGroupNames= +oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm* +oracle.install.asm.monitorPassword=ORacle__21c +oracle.install.asm.gimrDG.name= +oracle.install.asm.gimrDG.redundancy= +oracle.install.asm.gimrDG.AUSize=1 +oracle.install.asm.gimrDG.FailureGroups= +oracle.install.asm.gimrDG.disksWithFailureGroupNames= +oracle.install.asm.gimrDG.disks= +oracle.install.asm.gimrDG.quorumFailureGroupNames= +oracle.install.asm.configureAFD=false +oracle.install.crs.configureRHPS=false +oracle.install.crs.config.ignoreDownNodes=false +oracle.install.config.managementOption=NONE +oracle.install.config.omsHost= +oracle.install.config.omsPort=0 +oracle.install.config.emAdminUser= +oracle.install.config.emAdminPassword= +oracle.install.crs.rootconfig.executeRootScript=false +oracle.install.crs.rootconfig.configMethod=ROOT +oracle.install.crs.rootconfig.sudoPath= +oracle.install.crs.rootconfig.sudoUserName= +oracle.install.crs.config.batchinfo= +oracle.install.crs.deleteNode.nodes= \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/blockdevices/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/blockdevices/podman-compose.yml new file mode 100644 index 0000000000..a74819d4a8 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/blockdevices/podman-compose.yml @@ -0,0 +1,188 @@ +--- +version: "3" +networks: + rac_pub1_nw: + name: ${PUBLIC_NETWORK_NAME} + driver: bridge + ipam: + driver: default + config: + - subnet: "${PUBLIC_NETWORK_SUBNET}" + rac_priv1_nw: + name: ${PRIVATE1_NETWORK_NAME} + driver: bridge + ipam: + driver: default + config: + - subnet: "${PRIVATE1_NETWORK_SUBNET}" + rac_priv2_nw: + name: ${PRIVATE2_NETWORK_NAME} + driver: bridge + ipam: + driver: default + config: + - subnet: "${PRIVATE2_NETWORK_SUBNET}" +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} 
+services: + rac-dnsserver: + container_name: ${DNS_CONTAINER_NAME} + hostname: ${DNS_HOST_NAME} + image: ${DNS_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + environment: + SETUP_DNS_CONFIG_FILES: "setup_true" + DOMAIN_NAME: ${DNS_DOMAIN} + RAC_NODE_NAME_PREFIXD: ${RAC_NODE_NAME_PREFIXD} + RAC_NODE_NAME_PREFIXP: ${RAC_NODE_NAME_PREFIXP} + WEBMIN_ENABLED: false + SETUP_DNS_CONFIG_FILES: "setup_true" + cap_add: + - AUDIT_WRITE + healthcheck: + test: ["CMD-SHELL", "if [ `cat /tmp/orod.log | grep -c 'DNS Server IS READY TO USE'` -ge 1 ]; then exit 0; else exit 1; fi"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + privileged: false + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + racnodep1: + container_name: ${RACNODE1_CONTAINER_NAME} + hostname: ${RACNODE1_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - ${GRID_RESPONSE_FILE}:/tmp/grid_21c.rsp + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + PRIVATE_IP1_LIST: ${RACNODE1_PRIVATE_IP1_LIST} + PRIVATE_IP2_LIST: ${RACNODE1_PRIVATE_IP2_LIST} + DEFAULT_GATEWAY: ${DEFAULT_GATEWAY} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_DIR: ${CRS_ASM_DISCOVERY_DIR} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + GRID_RESPONSE_FILE: /tmp/grid_21c.rsp + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD-SHELL", "if [ `cat /tmp/orod/oracle_rac_setup.log | grep -c 'ORACLE RAC DATABASE IS READY TO USE'` -ge 1 ]; then exit 0; else exit 1; fi"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodep2: + container_name: ${RACNODE2_CONTAINER_NAME} + hostname: ${RACNODE2_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - ${GRID_RESPONSE_FILE}:/tmp/grid_21c.rsp + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + PRIVATE_IP1_LIST: ${RACNODE2_PRIVATE_IP1_LIST} + PRIVATE_IP2_LIST: ${RACNODE2_PRIVATE_IP2_LIST} + DEFAULT_GATEWAY: ${DEFAULT_GATEWAY} + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_DIR: ${CRS_ASM_DISCOVERY_DIR} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + GRID_RESPONSE_FILE: /tmp/grid_21c.rsp + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - 
"${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + racnodepc1-cman: + container_name: ${CMAN_CONTAINER_NAME} + hostname: ${CMAN_HOST_NAME} + image: ${CMAN_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + dns: ${DNS_PUBLIC_IP} + environment: + DOMAIN_NAME: ${DNS_DOMAIN} + PUBLIC_IP: ${CMAN_PUBLIC_IP} + PUBLIC_HOSTNAME: ${CMAN_PUBLIC_HOSTNAME} + DB_HOSTDETAILS: ${DB_HOSTDETAILS} + privileged: false + ports: + - 1521:1521 + networks: + rac_pub1_nw: + ipv4_address: ${CMAN_PUBLIC_IP} + cap_add: + - AUDIT_WRITE + healthcheck: + test: ["CMD-SHELL", "if [ `cat /tmp/orod.log | grep -c 'CONNECTION MANAGER IS READY TO USE'` -ge 1 ]; then exit 0; else exit 1; fi"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/dbca_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/dbca_21c.rsp new file mode 100644 index 0000000000..d45141abb4 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/dbca_21c.rsp @@ -0,0 +1,58 @@ +responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v21.0.0 +gdbName=ORCLCDB +sid=ORCLCDB +databaseConfigType=RAC +RACOneNodeServiceName= +policyManaged=false +managementPolicy= +createServerPool=false +serverPoolName= +cardinality= +force=false +pqPoolName= +pqCardinality= +createAsContainerDatabase=true +numberOfPDBs=1 +pdbName=ORCLPDB +useLocalUndoForPDBs=true +pdbAdminPassword=ORacle__21c +nodelist=racnodep1,racnodep2 +templateName={ORACLE_HOME}/assistants/dbca/templates/General_Purpose.dbc +sysPassword=ORacle__21c +systemPassword=ORacle__21c +oracleHomeUserPassword= +emConfiguration= +runCVUChecks=true +dbsnmpPassword=ORacle__21c +omsHost= +omsPort= +emUser= +emPassword= +dvConfiguration=false +dvUserName= +dvUserPassword= +dvAccountManagerName= +dvAccountManagerPassword= +olsConfiguration=false +datafileJarLocation={ORACLE_HOME}/assistants/dbca/templates/ +datafileDestination=+DATA/{DB_UNIQUE_NAME}/ +recoveryAreaDestination= +storageType=ASM +diskGroupName=+DATA/{DB_UNIQUE_NAME}/ +asmsnmpPassword= +recoveryGroupName= +characterSet=AL32UTF8 +nationalCharacterSet=AL16UTF16 +registerWithDirService= +dirServiceUserName= +dirServicePassword= +walletPassword= +listeners=LISTENER +variablesFile= +variables=DB_UNIQUE_NAME=ORCLCDB,ORACLE_BASE=/u01/app/oracle,PDB_NAME=ORCLPDB,DB_NAME=ORCLCDB,ORACLE_HOME=/u01/app/oracle/product/21.3.0/dbhome_1,SID=ORCLCDB +initParams=audit_trail=none,audit_sys_operations=false,remote_login_passwordfile=exclusive +sampleSchema=false +memoryPercentage=40 +databaseType=MULTIPURPOSE +automaticMemoryManagement=false +totalMemory=5000 \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/nfsdevices/grid_setup_new_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/nfsdevices/grid_setup_new_21c.rsp new file mode 100644 index 0000000000..16062dd6cb --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racimage/withresponsefiles/nfsdevices/grid_setup_new_21c.rsp @@ -0,0 +1,64 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 +INVENTORY_LOCATION=/u01/app/oraInventory +oracle.install.option=CRS_CONFIG 
+ORACLE_BASE=/u01/app/grid +oracle.install.asm.OSDBA=dba +oracle.install.asm.OSOPER= +oracle.install.asm.OSASM=asmadmin +oracle.install.crs.config.scanType=LOCAL_SCAN +oracle.install.crs.config.SCANClientDataFile= +oracle.install.crs.config.gpnp.scanName=racnodepc1-scan +oracle.install.crs.config.gpnp.scanPort=1521 +oracle.install.crs.config.ClusterConfiguration=STANDALONE +oracle.install.crs.config.configureAsExtendedCluster=false +oracle.install.crs.config.memberClusterManifestFile= +oracle.install.crs.config.clusterName=rac01cluster +oracle.install.crs.config.gpnp.configureGNS= +oracle.install.crs.config.autoConfigureClusterNodeVIP=false +oracle.install.crs.config.gpnp.gnsOption= +oracle.install.crs.config.gpnp.gnsClientDataFile= +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= +oracle.install.crs.config.sites= +oracle.install.crs.config.clusterNodes=racnodep1:racnodep1-vip:HUB,racnodep2:racnodep2-vip:HUB +oracle.install.crs.config.networkInterfaceList=eth0:10.0.20.0:1,eth1:192.168.17.0:5,eth2:192.168.18.0:5 +oracle.install.asm.configureGIMRDataDG=false +oracle.install.crs.config.storageOption= +oracle.install.crs.config.useIPMI=false +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +oracle.install.asm.storageOption=ASM +oracle.install.asmOnNAS.ocrLocation= +oracle.install.asmOnNAS.configureGIMRDataDG=false +oracle.install.asmOnNAS.gimrLocation= +oracle.install.asm.SYSASMPassword=ORacle__21c +oracle.install.asm.diskGroup.name=DATA +oracle.install.asm.diskGroup.redundancy=EXTERNAL +oracle.install.asm.diskGroup.AUSize=4 +oracle.install.asm.diskGroup.FailureGroups= +oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oradata/asm_disk01.img,,/oradata/asm_disk02.img,,/oradata/asm_disk03.img,,/oradata/asm_disk04.img,,/oradata/asm_disk05.im +oracle.install.asm.diskGroup.disks=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img +oracle.install.asm.diskGroup.quorumFailureGroupNames= +oracle.install.asm.diskGroup.diskDiscoveryString=/oradata/asm_disk* +oracle.install.asm.monitorPassword=ORacle__21c +oracle.install.asm.gimrDG.name= +oracle.install.asm.gimrDG.redundancy= +oracle.install.asm.gimrDG.AUSize=1 +oracle.install.asm.gimrDG.FailureGroups= +oracle.install.asm.gimrDG.disksWithFailureGroupNames= +oracle.install.asm.gimrDG.disks= +oracle.install.asm.gimrDG.quorumFailureGroupNames= +oracle.install.asm.configureAFD=false +oracle.install.crs.configureRHPS=false +oracle.install.crs.config.ignoreDownNodes=false +oracle.install.config.managementOption=NONE +oracle.install.config.omsHost= +oracle.install.config.omsPort=0 +oracle.install.config.emAdminUser= +oracle.install.config.emAdminPassword= +oracle.install.crs.rootconfig.executeRootScript=false +oracle.install.crs.rootconfig.configMethod=ROOT +oracle.install.crs.rootconfig.sudoPath= +oracle.install.crs.rootconfig.sudoUserName= +oracle.install.crs.config.batchinfo= +oracle.install.crs.deleteNode.nodes= \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/README.md b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/README.md new file mode 100644 index 0000000000..dd69984ffc --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/README.md @@ -0,0 +1,699 @@ +# Oracle RAC on Podman using Slim Image 
+=============================================================== + +Refer below instructions for the setup of Oracle RAC on Podman using Slim Image for various scenarios. + +- [Oracle RAC on Podman using Slim Image](#oracle-rac-on-podman-using-slim-image) + - [Section 1: Prerequisites for Setting up Oracle RAC on Container Using Slim Image](#section-1-prerequisites-for-setting-up-oracle-rac-on-container-using-slim-image) + - [Section 2: Deploying 2 Node Oracle RAC Setup on Podman Using Slim Image](#section-2-deploying-2-node-oracle-rac-setup-on-podman-using-slim-image) + - [Section 2.1: Deploying 2 Node Oracle RAC Setup on Podman Using Slim Image Without using response files](#section-21-deploying-2-node-oracle-rac-setup-on-podman-using-slim-image-without-using-response-files) + - [Section 2.1.1: Deploying With BlockDevices](#section-211-deploying-with-blockdevices) + - [Section 2.1.2: Deploying with NFS Storage Devices](#section-212-deploying-with-nfs-storage-devices) + - [Section 2.2: Deploying 2 Node Oracle RAC Setup on Podman Using Slim Image Using User Defined response files](#section-22-deploying-2-node-oracle-rac-setup-on-podman-using-slim-image-using-user-defined-response-files) + - [Section 2.2.1: Deploying with BlockDevices](#section-221-deploying-with-blockdevices) + - [Section 2.2.2: Deploying with NFS Storage Devices](#section-222-deploying-with-nfs-storage-devices) + - [Section 3: Attach the Network to Containers](#section-3-attach-the-network-to-containers) + - [Attach the Network to racnodep1](#attach-the-network-to-racnodep1) + - [Attach the Network to racnodep2](#attach-the-network-to-racnodep2) + - [Section 4: Start the Containers](#section-4-start-the-containers) + - [Section 5: Validation Oracle RAC Environment](#section-5-validating-oracle-rac-environment) + - [Section 6: Connecting to Oracle RAC Environment](#section-6-connecting-to-oracle-rac-environment) + - [Section 7: Sample of Addition of Nodes to Oracle RAC Containers based on Slim Image](#section-7-sample-of-addition-of-nodes-to-oracle-rac-containers-based-on-slim-image) + - [Section 7.1: Sample of Addition of Nodes to Oracle RAC Containers based on Slim Image Without Response File](#section-71-sample-of-addition-of-nodes-to-oracle-rac-containers-based-on-slim-image-without-response-file) + - [Section 8: Sample of Addition of Nodes to Oracle RAC Containers based on Oracle RAC Slim Image with NFS Storage Devices](#section-8-sample-of-addition-of-nodes-to-oracle-rac-containers-based-on-oracle-rac-slim-image-with-nfs-storage-devices) + - [Section 8.1: Sample of Addition of Nodes to Oracle RAC Containers based on Oracle RAC Image Without Response File](#section-81-sample-of-addition-of-nodes-to-oracle-rac-containers-based-on-oracle-rac-image-without-response-file) + - [Section 9: Environment Variables for Oracle RAC on Containers](#section-9-environment-variables-for-oracle-rac-on-containers) + - [Cleanup](#cleanup) + - [Support](#support) + - [License](#license) + - [Copyright](#copyright) + +## Oracle RAC Setup on Podman using Slim Image + +Users can deploy multi-node Oracle RAC Setup using Slim Image either on Block Devices or NFS storage Devices by using User Defined Response Files or without using same. All these scenarios are discussed in detail as you proceed further below. +## Section 1: Prerequisites for Setting up Oracle RAC on Container using Slim Image +**IMPORTANT:** Execute all the steps specified in this section (customized for your environment) before you proceed to the next section. 
Completing prerequisite steps is a requirement for successful configuration.
+
+* Execute the [Preparation Steps for running Oracle RAC Database in containers](../../../README.md#preparation-steps-for-running-oracle-rac-database-in-containers)
+* Create the Oracle Connection Manager container image and container if the IPs are not available on the user network. Refer to [RAC Oracle Connection Manager README.MD](../../../../OracleConnectionManager/README.md)
+* Make sure the Oracle RAC Slim Image is present as shown below. If you have not created the Oracle RAC Slim Image, then follow [Section 2.1: Building Oracle RAC Database Slim Image](../../../README.md)
+```bash
+# podman images|grep database-rac
+localhost/oracle/database-rac 21.3.0-slim bf6ae21ccd5a 8 hours ago 517 MB
+```
+* Complete the [Network Management](../../../README.md#network-management) steps.
+* Complete the [Password Management](../../../README.md#password-management) steps.
+
+* Prepare empty directories on the host for the two nodes, similar to the following. These directories are mounted to the Oracle RAC node containers and are used later, during container creation, to install the Oracle RAC software binaries:
+  ```bash
+  mkdir -p /scratch/rac/cluster01/node1
+  rm -rf /scratch/rac/cluster01/node1/*
+
+  mkdir -p /scratch/rac/cluster01/node2
+  rm -rf /scratch/rac/cluster01/node2/*
+  ```
+
+* Make sure the downloaded Oracle RAC software is staged and available to both RAC nodes. In the following example, the Oracle RAC software is staged at `/scratch/software/21c/goldimages`:
+  ```bash
+  ls /scratch/software/21c/goldimages
+  LINUX.X64_213000_db_home.zip LINUX.X64_213000_grid_home.zip
+  ```
+* If SELinux is enabled on the host machine, then also run the following commands:
+  ```bash
+  semanage fcontext -a -t container_file_t /scratch/rac/cluster01/node1
+  restorecon -v /scratch/rac/cluster01/node1
+  semanage fcontext -a -t container_file_t /scratch/rac/cluster01/node2
+  restorecon -v /scratch/rac/cluster01/node2
+  semanage fcontext -a -t container_file_t /scratch/software/21c/goldimages/LINUX.X64_213000_grid_home.zip
+  restorecon -v /scratch/software/21c/goldimages/LINUX.X64_213000_grid_home.zip
+  semanage fcontext -a -t container_file_t /scratch/software/21c/goldimages/LINUX.X64_213000_db_home.zip
+  restorecon -v /scratch/software/21c/goldimages/LINUX.X64_213000_db_home.zip
+  ```
+
+## Section 2: Deploying 2 Node Oracle RAC Setup on Podman using Slim Image
+
+Follow the instructions below to set up Oracle RAC on Podman using the Slim Image for the various scenarios, that is, with or without user-defined response files. The Oracle RAC setup can be done either on block devices or on NFS storage devices.
+
+### Section 2.1: Deploying 2 Node Oracle RAC Setup on Podman using Slim Image Without using response files
+
+Follow the instructions below to set up Oracle RAC on Podman using the Slim Image without providing response files.
+
+#### Section 2.1.1: Deploying With BlockDevices
+##### Section 2.1.1.1: Prerequisites for setting up Oracle RAC with Block Devices
+
+- Make sure you have created at least one block device with 50 GB of storage space that can be accessed by both RAC nodes and shared between them. You can create more block devices as required by your environment and pass them to the environment variables and to the `--device` options of the `podman create` command, as well as to the Grid response files (if you are using them). You can skip this step if you are planning to use **NFS storage devices**.
+
+  Make sure the ASM devices do not have any existing file system.
To clear any other file system from the devices, use the following command:
+  ```bash
+  dd if=/dev/zero of=/dev/oracleoci/oraclevdd bs=8k count=10000
+  ```
+  Repeat this cleanup for each shared block device. In the preceding example, `/dev/oracleoci/oraclevdd` is a shared KVM virtual block device.
+- In this example, the environment variables are passed in the files [envfile_racnodep1](withoutresponsefiles/blockdevices/envfile_racnodep1) and [envfile_racnodep2](withoutresponsefiles/blockdevices/envfile_racnodep2), which are mounted to the RAC node containers.
+In this example, the files `envfile_racnodep1` and `envfile_racnodep2` are placed under `/scratch/common_scripts/podman/rac` on the container host.
+
+- If SELinux is enabled on the host machine, then also run the following commands:
+  ```bash
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/envfile_racnodep1
+  restorecon -v /scratch/common_scripts/podman/rac/envfile_racnodep1
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/envfile_racnodep2
+  restorecon -v /scratch/common_scripts/podman/rac/envfile_racnodep2
+  ```
+
+###### Section 2.1.1.2: Create Oracle RAC Containers
+Now create the Oracle RAC containers using the image. For the details of the environment variables, refer to [Environment Variables Explained](#section-9-environment-variables-for-oracle-rac-on-containers).
+
+**Note**: Before creating the containers, make sure you have edited the file `/scratch/common_scripts/podman/rac/envfile_racnodep1` and set the variables based on your environment.
+
+You can use the following example to create Oracle RAC containers:
+```bash
+podman create -t -i \
+--hostname racnodep1 \
+--dns-search "example.info" \
+--dns 10.0.20.25 \
+--shm-size 4G \
+--volume /scratch/rac/cluster01/node1:/u01 \
+--volume /scratch/common_scripts/podman/rac/envfile_racnodep1:/etc/rac_env_vars/envfile \
+--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
+--volume /scratch:/scratch \
+--secret pwdsecret \
+--secret keysecret \
+--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
+--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
+--cpuset-cpus 0-1 \
+--memory 16G \
+--memory-swap 32G \
+--sysctl kernel.shmall=2097152 \
+--sysctl "kernel.sem=250 32000 100 128" \
+--sysctl kernel.shmmax=8589934592 \
+--sysctl kernel.shmmni=4096 \
+--cap-add=SYS_RESOURCE \
+--cap-add=NET_ADMIN \
+--cap-add=SYS_NICE \
+--cap-add=AUDIT_WRITE \
+--cap-add=AUDIT_CONTROL \
+--cap-add=NET_RAW \
+--device=/dev/oracleoci/oraclevdd:/dev/asm-disk1 \
+--device=/dev/oracleoci/oraclevde:/dev/asm-disk2 \
+--restart=always \
+--ulimit rtprio=99 \
+--systemd=always \
+--name racnodep1 \
+ localhost/oracle/database-rac:21.3.0-slim
+ ```
+ **Note**: Before creating the containers, make sure you have edited the file `/scratch/common_scripts/podman/rac/envfile_racnodep2` and set the variables based on your environment.
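+
+For reference, the environment file for the second node contains entries such as the following. This excerpt is taken from the sample [envfile_racnodep2](withoutresponsefiles/blockdevices/envfile_racnodep2); adjust the values to match your environment:
+
+```bash
+CRS_PRIVATE_IP1=192.168.17.171
+CRS_PRIVATE_IP2=192.168.18.171
+CRS_NODES=pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip
+DNS_SERVERS=10.0.20.25
+SCAN_NAME=racnodepc1-scan
+INSTALL_NODE=racnodep1
+DB_NAME=ORCLCDB
+CRS_ASM_DEVICE_LIST=/dev/asm-disk1,/dev/asm-disk2
+DB_PWD_FILE=pwdsecret
+PWD_KEY=keysecret
+OP_TYPE=setuprac
+```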
+
+Create another Oracle RAC container:
+ ```bash
+podman create -t -i \
+--hostname racnodep2 \
+--dns-search "example.info" \
+--dns 10.0.20.25 \
+--shm-size 4G \
+--secret pwdsecret \
+--secret keysecret \
+--volume /scratch/rac/cluster01/node2:/u01 \
+--volume /scratch/common_scripts/podman/rac/envfile_racnodep2:/etc/rac_env_vars/envfile \
+--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
+--volume /scratch:/scratch \
+--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
+--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
+--cpuset-cpus 0-1 \
+--memory 16G \
+--memory-swap 32G \
+--sysctl kernel.shmall=2097152 \
+--sysctl "kernel.sem=250 32000 100 128" \
+--sysctl kernel.shmmax=8589934592 \
+--sysctl kernel.shmmni=4096 \
+--cap-add=SYS_RESOURCE \
+--cap-add=NET_ADMIN \
+--cap-add=SYS_NICE \
+--cap-add=AUDIT_WRITE \
+--cap-add=AUDIT_CONTROL \
+--cap-add=NET_RAW \
+--device=/dev/oracleoci/oraclevdd:/dev/asm-disk1 \
+--device=/dev/oracleoci/oraclevde:/dev/asm-disk2 \
+--restart=always \
+--ulimit rtprio=99 \
+--systemd=always \
+--name racnodep2 \
+ localhost/oracle/database-rac:21.3.0-slim
+ ```
+
+#### Section 2.1.2: Deploying with NFS Storage Devices
+##### Section 2.1.2.1: Prerequisites for setting up Oracle RAC with NFS Storage Devices
+* Create an NFS volume to be used for ASM devices for Oracle RAC. See [Configuring NFS for Storage for Oracle RAC on Podman](https://review.us.oracle.com/review2/Review.html#reviewId=467473;scope=document;status=open,fixed;documentId=4229197) for more details. **Note:** You can skip this step if you are planning to use block devices for storage.
+
+* Make sure the ASM NFS storage devices do not have any existing file system.
+
+* In this example, the environment variables are passed in the files [envfile_racnodep1](withoutresponsefiles/nfsdevices/envfile_racnodep1) and [envfile_racnodep2](withoutresponsefiles/nfsdevices/envfile_racnodep2), which are mounted to the RAC node containers. In this example, the files are created under the `/scratch/common_scripts/podman/rac` path.
+
+* If SELinux is enabled on the host machine, then also run the following commands:
+  ```bash
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/envfile_racnodep1
+  restorecon -v /scratch/common_scripts/podman/rac/envfile_racnodep1
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/envfile_racnodep2
+  restorecon -v /scratch/common_scripts/podman/rac/envfile_racnodep2
+  ```
+###### Section 2.1.2.2: Create Oracle RAC Containers
+Now create the Oracle RAC containers using the image. For the details of the environment variables, refer to [Environment Variables Explained](#section-9-environment-variables-for-oracle-rac-on-containers).
+**Note**: Before creating the containers, make sure you have edited the file `/scratch/common_scripts/podman/rac/envfile_racnodep1` and set the variables based on your environment.
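+
+For reference, the environment file for the first node contains entries such as the following. This excerpt is taken from the sample [envfile_racnodep1](withoutresponsefiles/nfsdevices/envfile_racnodep1); adjust the values to match your environment:
+
+```bash
+CRS_PRIVATE_IP1=192.168.17.170
+CRS_PRIVATE_IP2=192.168.18.170
+CRS_NODES=pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip
+SCAN_NAME=racnodepc1-scan
+CRS_ASM_DISCOVERY_STRING=/oradata/asm_disk*
+CRS_ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img
+ASM_ON_NAS=True
+INSTALL_NODE=racnodep1
+OP_TYPE=setuprac
+```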
+
+You can use the following example to create the first Oracle RAC container:
+```bash
+podman create -t -i \
+--hostname racnodep1 \
+--dns-search "example.info" \
+--dns 10.0.20.25 \
+--shm-size 4G \
+--secret pwdsecret \
+--secret keysecret \
+--volume /scratch/rac/cluster01/node1:/u01 \
+--volume /scratch/common_scripts/podman/rac/envfile_racnodep1:/etc/rac_env_vars/envfile \
+--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
+--volume /scratch:/scratch \
+--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
+--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
+--cpuset-cpus 0-1 \
+--memory 16G \
+--memory-swap 32G \
+--sysctl kernel.shmall=2097152 \
+--sysctl "kernel.sem=250 32000 100 128" \
+--sysctl kernel.shmmax=8589934592 \
+--sysctl kernel.shmmni=4096 \
+--cap-add=SYS_RESOURCE \
+--cap-add=NET_ADMIN \
+--cap-add=SYS_NICE \
+--cap-add=AUDIT_WRITE \
+--cap-add=AUDIT_CONTROL \
+--cap-add=NET_RAW \
+--volume racstorage:/oradata \
+--restart=always \
+--ulimit rtprio=99 \
+--systemd=always \
+--name racnodep1 \
+ localhost/oracle/database-rac:21.3.0-slim
+ ```
+
+**Note**: Before creating the containers, make sure you have edited the file `/scratch/common_scripts/podman/rac/envfile_racnodep2` and set the variables based on your environment.
+
+Create another Oracle RAC container:
+
+ ```bash
+podman create -t -i \
+--hostname racnodep2 \
+--dns-search "example.info" \
+--dns 10.0.20.25 \
+--shm-size 4G \
+--secret pwdsecret \
+--secret keysecret \
+--volume /scratch/rac/cluster01/node2:/u01 \
+--volume /scratch/common_scripts/podman/rac/envfile_racnodep2:/etc/rac_env_vars/envfile \
+--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
+--volume /scratch:/scratch \
+--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
+--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
+--cpuset-cpus 0-1 \
+--memory 16G \
+--memory-swap 32G \
+--sysctl kernel.shmall=2097152 \
+--sysctl "kernel.sem=250 32000 100 128" \
+--sysctl kernel.shmmax=8589934592 \
+--sysctl kernel.shmmni=4096 \
+--cap-add=SYS_RESOURCE \
+--cap-add=NET_ADMIN \
+--cap-add=SYS_NICE \
+--cap-add=AUDIT_WRITE \
+--cap-add=AUDIT_CONTROL \
+--cap-add=NET_RAW \
+--volume racstorage:/oradata \
+--restart=always \
+--ulimit rtprio=99 \
+--systemd=always \
+--name racnodep2 \
+ localhost/oracle/database-rac:21.3.0-slim
+ ```
+
+### Section 2.2: Deploying 2 Node Oracle RAC Setup on Podman using Slim Image Using User Defined response files
+#### Section 2.2.1: Deploying With BlockDevices
+##### Section 2.2.1.1: Prerequisites for setting up Oracle RAC using User-Defined Files with Block Devices
+- On the shared folder between both RAC nodes, copy the file [grid_setup_new_21c.rsp](withresponsefiles/blockdevices/grid_setup_new_21c.rsp) into `/scratch/common_scripts/podman/rac/`.
+- Also, prepare a database response file similar to this [dbca_21c.rsp](withresponsefiles/dbca_21c.rsp).
+- In the example below, all environment variables passed to the container are captured in a separate envfile that is mounted to both RAC nodes.
Create the envfiles [envfile_racnodep1](withresponsefiles/blockdevices/envfile_racnodep1) and [envfile_racnodep2](withresponsefiles/blockdevices/envfile_racnodep2) for both nodes in the directory `/scratch/common_scripts/podman/rac/`.
+- If SELinux is enabled on the host machine, then also run the following commands:
+  ```bash
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+  restorecon -v /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  restorecon -v /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/envfile_racnodep1
+  restorecon -v /scratch/common_scripts/podman/rac/envfile_racnodep1
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/envfile_racnodep2
+  restorecon -v /scratch/common_scripts/podman/rac/envfile_racnodep2
+  ```
+  Note: For security reasons, passwords defined in response files are overwritten by the passwords defined with `podman secret`, so that passwords are not exposed as plain text.
+You can skip this step if you are not planning to use **User Defined Response Files for RAC**.
+
+Follow the instructions below to set up Oracle RAC on Podman using the Slim Image with user-defined response files.
+
+
+You can use the following example to create the first Oracle RAC container:
+
+**Note**: Before creating the containers, make sure you have edited the file `/scratch/common_scripts/podman/rac/envfile_racnodep1` and set the variables based on your environment.
+
+```bash
+podman create -t -i \
+--hostname racnodep1 \
+--dns-search "example.info" \
+--dns 10.0.20.25 \
+--shm-size 4G \
+--secret pwdsecret \
+--secret keysecret \
+--volume /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp:/tmp/grid_21c.rsp \
+--volume /scratch/common_scripts/podman/rac/dbca_21c.rsp:/tmp/dbca_21c.rsp \
+--volume /scratch/rac/cluster01/node1:/u01 \
+--volume /scratch:/scratch \
+--volume /scratch/common_scripts/podman/rac/envfile_racnodep1:/etc/rac_env_vars/envfile \
+--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
+--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
+--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
+--cpuset-cpus 0-1 \
+--memory 16G \
+--memory-swap 32G \
+--sysctl kernel.shmall=2097152 \
+--sysctl "kernel.sem=250 32000 100 128" \
+--sysctl kernel.shmmax=8589934592 \
+--sysctl kernel.shmmni=4096 \
+--cap-add=SYS_RESOURCE \
+--cap-add=NET_ADMIN \
+--cap-add=SYS_NICE \
+--cap-add=AUDIT_WRITE \
+--cap-add=AUDIT_CONTROL \
+--cap-add=NET_RAW \
+--device=/dev/oracleoci/oraclevdd:/dev/asm-disk1 \
+--device=/dev/oracleoci/oraclevde:/dev/asm-disk2 \
+--restart=always \
+--ulimit rtprio=99 \
+--systemd=always \
+--name racnodep1 \
+localhost/oracle/database-rac:21.3.0-slim
+ ```
+
+**Note**: Before creating the containers, make sure you have edited the file `/scratch/common_scripts/podman/rac/envfile_racnodep2` and set the variables based on your environment.
+
+To create another container, use the following command:
+
+```bash
+podman create -t -i \
+--hostname racnodep2 \
+--dns-search "example.info" \
+--dns 10.0.20.25 \
+--shm-size 4G \
+--secret pwdsecret \
+--secret keysecret \
+--volume /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp:/tmp/grid_21c.rsp \
+--volume /scratch/common_scripts/podman/rac/dbca_21c.rsp:/tmp/dbca_21c.rsp \
+--volume /scratch/rac/cluster01/node2:/u01 \
+--volume /scratch:/scratch \
+--volume /scratch/common_scripts/podman/rac/envfile_racnodep2:/etc/rac_env_vars/envfile \
+--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
+--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
+--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
+--cpuset-cpus 0-1 \
+--memory 16G \
+--memory-swap 32G \
+--sysctl kernel.shmall=2097152 \
+--sysctl "kernel.sem=250 32000 100 128" \
+--sysctl kernel.shmmax=8589934592 \
+--sysctl kernel.shmmni=4096 \
+--cap-add=SYS_RESOURCE \
+--cap-add=NET_ADMIN \
+--cap-add=SYS_NICE \
+--cap-add=AUDIT_WRITE \
+--cap-add=AUDIT_CONTROL \
+--cap-add=NET_RAW \
+--device=/dev/oracleoci/oraclevdd:/dev/asm-disk1 \
+--device=/dev/oracleoci/oraclevde:/dev/asm-disk2 \
+--restart=always \
+--ulimit rtprio=99 \
+--systemd=always \
+--name racnodep2 \
+ localhost/oracle/database-rac:21.3.0-slim
+ ```
+#### Section 2.2.2: Deploying with NFS Storage Devices
+##### Section 2.2.2.1: Prerequisites for setting up Oracle RAC using User Defined Files with NFS Devices
+- Create an NFS volume to be used for ASM devices for Oracle RAC. See [Configuring NFS for Storage for Oracle RAC on Podman](https://review.us.oracle.com/review2/Review.html#reviewId=467473;scope=document;status=open,fixed;documentId=4229197) for more details. **Note:** You can skip this step if you are planning to use block devices for storage.
+
+- Make sure the ASM NFS storage devices do not have any existing file system.
+- On the shared folder between both RAC nodes, create a [grid_setup_new_21c.rsp](withresponsefiles/nfsdevices/grid_setup_new_21c.rsp) file, similar to the linked sample, inside the directory `/scratch/common_scripts/podman/rac/`.
+- Also, prepare a database response file similar to this [dbca_21c.rsp](withresponsefiles/dbca_21c.rsp) inside the directory `/scratch/common_scripts/podman/rac/`.
+- In the example below, all environment variables passed to the container are captured in a separate envfile that is mounted to both RAC nodes.
+
+  Create the envfiles [envfile_racnodep1](withresponsefiles/nfsdevices/envfile_racnodep1) and [envfile_racnodep2](withresponsefiles/nfsdevices/envfile_racnodep2) for both nodes in the directory `/scratch/common_scripts/podman/rac/`.
+- If SELinux is enabled on the host machine, then also run the following commands:
+  ```bash
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+  restorecon -v /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  restorecon -v /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/envfile_racnodep1
+  restorecon -v /scratch/common_scripts/podman/rac/envfile_racnodep1
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/envfile_racnodep2
+  restorecon -v /scratch/common_scripts/podman/rac/envfile_racnodep2
+  ```
+You can skip this step if you are not planning to use **User Defined Response Files for RAC**.
+
+Follow the instructions below to set up Oracle RAC on Podman using the Slim Image with user-defined response files.
+
+**Note**: Before creating the containers, make sure you have edited the file `/scratch/common_scripts/podman/rac/envfile_racnodep1` and set the variables based on your environment.
+
+You can use the following example to create the first Oracle RAC container:
+```bash
+podman create -t -i \
+--hostname racnodep1 \
+--dns-search "example.info" \
+--dns 10.0.20.25 \
+--shm-size 4G \
+--secret pwdsecret \
+--secret keysecret \
+--volume /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp:/tmp/grid_21c.rsp \
+--volume /scratch/common_scripts/podman/rac/dbca_21c.rsp:/tmp/dbca_21c.rsp \
+--volume /scratch/rac/cluster01/node1:/u01 \
+--volume /scratch:/scratch \
+--volume /scratch/common_scripts/podman/rac/envfile_racnodep1:/etc/rac_env_vars/envfile \
+--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
+--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
+--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
+--cpuset-cpus 0-1 \
+--memory 16G \
+--memory-swap 32G \
+--sysctl kernel.shmall=2097152 \
+--sysctl "kernel.sem=250 32000 100 128" \
+--sysctl kernel.shmmax=8589934592 \
+--sysctl kernel.shmmni=4096 \
+--cap-add=SYS_RESOURCE \
+--cap-add=NET_ADMIN \
+--cap-add=SYS_NICE \
+--cap-add=AUDIT_WRITE \
+--cap-add=AUDIT_CONTROL \
+--cap-add=NET_RAW \
+--volume racstorage:/oradata \
+--restart=always \
+--ulimit rtprio=99 \
+--systemd=always \
+--name racnodep1 \
+localhost/oracle/database-rac:21.3.0-slim
+ ```
+
+**Note**: Before creating the containers, make sure you have edited the file `/scratch/common_scripts/podman/rac/envfile_racnodep2` and set the variables based on your environment.
+
+To create another container, use the following command:
+
+```bash
+podman create -t -i \
+--hostname racnodep2 \
+--dns-search "example.info" \
+--dns 10.0.20.25 \
+--shm-size 4G \
+--secret pwdsecret \
+--secret keysecret \
+--volume /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp:/tmp/grid_21c.rsp \
+--volume /scratch/common_scripts/podman/rac/dbca_21c.rsp:/tmp/dbca_21c.rsp \
+--volume /scratch/rac/cluster01/node2:/u01 \
+--volume /scratch:/scratch \
+--volume /scratch/common_scripts/podman/rac/envfile_racnodep2:/etc/rac_env_vars/envfile \
+--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
+--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
+--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
+--cpuset-cpus 0-1 \
+--memory 16G \
+--memory-swap 32G \
+--sysctl kernel.shmall=2097152 \
+--sysctl "kernel.sem=250 32000 100 128" \
+--sysctl kernel.shmmax=8589934592 \
+--sysctl kernel.shmmni=4096 \
+--cap-add=SYS_RESOURCE \
+--cap-add=NET_ADMIN \
+--cap-add=SYS_NICE \
+--cap-add=AUDIT_WRITE \
+--cap-add=AUDIT_CONTROL \
+--cap-add=NET_RAW \
+--volume racstorage:/oradata \
+--restart=always \
+--ulimit rtprio=99 \
+--systemd=always \
+--name racnodep2 \
+ localhost/oracle/database-rac:21.3.0-slim
+ ```
+**Note:**
+- Change the environment variables based on your environment. Refer to [Section 9: Environment Variables for Oracle RAC on Containers](#section-9-environment-variables-for-oracle-rac-on-containers) for more details.
+- These examples use a Podman bridge network with one public and two private networks; the options `--sysctl 'net.ipv4.conf.eth1.rp_filter=2' --sysctl 'net.ipv4.conf.eth2.rp_filter=2'` are required only when two private networks are used, and can otherwise be omitted.
+- If you are planning to place database files such as datafiles and archive logs in different disk groups, then you need to pass these parameters: `DB_ASM_DEVICE_LIST`, `RECO_ASM_DEVICE_LIST`, `DB_DATA_FILE_DEST`, `DB_RECOVERY_FILE_DEST`. Refer to [Section 9: Environment Variables for Oracle RAC on Containers](#section-9-environment-variables-for-oracle-rac-on-containers) for more details.
+
+## Section 3: Attach the network to containers
+
+You need to assign the Podman networks created in the preceding sections to the containers. Run the following commands:
+
+### Attach the network to racnodep1
+
+```bash
+podman network disconnect podman racnodep1
+podman network connect rac_pub1_nw --ip 10.0.20.170 racnodep1
+podman network connect rac_priv1_nw --ip 192.168.17.170 racnodep1
+podman network connect rac_priv2_nw --ip 192.168.18.170 racnodep1
+```
+### Attach the network to racnodep2
+
+```bash
+podman network disconnect podman racnodep2
+podman network connect rac_pub1_nw --ip 10.0.20.171 racnodep2
+podman network connect rac_priv1_nw --ip 192.168.17.171 racnodep2
+podman network connect rac_priv2_nw --ip 192.168.18.171 racnodep2
+```
+## Section 4: Start the containers
+
+Start the containers by running the following commands:
+
+```bash
+podman start racnodep1
+podman start racnodep2
+```
+
+It can take 20 minutes or longer to create and set up the 2 node Oracle RAC environment. To check the logs, use the following command from another terminal session:
+
+```bash
+podman exec racnodep1 /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log"
+```
+
+You should see the database creation success message at the end:
+```bash
+####################################
+ORACLE RAC DATABASE IS READY TO USE!
+####################################
+```
+
+Note:
+- If the podman logs report errors about files mounted on a container volume not being detected, then make sure the files are labeled correctly with the `container_file_t` context. You can use `ls -lZ` to see the security context set on files.
+  For example:
+  ```bash
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  restorecon -vF /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  ls -lZ /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  ```
+
+## Section 5: Validating Oracle RAC Environment
+You can validate whether the environment is healthy by running the following command:
+```bash
+podman ps -a
+
+CONTAINER ID  IMAGE                                  COMMAND               CREATED         STATUS                   PORTS  NAMES
+f1345fd4047b  localhost/oracle/rac-dnsserver:latest  /bin/sh -c exec $...  8 hours ago     Up 8 hours (healthy)            rac-dnsserver
+2f42e49758d1  localhost/oracle/database-rac:21.3.0                         46 minutes ago  Up 37 minutes (healthy)         racnodep1
+a27fceea9fe6  localhost/oracle/database-rac:21.3.0                         46 minutes ago  Up 37 minutes (healthy)         racnodep2
+```
+Note:
+- Look for `(healthy)` next to the container names under the `STATUS` column.
+
+## Section 6: Connecting to Oracle RAC Environment
+
+**IMPORTANT:** This section assumes that you have successfully created an Oracle RAC cluster using the preceding sections.
+Refer to [README](./docs/CONNECTING.md) for instructions on how to connect to the Oracle RAC Database.
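+
+As a quick smoke test, you can also rerun the check used by the container health command and connect through the SCAN listener. The following is a minimal sketch that assumes the defaults used in this guide (SCAN name `racnodepc1-scan`, port `1521`, and the `soepdb` service) and that SQL*Plus is available on a client that can resolve the SCAN name; for the complete procedure, use the linked README.
+
+```bash
+# Rerun the same check that the container health command uses
+podman exec racnodep1 /bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus
+
+# Connect to the example service through the SCAN listener (you are prompted for the password)
+sqlplus system@//racnodepc1-scan:1521/soepdb
+```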
+
+## Section 7: Sample of Addition of Nodes to Oracle RAC Containers based on Slim Image
+### Section 7.1: Sample of Addition of Nodes to Oracle RAC Containers based on Slim Image Without Response File
+Below is an example of adding one more node to the existing 2 node Oracle RAC cluster using the Slim Image and without user-defined response files:
+- Create the envfile [envfile_racnodep3](withoutresponsefiles/blockdevices/envfile_racnodep3) for the additional node and place it at `/scratch/common_scripts/podman/rac/envfile_racnodep3`.
+
+**Note**: Before creating the container, make sure you have edited the file `/scratch/common_scripts/podman/rac/envfile_racnodep3` and set the variables based on your environment.
+
+- Prepare the directory for the additional node:
+  ```bash
+  mkdir -p /scratch/rac/cluster01/node3
+  rm -rf /scratch/rac/cluster01/node3/*
+  ```
+- Create the additional Oracle RAC container:
+  ```bash
+  podman create -t -i \
+  --hostname racnodep3 \
+  --dns-search "example.info" \
+  --dns 10.0.20.25 \
+  --shm-size 4G \
+  --secret pwdsecret \
+  --secret keysecret \
+  --volume /scratch/rac/cluster01/node3:/u01 \
+  --volume /scratch/common_scripts/podman/rac/envfile_racnodep3:/etc/rac_env_vars/envfile \
+  --health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
+  --volume /scratch:/scratch \
+  --cpuset-cpus 0-1 \
+  --memory 16G \
+  --memory-swap 32G \
+  --sysctl kernel.shmall=2097152 \
+  --sysctl "kernel.sem=250 32000 100 128" \
+  --sysctl kernel.shmmax=8589934592 \
+  --sysctl kernel.shmmni=4096 \
+  --sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
+  --sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
+  --cap-add=SYS_RESOURCE \
+  --cap-add=NET_ADMIN \
+  --cap-add=SYS_NICE \
+  --cap-add=AUDIT_WRITE \
+  --cap-add=AUDIT_CONTROL \
+  --cap-add=NET_RAW \
+  --device=/dev/oracleoci/oraclevdd:/dev/asm-disk1 \
+  --device=/dev/oracleoci/oraclevde:/dev/asm-disk2 \
+  --restart=always \
+  --ulimit rtprio=99 \
+  --systemd=always \
+  --name racnodep3 \
+    localhost/oracle/database-rac:21.3.0-slim
+
+  podman network disconnect podman racnodep3
+  podman network connect rac_pub1_nw --ip 10.0.20.172 racnodep3
+  podman network connect rac_priv1_nw --ip 192.168.17.172 racnodep3
+  podman network connect rac_priv2_nw --ip 192.168.18.172 racnodep3
+  podman start racnodep3
+  podman exec racnodep3 /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log"
+  ```
+  A successful node addition shows a message similar to the following:
+  ```bash
+  ========================================================
+  Oracle Database ORCLCDB3 is up and running on racnodep3.
+  ========================================================
+  ```
+
+## Section 8: Sample of Addition of Nodes to Oracle RAC Containers based on Oracle RAC Slim Image with NFS Storage Devices
+
+### Section 8.1: Sample of Addition of Nodes to Oracle RAC Containers based on Oracle RAC Image Without Response File
+Below is an example of adding one more node to the existing 2 node Oracle RAC cluster using the Oracle RAC image and without user-defined response files.
+**Note**: Before creating the container, make sure you have edited the file `/scratch/common_scripts/podman/rac/envfile_racnodep3` and set the variables based on your environment.
+
+- Prepare the directory for the additional node:
+  ```bash
+  mkdir -p /scratch/rac/cluster01/node3
+  rm -rf /scratch/rac/cluster01/node3/*
+  ```
+- Create the additional Oracle RAC container:
+  ```bash
+  podman create -t -i \
+  --hostname racnodep3 \
+  --dns-search "example.info" \
+  --dns 10.0.20.25 \
+  --shm-size 4G \
+  --secret pwdsecret \
+  --secret keysecret \
+  --volume /scratch/rac/cluster01/node3:/u01 \
+  --volume /scratch/common_scripts/podman/rac/envfile_racnodep3:/etc/rac_env_vars/envfile \
+  --health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
+  --volume /scratch:/scratch \
+  --cpuset-cpus 0-1 \
+  --memory 16G \
+  --memory-swap 32G \
+  --sysctl kernel.shmall=2097152 \
+  --sysctl "kernel.sem=250 32000 100 128" \
+  --sysctl kernel.shmmax=8589934592 \
+  --sysctl kernel.shmmni=4096 \
+  --sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
+  --sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
+  --cap-add=SYS_RESOURCE \
+  --cap-add=NET_ADMIN \
+  --cap-add=SYS_NICE \
+  --cap-add=AUDIT_WRITE \
+  --cap-add=AUDIT_CONTROL \
+  --cap-add=NET_RAW \
+  --volume racstorage:/oradata \
+  --restart=always \
+  --ulimit rtprio=99 \
+  --systemd=always \
+  --name racnodep3 \
+    localhost/oracle/database-rac:21.3.0-slim
+
+  podman network disconnect podman racnodep3
+  podman network connect rac_pub1_nw --ip 10.0.20.172 racnodep3
+  podman network connect rac_priv1_nw --ip 192.168.17.172 racnodep3
+  podman network connect rac_priv2_nw --ip 192.168.18.172 racnodep3
+  podman start racnodep3
+  podman exec racnodep3 /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log"
+
+  ========================================================
+  Oracle Database ORCLCDB3 is up and running on racnodep3.
+  ========================================================
+  ```
+
+## Section 9: Environment Variables for Oracle RAC on Containers
+Refer to [Environment Variables Explained for Oracle RAC on Podman Compose](../../../docs/ENVIRONMENTVARIABLES.md) for an explanation of all the environment variables related to Oracle RAC on Podman Compose. Change or set these environment variables based on your environment.
+
+## Cleanup
+Refer to [README](../../../docs/CLEANUP.md) for instructions on how to clean up the Oracle RAC Database Container Environment.
+
+## Support
+
+At the time of this release, Oracle RAC on Podman is supported for Oracle Linux 8.10 and later. To see the current Linux support certifications, refer to [Oracle RAC on Podman Documentation](https://docs.oracle.com/en/database/oracle/oracle-database/21/install-and-upgrade.html).
+
+## License
+
+To download and run Oracle Grid Infrastructure and Oracle Database, regardless of whether inside or outside a container, you must download the binaries from the Oracle website and accept the license indicated on that page.
+
+All scripts and files hosted in this repository that are required to build the container images are, unless otherwise noted, released under a UPL 1.0 license.
+
+## Copyright
+
+Copyright (c) 2014-2024 Oracle and/or its affiliates.
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/blockdevices/envfile_racnodep1 b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/blockdevices/envfile_racnodep1 new file mode 100644 index 0000000000..e0668f9627 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/blockdevices/envfile_racnodep1 @@ -0,0 +1,22 @@ +CRS_PRIVATE_IP1=192.168.17.170 +CRS_PRIVATE_IP2=192.168.18.170 +CRS_NODES=pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip +DNS_SERVERS=10.0.20.25 +SCAN_NAME=racnodepc1-scan +INSTALL_NODE=racnodep1 +DB_SW_ZIP_FILE=LINUX.X64_213000_db_home.zip +STAGING_SOFTWARE_LOC=/scratch/software/21c/goldimages +GRID_SW_ZIP_FILE=LINUX.X64_213000_grid_home.zip +DB_NAME=ORCLCDB +CRS_ASM_DEVICE_LIST=/dev/asm-disk1,/dev/asm-disk2 +INIT_SGA_SIZE=3G +INIT_PGA_SIZE=2G +DB_PWD_FILE=pwdsecret +PWD_KEY=keysecret +DB_SERVICE=service:soepdb +GRID_HOME=/u01/app/21c/grid +GRID_BASE=/u01/app/grid +DB_HOME=/u01/app/oracle/product/21c/dbhome_1 +DB_BASE=/u01/app/oracle +INVENTORY=/u01/app/oraInventory +OP_TYPE=setuprac diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/blockdevices/envfile_racnodep2 b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/blockdevices/envfile_racnodep2 new file mode 100644 index 0000000000..26bdd100c4 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/blockdevices/envfile_racnodep2 @@ -0,0 +1,22 @@ +CRS_PRIVATE_IP1=192.168.17.171 +CRS_PRIVATE_IP2=192.168.18.171 +CRS_NODES=pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip +GRID_HOME=/u01/app/21c/grid +GRID_BASE=/u01/app/grid +DB_HOME=/u01/app/oracle/product/21c/dbhome_1 +DB_BASE=/u01/app/oracle +INVENTORY=/u01/app/oraInventory +DNS_SERVERS=10.0.20.25 +STAGING_SOFTWARE_LOC=/scratch/software/21c/goldimages +GRID_SW_ZIP_FILE=LINUX.X64_213000_grid_home.zip +DB_SW_ZIP_FILE=LINUX.X64_213000_db_home.zip +SCAN_NAME=racnodepc1-scan +OP_TYPE=setuprac +DB_NAME=ORCLCDB +CRS_ASM_DEVICE_LIST=/dev/asm-disk1,/dev/asm-disk2 +INIT_SGA_SIZE=3G +INIT_PGA_SIZE=2G +INSTALL_NODE=racnodep1 +DB_PWD_FILE=pwdsecret +PWD_KEY=keysecret +DB_SERVICE=service:soepdb \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/blockdevices/envfile_racnodep3 b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/blockdevices/envfile_racnodep3 new file mode 100644 index 0000000000..c7b8f818a4 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/blockdevices/envfile_racnodep3 @@ -0,0 +1,24 @@ +CRS_PRIVATE_IP1=192.168.17.172 +CRS_PRIVATE_IP2=192.168.18.172 +CRS_NODES=pubhost:racnodep3,viphost:racnodep3-vip +GRID_HOME=/u01/app/21c/grid +GRID_BASE=/u01/app/grid +DB_HOME=/u01/app/oracle/product/21c/dbhome_1 +DB_BASE=/u01/app/oracle +INVENTORY=/u01/app/oraInventory +DNS_SERVERS=10.0.20.25 +STAGING_SOFTWARE_LOC=/scratch/software/21c/goldimages +GRID_SW_ZIP_FILE=LINUX.X64_213000_grid_home.zip +DB_SW_ZIP_FILE=LINUX.X64_213000_db_home.zip +SCAN_NAME=racnodepc1-scan +OP_TYPE=racaddnode +DB_NAME=ORCLCDB +CRS_ASM_DEVICE_LIST=/dev/asm-disk1,/dev/asm-disk2 
+INIT_SGA_SIZE=3G +INIT_PGA_SIZE=2G +INSTALL_NODE=racnodep3 +EXISTING_CLS_NODE=racnodep1,racnodep2 +DB_PWD_FILE=pwdsecret +PWD_KEY=keysecret +DB_SERVICE=service:soepdb +IGNORE_CRS_PREREQS=true \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/nfsdevices/envfile_racnodep1 b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/nfsdevices/envfile_racnodep1 new file mode 100644 index 0000000000..7efbdbc948 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/nfsdevices/envfile_racnodep1 @@ -0,0 +1,24 @@ +CRS_PRIVATE_IP1=192.168.17.170 +CRS_PRIVATE_IP2=192.168.18.170 +CRS_NODES=pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip +GRID_HOME=/u01/app/21c/grid +GRID_BASE=/u01/app/grid +DB_HOME=/u01/app/oracle/product/21c/dbhome_1 +DB_BASE=/u01/app/oracle +INVENTORY=/u01/app/oraInventory +DNS_SERVERS=10.0.20.25 +STAGING_SOFTWARE_LOC=/scratch/software/21c/goldimages +GRID_SW_ZIP_FILE=LINUX.X64_213000_grid_home.zip +DB_SW_ZIP_FILE=LINUX.X64_213000_db_home.zip +SCAN_NAME=racnodepc1-scan +CRS_ASM_DISCOVERY_STRING=/oradata/asm_disk* +OP_TYPE=setuprac +DB_NAME=ORCLCDB +CRS_ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img +INIT_SGA_SIZE=3G +INIT_PGA_SIZE=2G +INSTALL_NODE=racnodep1 +DB_PWD_FILE=pwdsecret +PWD_KEY=keysecret +ASM_ON_NAS=True +DB_SERVICE=service:soepdb \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/nfsdevices/envfile_racnodep2 b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/nfsdevices/envfile_racnodep2 new file mode 100644 index 0000000000..ee7be37119 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/nfsdevices/envfile_racnodep2 @@ -0,0 +1,24 @@ +CRS_PRIVATE_IP1=192.168.17.171 +CRS_PRIVATE_IP2=192.168.18.171 +CRS_NODES=pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip +GRID_HOME=/u01/app/21c/grid +GRID_BASE=/u01/app/grid +DB_HOME=/u01/app/oracle/product/21c/dbhome_1 +DB_BASE=/u01/app/oracle +INVENTORY=/u01/app/oraInventory +DNS_SERVERS=10.0.20.25 +STAGING_SOFTWARE_LOC=/scratch/software/21c/goldimages +GRID_SW_ZIP_FILE=LINUX.X64_213000_grid_home.zip +DB_SW_ZIP_FILE=LINUX.X64_213000_db_home.zip +SCAN_NAME=racnodepc1-scan +CRS_ASM_DISCOVERY_STRING=/oradata/asm_disk* +OP_TYPE=setuprac +DB_NAME=ORCLCDB +CRS_ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img +INIT_SGA_SIZE=3G +INIT_PGA_SIZE=2G +INSTALL_NODE=racnodep1 +DB_PWD_FILE=pwdsecret +PWD_KEY=keysecret +ASM_ON_NAS=True +DB_SERVICE=service:soepdb \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/nfsdevices/envfile_racnodep3 b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/nfsdevices/envfile_racnodep3 new file mode 100644 index 0000000000..fcfde07e3a --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withoutresponsefiles/nfsdevices/envfile_racnodep3 @@ -0,0 +1,26 @@ +CRS_PRIVATE_IP1=192.168.17.172 
+CRS_PRIVATE_IP2=192.168.18.172 +CRS_NODES=pubhost:racnodep3,viphost:racnodep3-vip +GRID_HOME=/u01/app/21c/grid +GRID_BASE=/u01/app/grid +DB_HOME=/u01/app/oracle/product/21c/dbhome_1 +DB_BASE=/u01/app/oracle +INVENTORY=/u01/app/oraInventory +DNS_SERVERS=10.0.20.25 +STAGING_SOFTWARE_LOC=/scratch/software/21c/goldimages +GRID_SW_ZIP_FILE=LINUX.X64_213000_grid_home.zip +DB_SW_ZIP_FILE=LINUX.X64_213000_db_home.zip +SCAN_NAME=racnodepc1-scan +CRS_ASM_DISCOVERY_STRING=/oradata/asm_disk* +OP_TYPE=racaddnode +DB_NAME=ORCLCDB +CRS_ASM_DEVICE_LIST=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img +INIT_SGA_SIZE=3G +INIT_PGA_SIZE=2G +INSTALL_NODE=racnodep3 +EXISTING_CLS_NODE=racnodep1,racnodep2 +DB_PWD_FILE=pwdsecret +PWD_KEY=keysecret +ASM_ON_NAS=True +DB_SERVICE=service:soepdb +IGNORE_CRS_PREREQS=true \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/addition/dbca_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/addition/dbca_21c.rsp new file mode 100644 index 0000000000..c8b0e201e2 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/addition/dbca_21c.rsp @@ -0,0 +1,58 @@ +responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v21.0.0 +gdbName=ORCLCDB +sid=ORCLCDB +databaseConfigType=RAC +RACOneNodeServiceName= +policyManaged=false +managementPolicy= +createServerPool=false +serverPoolName= +cardinality= +force=false +pqPoolName= +pqCardinality= +createAsContainerDatabase=true +numberOfPDBs=1 +pdbName=ORCLPDB +useLocalUndoForPDBs=true +pdbAdminPassword=ORacle__21c +nodelist=racnodep3 +templateName={ORACLE_HOME}/assistants/dbca/templates/General_Purpose.dbc +sysPassword=ORacle__21c +systemPassword=ORacle__21c +oracleHomeUserPassword= +emConfiguration= +runCVUChecks=true +dbsnmpPassword=ORacle__21c +omsHost= +omsPort= +emUser= +emPassword= +dvConfiguration=false +dvUserName= +dvUserPassword= +dvAccountManagerName= +dvAccountManagerPassword= +olsConfiguration=false +datafileJarLocation={ORACLE_HOME}/assistants/dbca/templates/ +datafileDestination=+DATA/{DB_UNIQUE_NAME}/ +recoveryAreaDestination= +storageType=ASM +diskGroupName=+DATA/{DB_UNIQUE_NAME}/ +asmsnmpPassword= +recoveryGroupName= +characterSet=AL32UTF8 +nationalCharacterSet=AL16UTF16 +registerWithDirService= +dirServiceUserName= +dirServicePassword= +walletPassword= +listeners=LISTENER +variablesFile= +variables=DB_UNIQUE_NAME=ORCLCDB,ORACLE_BASE=/u01/app/oracle,PDB_NAME=ORCLPDB,DB_NAME=ORCLCDB,ORACLE_HOME=/u01/app/oracle/product/21.3.0/dbhome_1,SID=ORCLCDB +initParams=audit_trail=none,audit_sys_operations=false,remote_login_passwordfile=exclusive +sampleSchema=false +memoryPercentage=40 +databaseType=MULTIPURPOSE +automaticMemoryManagement=false +totalMemory=5000 \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/addition/envfile_racnodep3 b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/addition/envfile_racnodep3 new file mode 100644 index 0000000000..06d095a250 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/addition/envfile_racnodep3 @@ -0,0 +1,17 @@ 
+CRS_PRIVATE_IP1=192.168.17.172 +CRS_PRIVATE_IP2=192.168.18.172 +GRID_HOME=/u01/app/21c/grid +DEFAULT_GATEWAY=172.20.1.1 +COPY_GRID_SOFTWARE=true +STAGING_SOFTWARE_LOC=/scratch/software/21c/goldimages +GRID_SW_ZIP_FILE=LINUX.X64_213000_grid_home.zip +COPY_DB_SOFTWARE=true +DB_SW_ZIP_FILE=LINUX.X64_213000_db_home.zip +OP_TYPE=setuprac +GRID_RESPONSE_FILE=/tmp/grid_21c.rsp +DBCA_RESPONSE_FILE=/tmp/dbca_21c.rsp +OP_TYPE=racaddnode +DB_NAME=ORCLCDB +EXISTING_CLS_NODE=racnodep1,racnodep2 +DB_PWD_FILE=pwdsecret +PWD_KEY=keysecret \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/addition/grid_setup_new_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/addition/grid_setup_new_21c.rsp new file mode 100644 index 0000000000..7165f956ef --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/addition/grid_setup_new_21c.rsp @@ -0,0 +1,63 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 +INVENTORY_LOCATION=/u01/app/oraInventory +installOption=CRS_CONFIG +ORACLE_BASE=/u01/app/grid +clusterUsage=RAC +zeroDowntimeGIPatching= +skipDriverUpdate= +OSDBA=asmdba +OSOPER=asmoper +OSASM=asmadmin +scanType= +scanClientDataFile= +scanName=racnodepc1-scan +scanPort=1521 +configureAsExtendedCluster= +clusterName=racnode-c +configureGNS= +configureDHCPAssignedVIPs= +gnsSubDomain= +gnsVIPAddress= +sites= +clusterNodes=racnodep3:racnodep3-vip:HUB +networkInterfaceList=eth0:172.20.1.0:1,eth1:192.168.17.0:5,eth2:192.168.18.0:5 +storageOption= +votingFilesLocations= +ocrLocations= +clientDataFile= +useIPMI= +bmcBinpath= +bmcUsername= +bmcPassword= +sysasmPassword=ORacle__21c +diskGroupName=DATA +redundancy=EXTERNAL +auSize= +failureGroups= +disksWithFailureGroupNames=/dev/asm-disk1,,/dev/asm-disk2, +diskList=/dev/asm-disk1,/dev/asm-disk2 +quorumFailureGroupNames= +diskString=/dev/asm* +asmsnmpPassword=ORacle__21c +configureAFD=false +ignoreDownNodes=false +configureBackupDG= +backupDGName= +backupDGRedundancy= +backupDGAUSize= +backupDGFailureGroups= +backupDGDisksWithFailureGroupNames= +backupDGDiskList= +backupDGQuorumFailureGroups= +managementOption= +omsHost= +omsPort= +emAdminUser= +emadminPassword= +executeRootScript=false +configMethod=ROOT +sudoPath= +sudoUserName= +batchInfo= +nodesToDelete= +enableAutoFixup= \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/addition/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/addition/podman-compose.yml new file mode 100644 index 0000000000..399fba7b78 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/addition/podman-compose.yml @@ -0,0 +1,77 @@ +--- +version: "3" +networks: + rac_pub1_nw: + name: ${PUBLIC_NETWORK_NAME} + external: true + rac_priv1_nw: + name: ${PRIVATE1_NETWORK_NAME} + external: true + rac_priv2_nw: + name: ${PRIVATE2_NETWORK_NAME} + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +services: + racnodep3: + container_name: ${RACNODE3_CONTAINER_NAME} + hostname: ${RACNODE3_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: 
${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - /scratch/rac/cluster01/node3:/u01 + - /scratch:/scratch + - ${DB_RESPONSE_FILE}:/tmp/dbca_21c.rsp + - ${GRID_RESPONSE_FILE}:/tmp/grid_21c.rsp + environment: + PRIVATE_IP1_LIST: ${RACNODE3_PRIVATE_IP1_LIST} + PRIVATE_IP2_LIST: ${RACNODE3_PRIVATE_IP2_LIST} + DEFAULT_GATEWAY: ${DEFAULT_GATEWAY} + GRID_HOME: /u01/app/21c/grid + COPY_GRID_SOFTWARE: true + STAGING_SOFTWARE_LOC: ${STAGING_SOFTWARE_LOC} + GRID_SW_ZIP_FILE: LINUX.X64_213000_grid_home.zip + COPY_DB_SOFTWARE: true + DB_SW_ZIP_FILE: LINUX.X64_213000_db_home.zip + GRID_RESPONSE_FILE: /tmp/grid_21c.rsp + DBCA_RESPONSE_FILE: /tmp/dbca_21c.rsp + OP_TYPE: racaddnode + EXISTING_CLS_NODE: ${EXISTING_CLS_NODE} + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD-SHELL", "if [ `cat /tmp/orod/oracle_rac_setup.log | grep -c 'ORACLE RAC DATABASE IS READY TO USE'` -ge 1 ]; then exit 0; else exit 1; fi"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/envfile_racnodep1 b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/envfile_racnodep1 new file mode 100644 index 0000000000..8ac6f400f8 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/envfile_racnodep1 @@ -0,0 +1,15 @@ +DNS_SERVERS=10.0.20.25 +CRS_PRIVATE_IP1=192.168.17.170 +CRS_PRIVATE_IP2=192.168.18.170 +GRID_HOME=/u01/app/21c/grid +STAGING_SOFTWARE_LOC=/scratch/software/21c/goldimages +GRID_SW_ZIP_FILE=LINUX.X64_213000_grid_home.zip +DB_SW_ZIP_FILE=LINUX.X64_213000_db_home.zip +OP_TYPE=setuprac +SCAN_NAME=racnodepc1-scan +INSTALL_NODE=racnodep1 +GRID_RESPONSE_FILE=/tmp/grid_21c.rsp +DBCA_RESPONSE_FILE=/tmp/dbca_21c.rsp +DB_PWD_FILE=pwdsecret +PWD_KEY=keysecret +DB_SERVICE=service:soepdb \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/envfile_racnodep2 b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/envfile_racnodep2 new file mode 100644 index 0000000000..7a9e3e570b --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/envfile_racnodep2 @@ -0,0 +1,15 @@ +DNS_SERVERS=10.0.20.25 +CRS_PRIVATE_IP1=192.168.17.171 +CRS_PRIVATE_IP2=192.168.18.171 +GRID_HOME=/u01/app/21c/grid +STAGING_SOFTWARE_LOC=/scratch/software/21c/goldimages +GRID_SW_ZIP_FILE=LINUX.X64_213000_grid_home.zip +DB_SW_ZIP_FILE=LINUX.X64_213000_db_home.zip +OP_TYPE=setuprac +SCAN_NAME=racnodepc1-scan +INSTALL_NODE=racnodep1 +GRID_RESPONSE_FILE=/tmp/grid_21c.rsp +DBCA_RESPONSE_FILE=/tmp/dbca_21c.rsp +DB_PWD_FILE=pwdsecret +PWD_KEY=keysecret +DB_SERVICE=service:soepdb \ No 
newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/grid_setup_new_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/grid_setup_new_21c.rsp new file mode 100644 index 0000000000..c7ffe19d4a --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/blockdevices/grid_setup_new_21c.rsp @@ -0,0 +1,64 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 +INVENTORY_LOCATION=/u01/app/oraInventory +oracle.install.option=CRS_CONFIG +ORACLE_BASE=/u01/app/grid +oracle.install.asm.OSDBA=dba +oracle.install.asm.OSOPER= +oracle.install.asm.OSASM=asmadmin +oracle.install.crs.config.scanType=LOCAL_SCAN +oracle.install.crs.config.SCANClientDataFile= +oracle.install.crs.config.gpnp.scanName=racnodepc1-scan +oracle.install.crs.config.gpnp.scanPort=1521 +oracle.install.crs.config.ClusterConfiguration=STANDALONE +oracle.install.crs.config.configureAsExtendedCluster=false +oracle.install.crs.config.memberClusterManifestFile= +oracle.install.crs.config.clusterName=rac01cluster +oracle.install.crs.config.gpnp.configureGNS= +oracle.install.crs.config.autoConfigureClusterNodeVIP=false +oracle.install.crs.config.gpnp.gnsOption= +oracle.install.crs.config.gpnp.gnsClientDataFile= +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= +oracle.install.crs.config.sites= +oracle.install.crs.config.clusterNodes=racnodep1:racnodep1-vip:HUB,racnodep2:racnodep2-vip:HUB +oracle.install.crs.config.networkInterfaceList=eth0:10.0.20.0:1,eth1:192.168.17.0:5,eth2:192.168.18.0:5 +oracle.install.asm.configureGIMRDataDG=false +oracle.install.crs.config.storageOption= +oracle.install.crs.config.useIPMI=false +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +oracle.install.asm.storageOption=ASM +oracle.install.asmOnNAS.ocrLocation= +oracle.install.asmOnNAS.configureGIMRDataDG=false +oracle.install.asmOnNAS.gimrLocation= +oracle.install.asm.SYSASMPassword=ORacle__21c +oracle.install.asm.diskGroup.name=DATA +oracle.install.asm.diskGroup.redundancy=EXTERNAL +oracle.install.asm.diskGroup.AUSize=4 +oracle.install.asm.diskGroup.FailureGroups= +oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/asm-disk1,,/dev/asm-disk2, +oracle.install.asm.diskGroup.disks=/dev/asm-disk1,/dev/asm-disk2 +oracle.install.asm.diskGroup.quorumFailureGroupNames= +oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm* +oracle.install.asm.monitorPassword=ORacle__21c +oracle.install.asm.gimrDG.name= +oracle.install.asm.gimrDG.redundancy= +oracle.install.asm.gimrDG.AUSize=1 +oracle.install.asm.gimrDG.FailureGroups= +oracle.install.asm.gimrDG.disksWithFailureGroupNames= +oracle.install.asm.gimrDG.disks= +oracle.install.asm.gimrDG.quorumFailureGroupNames= +oracle.install.asm.configureAFD=false +oracle.install.crs.configureRHPS=false +oracle.install.crs.config.ignoreDownNodes=false +oracle.install.config.managementOption=NONE +oracle.install.config.omsHost= +oracle.install.config.omsPort=0 +oracle.install.config.emAdminUser= +oracle.install.config.emAdminPassword= +oracle.install.crs.rootconfig.executeRootScript=false +oracle.install.crs.rootconfig.configMethod=ROOT +oracle.install.crs.rootconfig.sudoPath= +oracle.install.crs.rootconfig.sudoUserName= +oracle.install.crs.config.batchinfo= 
+oracle.install.crs.deleteNode.nodes= \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/dbca_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/dbca_21c.rsp new file mode 100644 index 0000000000..d45141abb4 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/dbca_21c.rsp @@ -0,0 +1,58 @@ +responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v21.0.0 +gdbName=ORCLCDB +sid=ORCLCDB +databaseConfigType=RAC +RACOneNodeServiceName= +policyManaged=false +managementPolicy= +createServerPool=false +serverPoolName= +cardinality= +force=false +pqPoolName= +pqCardinality= +createAsContainerDatabase=true +numberOfPDBs=1 +pdbName=ORCLPDB +useLocalUndoForPDBs=true +pdbAdminPassword=ORacle__21c +nodelist=racnodep1,racnodep2 +templateName={ORACLE_HOME}/assistants/dbca/templates/General_Purpose.dbc +sysPassword=ORacle__21c +systemPassword=ORacle__21c +oracleHomeUserPassword= +emConfiguration= +runCVUChecks=true +dbsnmpPassword=ORacle__21c +omsHost= +omsPort= +emUser= +emPassword= +dvConfiguration=false +dvUserName= +dvUserPassword= +dvAccountManagerName= +dvAccountManagerPassword= +olsConfiguration=false +datafileJarLocation={ORACLE_HOME}/assistants/dbca/templates/ +datafileDestination=+DATA/{DB_UNIQUE_NAME}/ +recoveryAreaDestination= +storageType=ASM +diskGroupName=+DATA/{DB_UNIQUE_NAME}/ +asmsnmpPassword= +recoveryGroupName= +characterSet=AL32UTF8 +nationalCharacterSet=AL16UTF16 +registerWithDirService= +dirServiceUserName= +dirServicePassword= +walletPassword= +listeners=LISTENER +variablesFile= +variables=DB_UNIQUE_NAME=ORCLCDB,ORACLE_BASE=/u01/app/oracle,PDB_NAME=ORCLPDB,DB_NAME=ORCLCDB,ORACLE_HOME=/u01/app/oracle/product/21.3.0/dbhome_1,SID=ORCLCDB +initParams=audit_trail=none,audit_sys_operations=false,remote_login_passwordfile=exclusive +sampleSchema=false +memoryPercentage=40 +databaseType=MULTIPURPOSE +automaticMemoryManagement=false +totalMemory=5000 \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/nfsdevices/envfile_racnodep1 b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/nfsdevices/envfile_racnodep1 new file mode 100644 index 0000000000..69944faa5e --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/nfsdevices/envfile_racnodep1 @@ -0,0 +1,17 @@ +DNS_SERVERS=10.0.20.25 +CRS_PRIVATE_IP1=192.168.17.170 +CRS_PRIVATE_IP2=192.168.18.170 +GRID_HOME=/u01/app/21c/grid +DEFAULT_GATEWAY=10.0.20.1 +STAGING_SOFTWARE_LOC=/scratch/software/21c/goldimages +GRID_SW_ZIP_FILE=LINUX.X64_213000_grid_home.zip +DB_SW_ZIP_FILE=LINUX.X64_213000_db_home.zip +OP_TYPE=setuprac +SCAN_NAME=racnodepc1-scan +INSTALL_NODE=racnodep1 +GRID_RESPONSE_FILE=/tmp/grid_21c.rsp +DBCA_RESPONSE_FILE=/tmp/dbca_21c.rsp +ASM_ON_NAS=True +DB_PWD_FILE=pwdsecret +PWD_KEY=keysecret +DB_SERVICE=service:soepdb \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/nfsdevices/envfile_racnodep2 b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/nfsdevices/envfile_racnodep2 new file mode 100644 index 0000000000..360aefdede --- /dev/null +++ 
b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/nfsdevices/envfile_racnodep2 @@ -0,0 +1,17 @@ +DNS_SERVERS=10.0.20.25 +CRS_PRIVATE_IP1=192.168.17.171 +CRS_PRIVATE_IP2=192.168.18.171 +GRID_HOME=/u01/app/21c/grid +DEFAULT_GATEWAY=10.0.20.1 +STAGING_SOFTWARE_LOC=/scratch/software/21c/goldimages +GRID_SW_ZIP_FILE=LINUX.X64_213000_grid_home.zip +DB_SW_ZIP_FILE=LINUX.X64_213000_db_home.zip +OP_TYPE=setuprac +SCAN_NAME=racnodepc1-scan +INSTALL_NODE=racnodep1 +GRID_RESPONSE_FILE=/tmp/grid_21c.rsp +DBCA_RESPONSE_FILE=/tmp/dbca_21c.rsp +ASM_ON_NAS=True +DB_PWD_FILE=pwdsecret +PWD_KEY=keysecret +DB_SERVICE=service:soepdb \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/nfsdevices/grid_setup_new_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/nfsdevices/grid_setup_new_21c.rsp new file mode 100644 index 0000000000..16062dd6cb --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/docs/rac-container/racslimimage/withresponsefiles/nfsdevices/grid_setup_new_21c.rsp @@ -0,0 +1,64 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 +INVENTORY_LOCATION=/u01/app/oraInventory +oracle.install.option=CRS_CONFIG +ORACLE_BASE=/u01/app/grid +oracle.install.asm.OSDBA=dba +oracle.install.asm.OSOPER= +oracle.install.asm.OSASM=asmadmin +oracle.install.crs.config.scanType=LOCAL_SCAN +oracle.install.crs.config.SCANClientDataFile= +oracle.install.crs.config.gpnp.scanName=racnodepc1-scan +oracle.install.crs.config.gpnp.scanPort=1521 +oracle.install.crs.config.ClusterConfiguration=STANDALONE +oracle.install.crs.config.configureAsExtendedCluster=false +oracle.install.crs.config.memberClusterManifestFile= +oracle.install.crs.config.clusterName=rac01cluster +oracle.install.crs.config.gpnp.configureGNS= +oracle.install.crs.config.autoConfigureClusterNodeVIP=false +oracle.install.crs.config.gpnp.gnsOption= +oracle.install.crs.config.gpnp.gnsClientDataFile= +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= +oracle.install.crs.config.sites= +oracle.install.crs.config.clusterNodes=racnodep1:racnodep1-vip:HUB,racnodep2:racnodep2-vip:HUB +oracle.install.crs.config.networkInterfaceList=eth0:10.0.20.0:1,eth1:192.168.17.0:5,eth2:192.168.18.0:5 +oracle.install.asm.configureGIMRDataDG=false +oracle.install.crs.config.storageOption= +oracle.install.crs.config.useIPMI=false +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +oracle.install.asm.storageOption=ASM +oracle.install.asmOnNAS.ocrLocation= +oracle.install.asmOnNAS.configureGIMRDataDG=false +oracle.install.asmOnNAS.gimrLocation= +oracle.install.asm.SYSASMPassword=ORacle__21c +oracle.install.asm.diskGroup.name=DATA +oracle.install.asm.diskGroup.redundancy=EXTERNAL +oracle.install.asm.diskGroup.AUSize=4 +oracle.install.asm.diskGroup.FailureGroups= +oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oradata/asm_disk01.img,,/oradata/asm_disk02.img,,/oradata/asm_disk03.img,,/oradata/asm_disk04.img,,/oradata/asm_disk05.im +oracle.install.asm.diskGroup.disks=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img +oracle.install.asm.diskGroup.quorumFailureGroupNames= +oracle.install.asm.diskGroup.diskDiscoveryString=/oradata/asm_disk* +oracle.install.asm.monitorPassword=ORacle__21c 
+oracle.install.asm.gimrDG.name= +oracle.install.asm.gimrDG.redundancy= +oracle.install.asm.gimrDG.AUSize=1 +oracle.install.asm.gimrDG.FailureGroups= +oracle.install.asm.gimrDG.disksWithFailureGroupNames= +oracle.install.asm.gimrDG.disks= +oracle.install.asm.gimrDG.quorumFailureGroupNames= +oracle.install.asm.configureAFD=false +oracle.install.crs.configureRHPS=false +oracle.install.crs.config.ignoreDownNodes=false +oracle.install.config.managementOption=NONE +oracle.install.config.omsHost= +oracle.install.config.omsPort=0 +oracle.install.config.emAdminUser= +oracle.install.config.emAdminPassword= +oracle.install.crs.rootconfig.executeRootScript=false +oracle.install.crs.rootconfig.configMethod=ROOT +oracle.install.crs.rootconfig.sudoPath= +oracle.install.crs.rootconfig.sudoUserName= +oracle.install.crs.config.batchinfo= +oracle.install.crs.deleteNode.nodes= \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/README.md b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/README.md index 86c9ca297b..ee1c82427c 100644 --- a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/README.md +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/README.md @@ -15,7 +15,10 @@ Example of how to create a 2-node RAC based on Docker Compose. You can create a racpodmancompose ---------------- -Example of how to create a 2-node RAC based on Podman Compose. You can create a single-node RAC using Podman Compose based on your environment. For details, please refer to [README.MD of racpodmancompose](./racpodmancompose/README.md). +Example of how to create 2 node Oracle RAC Setup on **Podman Compose** using Oracle RAC image or RAC slim image, with or without User Defined Response files. You can also create multinode rac using responsefiles based on your environment. + +Refer [Podman Compose using Oracle RAC container image](./rac-compose/racimage/README.md) for details in order to setup 2 node Oracle RAC Setup on Podman Compose using Oracle RAC Container Image. +Refer [Podman Compose using Oracle RAC slim image](./rac-compose/racslimimage/README.md) for details in order to setup 2 node Oracle RAC Setup on Podman Compose using Oracle RAC Slim Image. Copyright --------- diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/README.md b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/README.md index 39cea6e80a..75cb2dc9a8 100644 --- a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/README.md +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/README.md @@ -1,35 +1,14 @@ # Example of how to create a patched database image - ============================================= - -- [Example of how to create a patched database image](#example-of-how-to-create-a-patched-database-image) - - [Build Oracle RAC Slim and Base Image](#build-oracle-rac-slim-and-base-image) - - [The patch structure](#the-patch-structure) - - [Installing the patch](#installing-the-patch) - - [Copyright](#copyright) - -## Build Oracle RAC Slim and Base Image - -- Create RAC slim image based on the version you want build the patched images. This image will be used during multi-stage build to minimize the size requirements. 
- - Change directory to `/docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles` - - Build the RAC slim image - - ```bash - ./buildContainerImage.sh -v -i -p -o '--build-arg BASE_OL_IMAGE=oraclelinux:8 --build-arg SLIMMING=true' - - Example: - ./buildContainerImage.sh -v 21.3.0 -i -p -o '--build-arg BASE_OL_IMAGE=oraclelinux:8 --build-arg SLIMMING=true' - ``` - -**Note**: For Docker, you need to change `BASE_OL_IMAGE` to `oraclelinux:7-slim`. - -- If you have not already built the base Oracle RAC image, you need to build it by following the [README.md](../../../OracleRealApplicationClusters/README.md). Once you have built the base Oracle RAC image, you can create a patched version of it. In order to build such an image you will have to provide the patch zip file. +## Pre-requisites +After you build your base Oracle RAC image following the [README.md](../../../OracleRealApplicationClusters/README.md#building-oracle-rac-database-container-image), it is mandatory to create **Oracle RAC Slim image** following [README.md](../../../OracleRealApplicationClusters/README.md#building-oracle-rac-database-container-slim-image), then you can create a patched version of it. +To build a patched image, you must provide the patch zip file. **Notes:** -- Some patches require a newer version of `OPatch`, the Oracle Interim Patch Installer utility. It is highly recommended, you always update opatch with the new version. -- You can only patch 19.3.0 and above using this script. -- The scripts will automatically install a newer OPatch version, if provided. +* Some patches require a newer version of `OPatch`, the Oracle Interim Patch Installer utility. Oracle highly recommends that you always update opatch with the new version. +* You can only patch releases 19.3.0 or later using this script. +* The scripts automatically install a newer OPatch version, if provided. ## The patch structure @@ -52,19 +31,19 @@ The scripts used in this example rely on following directory structure: p6880880*.zip (optional, OPatch zip file) ``` -**patches:** The working directory for patch installation. -**grid:**: The directory containing patches(Release Update) for Oracle Grid Infrastructure. -**oracle**: The directory containing patches(Release Update) for Oracle RAC Home and Database -**001**: The directory containing the patch(Release Update) zip file. +**patches:** The working directory for patch installation. +**grid:**: The directory containing patches (Release Update) for Oracle Grid Infrastructure. +**oracle**: The directory containing patches (Release Update) for Oracle Real Application Clusters (Oracle RAC) and Oracle Database +**001**: The directory containing the patch (Release Update) zip file. **00N**: The second, third, ... directory containing the second, third, ... patch zip file. -This is useful if you want to install multiple patches at once. The script will go into each of these directories in the numbered order and apply the patches. -**Important**: It is up to the user to guarantee the patch order, if any. +These directories are useful if you want to install multiple patches at once. The script will go into each of these directories in the numbered order and apply the patches. +**Important**: It is up to you to guarantee the patch order, if any order is required. -### Installing the patch +## Installing the patch -- If you have multiple patches to be applied at once, add more sub directories following the numbering scheme of 002, 003, 004, 005, 00N. 
-- If you have a new version of OPatch, put the OPatch zip file directly into the patches directory. Do not change the name of the zip file! -- A utility script named `buildPatchedContainerImage.sh` has been provided to assist with building the patched image: +* If you have multiple patches that you want to apply at once, then add more subdirectories following the numbering scheme of 002, 003, 004, 005, 00_N_. +* If you have a new version of OPatch, then put the OPatch zip file directly into the patches directory. **Do not change the name of the OPatch zip file**. +* A utility script named `buildPatchedContainerImage.sh` is provided to assist with building the patched image: ```bash [oracle@localhost applypatch]# ./buildPatchedContainerImage.sh -h @@ -77,16 +56,15 @@ This is useful if you want to install multiple patches at once. The script will -o: passes on container build option -p: patch label to be used for the tag ``` - - - Following is an example of building patched image using 21.3.0. Note that `BASE_RAC_IMAGE=oracle/database-rac:21.3.0` set to 21.3.0. You need to set BASE_RAC_IMAGE based on your environment. +* The following is an example of building a patched image using 21.3.0. Note that `BASE_RAC_IMAGE=oracle/database-rac:21.3.0` is set to 21.3.0. You must set BASE_RAC_IMAGE and RAC_SLIM_IMAGE based on your enviornment. ```bash - ./buildPatchedContainerImage.sh -v 21.3.0 -p 21.7.0 -o '--build-arg BASE_RAC_IMAGE=localhost/oracle/database-rac:21.3.0 --build-arg RAC_SLIM_IMAGE=localhost/oracle/database-rac:21.3.0-slim' + # ./buildPatchedContainerImage.sh -v 21.3.0 -p 21.16.0 -o '--build-arg BASE_RAC_IMAGE=localhost/oracle/database-rac:21.3.0 --build-arg RAC_SLIM_IMAGE=localhost/oracle/database-rac:21.3.0-slim' ``` -**Important:** It is not supported to apply patches on already existing databases. You will have to create a new, patched database container image. You can use the PDB unplug/plug functionality to carry over your PDB into the patched container database! +**Important:** It is not supported to apply patches on already existing databases. You must create a new, patched database container image. You can use the PDB unplug/plug functionality to carry over your PDB into the patched container database. -**Notes**: If you are trying to patch the image on OL8 on PODMAN host, you must have `podman-docker` package installed on your PODMAN host. +**Notes**: If you are trying to patch the image on Oracle Linux 8 (OL8) on the PODMAN host, then you must have the `podman-docker` package installed on your PODMAN host. 
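+
+For reference, a minimal sketch of installing the compatibility package on an Oracle Linux 8 Podman host is shown below. It assumes the default Oracle Linux 8 repositories are enabled; adjust for your environment:
+
+```bash
+# Install the podman-docker shim so that docker commands are redirected to Podman
+sudo dnf install -y podman-docker
+
+# Confirm that the docker CLI now resolves to Podman
+docker --version
+```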
## Copyright diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/buildPatchedContainerImage.sh b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/buildPatchedContainerImage.sh index b9a2d113ec..7779e9f818 100755 --- a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/buildPatchedContainerImage.sh +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/buildPatchedContainerImage.sh @@ -11,7 +11,6 @@ It builds a patched RAC container image Parameters: -v: version to build - Choose one of: $(for i in $(ls -d */); do echo -n "${i%%/} "; done) -o: passes on container build option -p: patch label to be used for the tag @@ -32,8 +31,11 @@ if [ "$#" -eq 0 ]; then fi # Parameters +# shellcheck disable=SC2034 ENTERPRISE=0 +# shellcheck disable=SC2034 STANDARD=0 +# shellcheck disable=SC2034 LATEST="latest" VERSION='x' PATCHLABEL="patch" @@ -68,26 +70,28 @@ done IMAGE_NAME="oracle/database-rac:$VERSION-$PATCHLABEL" # Go into version folder +# shellcheck disable=SC2164 cd latest # Proxy settings PROXY_SETTINGS="" +# shellcheck disable=SC2154 if [ "${http_proxy}" != "" ]; then PROXY_SETTINGS="$PROXY_SETTINGS --build-arg http_proxy=${http_proxy}" fi - +# shellcheck disable=SC2154 if [ "${https_proxy}" != "" ]; then PROXY_SETTINGS="$PROXY_SETTINGS --build-arg https_proxy=${https_proxy}" fi - +# shellcheck disable=SC2154 if [ "${ftp_proxy}" != "" ]; then PROXY_SETTINGS="$PROXY_SETTINGS --build-arg ftp_proxy=${ftp_proxy}" fi - +# shellcheck disable=SC2154 if [ "${no_proxy}" != "" ]; then PROXY_SETTINGS="$PROXY_SETTINGS --build-arg no_proxy=${no_proxy}" fi - +# shellcheck disable=SC2154 if [ "$PROXY_SETTINGS" != "" ]; then echo "Proxy settings were found and will be used during the build." fi @@ -99,15 +103,23 @@ echo "Building image '$IMAGE_NAME' ..." # BUILD THE IMAGE (replace all environment variables) BUILD_START=$(date '+%s') -docker build --force-rm=true --no-cache=true $DOCKEROPS $PROXY_SETTINGS -t $IMAGE_NAME -f Dockerfile . || { +docker build --no-cache=true $DOCKEROPS $PROXY_SETTINGS -t env -f ContainerfileEnv . +# shellcheck disable=SC2046 +docker cp $(docker create --name env-070125 --rm env):/tmp/.env ./ +# shellcheck disable=SC2046 +docker build --no-cache=true $DOCKEROPS \ + --build-arg GRID_HOME=$(grep GRID_HOME .env | cut -d '=' -f2) \ + --build-arg DB_HOME=$(grep DB_HOME .env | cut -d '=' -f2) $PROXY_SETTINGS -t $IMAGE_NAME -f Containerfile . || { echo "There was an error building the image." exit 1 } +docker rmi -f env-070125 + BUILD_END=$(date '+%s') BUILD_ELAPSED=`expr $BUILD_END - $BUILD_START` echo "" - +# shellcheck disable=SC2320 if [ $? 
-eq 0 ]; then cat << EOF Oracle Database container image for Real Application Clusters (RAC) version $VERSION is ready to be extended: diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/latest/Dockerfile b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/latest/Containerfile similarity index 55% rename from OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/latest/Dockerfile rename to OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/latest/Containerfile index b71c0ee04c..46cd29abee 100644 --- a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/latest/Dockerfile +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/latest/Containerfile @@ -5,7 +5,7 @@ # ORACLE DOCKERFILES PROJECT # -------------------------- # This is the Dockerfile for a patched Oracle Database 21c Release 3 -# +# # REQUIREMETNS FOR THIS IMAGE # ---------------------------------- # The oracle/rac-database:21.3.0 image has to exist @@ -13,28 +13,19 @@ # HOW TO BUILD THIS IMAGE # ----------------------- # Put the downloaded patch(es) into the sub folders patch/0NN -# Run: -# $ docker build -f Dockerfile -t oracle/rac-database:21.3.0- . +# Run: +# $ docker build -f Dockerfile -t oracle/rac-database:21.3.0- . # -# hadolint global ignore=DL3006,DL3025 -ARG BASE_RAC_IMAGE=oracle/database-rac:21.3.0 +ARG BASE_RAC_IMAGE=localhost/oracle/database-rac:19.3.0 +ARG RAC_SLIM_IMAGE=localhost/oracle/database-rac:19.3.0-slim # Pull base image # --------------- +# hadolint ignore=DL3006 FROM $BASE_RAC_IMAGE as builder -ARG RAC_SLIM_IMAGE - -# Labels -# ------ -LABEL "provider"="Oracle" \ - "issues"="https://github.com/oracle/docker-images/issues" \ - "maintainer"="paramdeep Saini " \ - "volume.setup.location1"="/opt/scripts" \ - "volume.startup.location1"="/opt/scripts/startup" \ - "port.listener"="1521" \ - "port.oemexpress"="5500" # Argument to control removal of components not needed after db software installation +ARG SLIMMING=false ARG PATCH_DIR="patches" ARG DB_EDITION="EE" ARG USER="root" @@ -42,7 +33,8 @@ ARG WORKDIR="/rac-work-dir" # Environment variables required for this build (do NOT change) # ------------------------------------------------------------- -USER $USER +# hadolint ignore=DL3002 +USER root ENV PATCH_DIR=$PATCH_DIR \ GRID_PATCH_FILE="applyGridPatches.sh" \ @@ -51,7 +43,7 @@ ENV PATCH_DIR=$PATCH_DIR \ DB_USER="oracle" \ USER=$USER \ WORKDIR=$WORKDIR \ - GRID_USER="grid" + GRID_USER="grid" # Use second ENV so that variable get substituted ENV PATCH_INSTALL_DIR=/tmp/patches @@ -74,43 +66,59 @@ RUN chown -R grid:oinstall $PATCH_INSTALL_DIR/*.sh && \ USER oracle RUN $PATCH_INSTALL_DIR/$DB_PATCH_FILE $PATCH_INSTALL_DIR +# hadolint ignore=DL3002 +USER root -USER $USER - -RUN "$PATCH_INSTALL_DIR"/"$FIXUP_PREQ_FILE" && \ - cp "$PATCH_INSTALL_DIR"/"$FIXUP_PREQ_FILE" "$SCRIPT_DIR"/"$FIXUP_PREQ_FILE" && \ +RUN $PATCH_INSTALL_DIR/$FIXUP_PREQ_FILE && \ rm -rf /etc/oracle && \ - rm -rf "$PATCH_INSTALL_DIR" - -############################################# -# ------------------------------------------- -# Start new stage for grid/DB with Slim image -# ------------------------------------------- -############################################# - -FROM $RAC_SLIM_IMAGE as final -ARG USER + rm -rf $PATCH_INSTALL_DIR + +##################### +# hadolint ignore=DL3006 +FROM $RAC_SLIM_IMAGE AS final + +# Define build-time arguments +ARG GRID_HOME +ARG DB_HOME + +#Set environment variables using build arguments +ENV 
GRID_BASE=/u01/app/grid \ + GRID_HOME=$GRID_HOME \ + DB_BASE=/u01/app/oracle \ + DB_HOME=$DB_HOME \ + INSTALL_SCRIPTS=/opt/scripts/install \ + SCRIPT_DIR=/opt/scripts/startup \ + RAC_SCRIPTS_DIR="scripts" + +ENV GRID_PATH=$GRID_HOME/bin:$GRID_HOME/OPatch/:$GRID_HOME/perl/bin:/usr/sbin:/bin:/sbin \ + DB_PATH=$DB_HOME/bin:$DB_HOME/OPatch/:$DB_HOME/perl/bin:/usr/sbin:/bin:/sbin \ + GRID_LD_LIBRARY_PATH=$GRID_HOME/lib:/usr/lib:/lib \ + DB_LD_LIBRARY_PATH=$DB_HOME/lib:/usr/lib:/lib + +# Run some basic command to verify the environment variables (optional) +RUN echo "GRID_BASE=$GRID_BASE" && \ + echo "GRID_HOME=$GRID_HOME" && \ + echo "DB_BASE=$DB_BASE" && \ + echo "DB_HOME=$DB_HOME" + +RUN if [ -d "/u01" ]; then \ + rm -rf /u01 && \ + :; \ +fi COPY --from=builder /u01 /u01 -RUN mkdir -p /tmp/scripts -COPY --from=builder $SCRIPT_DIR /tmp/scripts -RUN cp -rn /tmp/scripts/* $SCRIPT_DIR/ && \ - rm -rf /tmp/scripts - -RUN chmod 755 "$SCRIPT_DIR"/* && \ - "$INVENTORY"/orainstRoot.sh && \ - "$GRID_HOME"/root.sh && \ - "$DB_HOME"/root.sh && \ - "$SCRIPT_DIR"/"$FIXUP_PREQ_FILE" && \ - cp "$SCRIPT_DIR"/"$INITSH" /usr/bin/"$INITSH" && \ - chmod 755 /usr/bin/"$INITSH" && \ - rm -f "$SCRIPT_DIR"/"$FIXUP_PREQ_FILE" - -USER $USER -WORKDIR $WORKDIR +USER ${USER} VOLUME ["/common_scripts"] +WORKDIR $WORKDIR +HEALTHCHECK --interval=2m --start-period=30m \ + CMD "$SCRIPT_DIR/scripts/main.py --checkracinst=true" >/dev/null || exit 1 +#Fix SID detection +# hadolint ignore=SC2086 +RUN $INVENTORY/orainstRoot.sh && \ + $GRID_HOME/root.sh && \ + $DB_HOME/root.sh # Define default command to start Oracle Grid and RAC Database setup. - +# hadolint ignore=DL3025 ENTRYPOINT /usr/bin/$INITSH \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/latest/ContainerfileEnv b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/latest/ContainerfileEnv new file mode 100644 index 0000000000..c6e78351fa --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/applypatch/latest/ContainerfileEnv @@ -0,0 +1,9 @@ +# Stage 1: Base Stage with Environment Variables +ARG BASE_RAC_IMAGE=localhost/oracle/database-rac:19.3.0 +FROM $BASE_RAC_IMAGE + +# Write the environment variables to a .env file +RUN echo "GRID_HOME=$GRID_HOME" >> /tmp/.env && \ + echo "DB_HOME=$DB_HOME" >> /tmp/.env + +ENTRYPOINT ["/bin/bash"] \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/README.md b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/README.md new file mode 100644 index 0000000000..90b2db7037 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/README.md @@ -0,0 +1,801 @@ +# Oracle RAC on Podman Compose using Oracle RAC Image +=============================================================== + +Refer below instructions for setup of Oracle RAC on Podman using Oracle RAC Image for various scenarios. 
+ +- [Oracle RAC on Podman Compose using Oracle RAC Image](#oracle-rac-on-podman-compose-using-oracle-rac-image) + - [Section 1 : Prerequisites for Setting up Oracle RAC on Container using Oracle RAC Image](#section-1-prerequisites-for-setting-up-oracle-rac-on-container-using-oracle-rac-image) + - [Section 2: Setup Oracle RAC Containers with Oracle RAC Image using Podman Compose Files](#section-2-setup-oracle-rac-containers-with-oracle-rac-image-using-podman-compose-files) + - [Section 2.1: Deploying With BlockDevices](#section-21-deploying-with-blockdevices) + - [Section 2.1.1: Setup Without Using User Defined Response files](#section-211-setup-without-using-user-defined-response-files) + - [Section 2.1.2: Setup Using User Defined Response files](#section-212-setup-using-user-defined-response-files) + - [Section 2.2: Deploying With NFS Storage Devices](#section-22-deploying-with-nfs-storage-devices) + - [Section 2.2.1: Setup Without Using User Defined Response files](#section-221-setup-without-using-user-defined-response-files) + - [Section 2.2.2: Setup Using User Defined Response files](#section-222-setup-using-user-defined-response-files) + - [Section 3: Sample of Addition of Nodes to Oracle RAC Containers based on Oracle RAC Image](#section-3-sample-of-addition-of-nodes-to-oracle-rac-containers-based-on-oracle-rac-image) + - [Section 3.1: Sample of Addition of Nodes to Oracle RAC Containers using Podman Compose based on Oracle RAC Image with BlockDevices](#section-31-sample-of-addition-of-nodes-to-oracle-rac-containers-using-podman-compose-based-on-oracle-rac-image-with-blockdevices) + - [Section 3.2: Sample of Addition of Nodes to Oracle RAC Containers using Podman Compose based on Oracle RAC Image with NFS Storage Devices](#section-32-sample-of-addition-of-nodes-to-oracle-rac-containers-using-podman-compose-based-on-oracle-rac-image-with-nfs-storage-devices) + - [Section 4: Environment Variables for Oracle RAC on Podman Compose](#section-4-environment-variables-for-oracle-rac-on-podman-compose) + - [Section 5: Validating Oracle RAC Environment](#section-5-validating-oracle-rac-environment) + - [Section 6: Connecting to Oracle RAC Environment](#section-6-connecting-to-oracle-rac-environment) + - [Cleanup](#cleanup) + - [Support](#support) + - [License](#license) + - [Copyright](#copyright) + +## Oracle RAC Setup on Podman Compose using Oracle RAC Image + +You can deploy multi node Oracle RAC Setup using Oracle RAC Image either on Block Devices or NFS storage Devices by using User Defined Response Files or without using same. All these scenarios are discussed in detail as you proceed further below. + +## Section 1: Prerequisites for Setting up Oracle RAC on Container using Oracle RAC Image +**IMPORTANT :** Execute all the steps specified in this section (customized for your environment) before you proceed to the next section. Completing prerequisite steps is a requirement for successful configuration. + + +* Execute the [Preparation Steps for running Oracle RAC database in containers](../../../README.md#preparation-steps-for-running-oracle-rac-database-in-containers) +* Create Oracle Connection Manager on Container image and container if the IPs are not available on user network.Please refer [RAC Oracle Connection Manager README.MD](../../../../OracleConnectionManager/README.md). +* Make sure Oracle RAC Oracle RAC Image is present. 
Either you can pull and use Oracle RAC Image from Oracle Container Registry or you can create the Oracle RAC Container imageby following [Building Oracle RAC Database Container Images](../../../README.md#getting-oracle-rac-database-container-images) + ```bash + # podman images|grep database-rac + localhost/oracle/database-rac 21.3.0 52a490e77887 4 days ago 9.52 GB + ``` +* Execute the [Network](../../../README.md#network-management). +* Execute the [Password Management](../../../README.md#password-management). +* `podman-compose` is part of [ol8_developer_EPEL](https://yum.oracle.com/repo/OracleLinux/ol8/developer/EPEL/x86_64/index.html). Enable `ol8_developer_EPEL` repository and install `podman-compose` as below- + ```bash + sudo dnf config-manager --enable ol8_developer_EPEL + sudo dnf install -y podman-compose + ``` +In order to setup 2 Node RAC containers using Podman compose, please make sure pre-requisites are completed before proceeding further - + +## Section 2: Setup Oracle RAC Containers with Oracle RAC Image using Podman Compose Files +### Section 2.1: Deploying With BlockDevices +#### Section 2.1.1: Setup Without Using User Defined Response files +Export the required environment variables required by `podman-compose.yml` file - +```bash +export HEALTHCHECK_INTERVAL=60s +export HEALTHCHECK_TIMEOUT=120s +export HEALTHCHECK_RETRIES=240 +export RACNODE1_CONTAINER_NAME=racnodep1 +export RACNODE1_HOST_NAME=racnodep1 +export RACNODE1_PUBLIC_IP=10.0.20.170 +export RACNODE1_CRS_PRIVATE_IP1=192.168.17.170 +export RACNODE1_CRS_PRIVATE_IP2=192.168.18.170 +export INSTALL_NODE=racnodep1 +export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0 +export DEFAULT_GATEWAY="10.0.20.1" +export CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" +export SCAN_NAME=racnodepc1-scan +export ASM_DEVICE1="/dev/asm-disk1" +export ASM_DEVICE2="/dev/asm-disk2" +export CRS_ASM_DEVICE_LIST="${ASM_DEVICE1},${ASM_DEVICE2}" +export ASM_DISK1="/dev/oracleoci/oraclevdd" +export ASM_DISK2="/dev/oracleoci/oraclevde" +export RACNODE2_CONTAINER_NAME=racnodep2 +export RACNODE2_HOST_NAME=racnodep2 +export RACNODE2_PUBLIC_IP=10.0.20.171 +export RACNODE2_CRS_PRIVATE_IP1=192.168.17.171 +export RACNODE2_CRS_PRIVATE_IP2=192.168.18.171 +export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc +export KEY_SECRET_FILE=/opt/.secrets/key.pem +export DNS_CONTAINER_NAME=rac-dnsserver +export DNS_HOST_NAME=racdns +export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest" +export RAC_NODE_NAME_PREFIXD="racnoded" +export RAC_NODE_NAME_PREFIXP="racnodep" +export DNS_DOMAIN=example.info +export PUBLIC_NETWORK_NAME="rac_pub1_nw" +export PUBLIC_NETWORK_SUBNET="10.0.20.0/24" +export PRIVATE1_NETWORK_NAME="rac_priv1_nw" +export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24" +export PRIVATE2_NETWORK_NAME="rac_priv2_nw" +export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24" +export DNS_PUBLIC_IP=10.0.20.25 +export DNS_PRIVATE1_IP=192.168.17.25 +export DNS_PRIVATE2_IP=192.168.18.25 +export CMAN_CONTAINER_NAME=racnode-cman +export CMAN_HOST_NAME=racnode-cman1 +export CMAN_IMAGE_NAME="localhost/oracle/client-cman:21.3.0" +export CMAN_PUBLIC_IP=10.0.20.15 +export CMAN_PUBLIC_HOSTNAME="racnode-cman1" +export DB_HOSTDETAILS="HOST=racnodepc1-scan:RULE_ACT=accept,HOST=racnodep1:IP=10.0.20.170" +export DB_SERVICE=service:soepdb +``` + +Create compose file named [podman-compose.yml](./withoutresponsefiles/blockdevices/podman-compose.yml) in your working directory. 
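+
+Optionally, before creating the networks and containers, you can confirm that the block devices referenced by `ASM_DISK1` and `ASM_DISK2` exist on the Podman host and carry no existing filesystem signatures. This is only an illustrative check using the example device paths exported above; substitute your own device paths:
+
+```bash
+# Confirm the ASM candidate disks exist on the host
+ls -l ${ASM_DISK1} ${ASM_DISK2}
+
+# Show any filesystem signatures already present on the disks (expect none for fresh ASM disks)
+lsblk -f ${ASM_DISK1} ${ASM_DISK2}
+```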
+ +Create podman networks- +```bash +podman network create --driver=bridge --subnet=${PUBLIC_NETWORK_SUBNET} ${PUBLIC_NETWORK_NAME} +podman network create --driver=bridge --subnet=${PRIVATE1_NETWORK_SUBNET} ${PRIVATE1_NETWORK_NAME} --disable-dns +podman network create --driver=bridge --subnet=${PRIVATE2_NETWORK_SUBNET} ${PRIVATE2_NETWORK_NAME} --disable-dns +``` + +Bring up DNS Containers- +```bash +podman-compose up -d ${DNS_CONTAINER_NAME} +podman-compose logs ${DNS_CONTAINER_NAME} +``` +DNS Container Logs- +```bash +podman-compose logs ${DNS_CONTAINER_NAME} +03-28-2024 07:46:59 UTC : : ################################################ +03-28-2024 07:46:59 UTC : : DNS Server IS READY TO USE! +03-28-2024 07:46:59 UTC : : ################################################ + +``` +Bring up RAC Containers- +```bash +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE1_CONTAINER_NAME} +podman-compose stop ${RACNODE1_CONTAINER_NAME} + +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE2_CONTAINER_NAME} +podman-compose stop ${RACNODE2_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE1_PUBLIC_IP} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP1} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP2} ${RACNODE1_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE2_PUBLIC_IP} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP1} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP2} ${RACNODE2_CONTAINER_NAME} + +podman-compose start ${RACNODE1_CONTAINER_NAME} +podman-compose start ${RACNODE2_CONTAINER_NAME} +podman exec ${RACNODE1_CONTAINER_NAME} /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log" +``` + +Successful Message when RAC container is setup properly- +```bash +=================================== +ORACLE RAC DATABASE IS READY TO USE +=================================== +``` + +(Optionally) Bring up CMAN Container- +```bash +podman-compose up -d ${CMAN_CONTAINER_NAME} + +podman-compose logs -f ${CMAN_CONTAINER_NAME} +################################################ + CONNECTION MANAGER IS READY TO USE! +################################################ +``` +#### Section 2.1.2: Setup Using User Defined Response files +Make sure you completed pre-requisites step to install Podman Compose on required Podman Host Machines. + +On the shared folder between both RAC nodes, copy file named [grid_setup_new_21c.rsp](withresponsefiles/nfsdevices/grid_setup_new_21c.rsp) to shared location e.g `/scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp`. You can skip this step if you are planing to not to use **User Defined Response Files for RAC**. 
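+
+As an illustration only, the copy might look like the following. The source path assumes you run the command from the directory containing this README in your repository clone; adjust both paths for your environment:
+
+```bash
+# Create the shared location and copy the sample Grid response file into it
+mkdir -p /scratch/common_scripts/podman/rac
+cp ./withresponsefiles/nfsdevices/grid_setup_new_21c.rsp /scratch/common_scripts/podman/rac/
+# If you also use a DBCA response file (for example dbca_21c.rsp), copy it to the same location
+```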
+ +If SELinux host is enable on machine then execute the following as well - +```bash +semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp +restorecon -v /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp +semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/dbca_21c.rsp +restorecon -v /scratch/common_scripts/podman/rac/dbca_21c.rsp +``` +Now, Export the required environment variables required by `podman-compose.yml` file - +```bash +export HEALTHCHECK_INTERVAL=60s +export HEALTHCHECK_TIMEOUT=120s +export HEALTHCHECK_RETRIES=240 +export RACNODE1_CONTAINER_NAME=racnodep1 +export RACNODE1_HOST_NAME=racnodep1 +export RACNODE1_PUBLIC_IP=10.0.20.170 +export RACNODE1_CRS_PRIVATE_IP1=192.168.17.170 +export RACNODE1_CRS_PRIVATE_IP2=192.168.18.170 +export INSTALL_NODE=racnodep1 +export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0 +export DEFAULT_GATEWAY="10.0.20.1" +export CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" +export SCAN_NAME=racnodepc1-scan +export ASM_DEVICE1="/dev/asm-disk1" +export ASM_DEVICE2="/dev/asm-disk2" +export CRS_ASM_DEVICE_LIST="${ASM_DEVICE1},${ASM_DEVICE2}" +export ASM_DISK1="/dev/oracleoci/oraclevdd" +export ASM_DISK2="/dev/oracleoci/oraclevde" +export RACNODE2_CONTAINER_NAME=racnodep2 +export RACNODE2_HOST_NAME=racnodep2 +export RACNODE2_PUBLIC_IP=10.0.20.171 +export RACNODE2_CRS_PRIVATE_IP1=192.168.17.171 +export RACNODE2_CRS_PRIVATE_IP2=192.168.18.171 +export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc +export KEY_SECRET_FILE=/opt/.secrets/key.pem +export DNS_CONTAINER_NAME=rac-dnsserver +export DNS_HOST_NAME=racdns +export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest" +export RAC_NODE_NAME_PREFIXP="racnodep" +export DNS_DOMAIN=example.info +export PUBLIC_NETWORK_NAME="rac_pub1_nw" +export PUBLIC_NETWORK_SUBNET="10.0.20.0/24" +export PRIVATE1_NETWORK_NAME="rac_priv1_nw" +export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24" +export PRIVATE2_NETWORK_NAME="rac_priv2_nw" +export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24" +export DNS_PUBLIC_IP=10.0.20.25 +export DNS_PRIVATE1_IP=192.168.17.25 +export DNS_PRIVATE2_IP=192.168.18.25 +export CMAN_CONTAINER_NAME=racnodepc1-cman +export CMAN_HOST_NAME=racnodepc1-cman +export CMAN_IMAGE_NAME="localhost/oracle/client-cman:21.3.0" +export CMAN_PUBLIC_IP=10.0.20.15 +export CMAN_PUBLIC_HOSTNAME="racnodepc1-cman" +export DB_HOSTDETAILS="HOST=racnodepc1-scan:RULE_ACT=accept,HOST=racnodep1:IP=10.0.20.170" +export GRID_RESPONSE_FILE="/scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp" +export DB_SERVICE=service:soepdb +``` +Create podman networks- +```bash +podman network create --driver=bridge --subnet=${PUBLIC_NETWORK_SUBNET} ${PUBLIC_NETWORK_NAME} +podman network create --driver=bridge --subnet=${PRIVATE1_NETWORK_SUBNET} ${PRIVATE1_NETWORK_NAME} --disable-dns +podman network create --driver=bridge --subnet=${PRIVATE2_NETWORK_SUBNET} ${PRIVATE2_NETWORK_NAME} --disable-dns +``` + +Create compose file named [podman-compose.yml](./withresponsefiles/blockdevices/podman-compose.yml) in your working directory. 
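+
+If SELinux is enforcing on the host, it can also help to verify up front that the response file exists and carries the `container_file_t` label applied above, so that the containers can read it. A quick check, assuming the example path used in this guide:
+
+```bash
+# Verify the response file is present and labeled for container access
+ls -lZ /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+```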
+ +Bring up DNS Containers- +```bash +podman-compose up -d ${DNS_CONTAINER_NAME} +podman-compose stop ${DNS_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${DNS_CONTAINER_NAME} +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${DNS_PUBLIC_IP} ${DNS_CONTAINER_NAME} +podman-compose start ${DNS_CONTAINER_NAME} +``` + +Successful logs when DNS container comes up- +```bash +podman-compose logs ${DNS_CONTAINER_NAME} +################################################ + DNS Server IS READY TO USE! +################################################ +``` +Bring up RAC Containers- +```bash +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE1_CONTAINER_NAME} +podman-compose stop ${RACNODE1_CONTAINER_NAME} + +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE2_CONTAINER_NAME} +podman-compose stop ${RACNODE2_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE1_PUBLIC_IP} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP1} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP2} ${RACNODE1_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE2_PUBLIC_IP} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP1} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP2} ${RACNODE2_CONTAINER_NAME} + +podman-compose start ${RACNODE1_CONTAINER_NAME} +podman-compose start ${RACNODE2_CONTAINER_NAME} +podman exec ${RACNODE1_CONTAINER_NAME} /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log" +``` + +Successful Message when RAC container is setup properly- +```bash +=================================== +ORACLE RAC DATABASE IS READY TO USE +=================================== +``` + +Bring up CMAN Container- +```bash +podman-compose up -d ${CMAN_CONTAINER_NAME} + +podman-compose logs -f ${CMAN_CONTAINER_NAME} +################################################ + CONNECTION MANAGER IS READY TO USE! +################################################ +``` + +### Section 2.2: Deploying With NFS Storage Devices +#### Section 2.2.1: Setup Without Using User Defined Response files +Make sure you completed pre-requisites step to install Podman Compose on required Podman Host Machines. 
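+
+A quick way to confirm that Podman Compose is available on the host before continuing is shown below; this is only an illustrative check, and the reported versions will differ in your environment:
+
+```bash
+# Confirm podman and podman-compose are installed on the Podman host
+rpm -q podman podman-compose
+podman-compose version
+```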
+ +Create placeholder for NFS storage and make sure it is empty - + +```bash +export ORACLE_DBNAME=ORCLCDB +mkdir -p /scratch/stage/rac-storage/$ORACLE_DBNAME +rm -rf /scratch/stage/rac-storage/ORCLCDB/asm_disk0* +``` + +Now, Export the required environment variables required by `podman-compose.yml` file - + +```bash +export HEALTHCHECK_INTERVAL=60s +export HEALTHCHECK_TIMEOUT=120s +export HEALTHCHECK_RETRIES=240 +export RACNODE1_CONTAINER_NAME=racnodep1 +export RACNODE1_HOST_NAME=racnodep1 +export RACNODE1_PUBLIC_IP=10.0.20.170 +export RACNODE1_CRS_PRIVATE_IP1=192.168.17.170 +export RACNODE1_CRS_PRIVATE_IP2=192.168.18.170 +export INSTALL_NODE=racnodep1 +export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0 +export DEFAULT_GATEWAY="10.0.20.1" +export CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" +export SCAN_NAME=racnodepc1-scan +export CRS_ASM_DISCOVERY_STRING="/oradata" +export CRS_ASM_DEVICE_LIST="/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img" +export RACNODE2_CONTAINER_NAME=racnodep2 +export RACNODE2_HOST_NAME=racnodep2 +export RACNODE2_PUBLIC_IP=10.0.20.171 +export RACNODE2_CRS_PRIVATE_IP1=192.168.17.171 +export RACNODE2_CRS_PRIVATE_IP2=192.168.18.171 +export DNS_CONTAINER_NAME=rac-dnsserver +export DNS_HOST_NAME=racdns +export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest" +export RAC_NODE_NAME_PREFIXP="racnodep" +export DNS_DOMAIN=example.info +export PUBLIC_NETWORK_NAME="rac_pub1_nw" +export PUBLIC_NETWORK_SUBNET="10.0.20.0/24" +export PRIVATE1_NETWORK_NAME="rac_priv1_nw" +export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24" +export PRIVATE2_NETWORK_NAME="rac_priv2_nw" +export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24" +export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc +export KEY_SECRET_FILE=/opt/.secrets/key.pem +export DNS_PUBLIC_IP=10.0.20.25 +export DNS_PRIVATE1_IP=192.168.17.25 +export DNS_PRIVATE2_IP=192.168.18.25 +export CMAN_CONTAINER_NAME=racnode-cman +export CMAN_HOST_NAME=racnode-cman1 +export CMAN_IMAGE_NAME="localhost/oracle/client-cman:21.3.0" +export CMAN_PUBLIC_IP=10.0.20.15 +export CMAN_PUBLIC_HOSTNAME="racnode-cman1" +export DB_HOSTDETAILS="HOST=racnodepc1-scan:RULE_ACT=accept,HOST=racnodep1:IP=10.0.20.170" +export STORAGE_CONTAINER_NAME="racnode-storage" +export STORAGE_HOST_NAME="racnode-storage" +export STORAGE_IMAGE_NAME="localhost/oracle/rac-storage-server:latest" +export ORACLE_DBNAME="ORCLCDB" +export STORAGE_PUBLIC_IP=10.0.20.80 +export NFS_STORAGE_VOLUME="/scratch/stage/rac-storage/$ORACLE_DBNAME" +export DB_SERVICE=service:soepdb +``` +Create podman networks- +```bash +podman network create --driver=bridge --subnet=${PUBLIC_NETWORK_SUBNET} ${PUBLIC_NETWORK_NAME} +podman network create --driver=bridge --subnet=${PRIVATE1_NETWORK_SUBNET} ${PRIVATE1_NETWORK_NAME} --disable-dns +podman network create --driver=bridge --subnet=${PRIVATE2_NETWORK_SUBNET} ${PRIVATE2_NETWORK_NAME} --disable-dns +``` + +Create compose file named [podman-compose.yml](./withoutresponsefiles/nfsdevices/podman-compose.yml) in your working directory. + + +Bring up DNS Containers- +```bash +podman-compose up -d ${DNS_CONTAINER_NAME} +podman-compose logs ${DNS_CONTAINER_NAME} +``` +Logs- +```bash +04-03-2024 13:22:54 UTC : : ################################################ +04-03-2024 13:22:54 UTC : : DNS Server IS READY TO USE! 
+04-03-2024 13:22:54 UTC : : ##################################### +``` + +Bring up Storage Container- +```bash +podman-compose --podman-run-args="-t -i --systemd=always" up -d ${STORAGE_CONTAINER_NAME} +podman-compose exec ${STORAGE_CONTAINER_NAME} tail -f /tmp/storage_setup.log +``` +Logs- +```bash +Export list for racnode-storage: +/oradata * +################################################# + Setup Completed +################################################# +``` + +Create NFS volume- +```bash +podman volume create --driver local \ +--opt type=nfs \ +--opt o=addr=10.0.20.80,rw,bg,hard,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 \ +--opt device=10.0.20.80:/oradata \ +racstorage +``` +Bring up RAC Containers- +```bash +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE1_CONTAINER_NAME} +podman-compose stop ${RACNODE1_CONTAINER_NAME} + +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE2_CONTAINER_NAME} +podman-compose stop ${RACNODE2_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE1_PUBLIC_IP} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP1} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP2} ${RACNODE1_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE2_PUBLIC_IP} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP1} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP2} ${RACNODE2_CONTAINER_NAME} + +podman-compose start ${RACNODE1_CONTAINER_NAME} +podman-compose start ${RACNODE2_CONTAINER_NAME} +podman exec ${RACNODE1_CONTAINER_NAME} /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log" +``` + +Successful Message when RAC container is setup properly- +```bash +=================================== +ORACLE RAC DATABASE IS READY TO USE +=================================== +``` + +(Optionally) Bring up CMAN Container- +```bash +podman-compose up -d ${CMAN_CONTAINER_NAME} + +podman-compose logs -f ${CMAN_CONTAINER_NAME} +################################################ + CONNECTION MANAGER IS READY TO USE! +################################################ +``` +#### Section 2.2.2: Setup Using User Defined Response files +Make sure you completed pre-requisites step to install Podman Compose on required Podman Host Machines. 
+ +If SELinux is enabled in your host machine then execute the following as well - +```bash +semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp +restorecon -v /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp +semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/dbca_21c.rsp +restorecon -v /scratch/common_scripts/podman/rac/dbca_21c.rsp +``` +Create placeholder for NFS storage and make sure it is empty - + +```bash +export ORACLE_DBNAME=ORCLCDB +mkdir -p /scratch/stage/rac-storage/$ORACLE_DBNAME +rm -rf /scratch/stage/rac-storage/ORCLCDB/asm_disk0* +``` + +On the shared folder between both RAC nodes, copy file name [grid_setup_new_21c.rsp](withresponsefiles/nfsdevices/grid_setup_new_21c.rsp) to shared location e.g `/scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp`. You can skip this step if you are planing to not to use **User Defined Response Files for RAC**. +If SELinux host is enable on machine then execute the following as well - +```bash +semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp +restorecon -v /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp +semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/dbca_21c.rsp +restorecon -v /scratch/common_scripts/podman/rac/dbca_21c.rsp +``` +Now, Export the required environment variables required by `podman-compose.yml` file - +```bash +export HEALTHCHECK_INTERVAL=60s +export HEALTHCHECK_TIMEOUT=120s +export HEALTHCHECK_RETRIES=240 +export RACNODE1_CONTAINER_NAME=racnodep1 +export RACNODE1_HOST_NAME=racnodep1 +export RACNODE1_PUBLIC_IP=10.0.20.170 +export RACNODE1_CRS_PRIVATE_IP1=192.168.17.170 +export RACNODE1_CRS_PRIVATE_IP2=192.168.18.170 +export INSTALL_NODE=racnodep1 +export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0 +export DEFAULT_GATEWAY="10.0.20.1" +export CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" +export SCAN_NAME=racnodepc1-scan +export CRS_ASM_DISCOVERY_STRING="/oradata" +export CRS_ASM_DEVICE_LIST="/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img" +export RACNODE2_CONTAINER_NAME=racnodep2 +export RACNODE2_HOST_NAME=racnodep2 +export RACNODE2_PUBLIC_IP=10.0.20.171 +export RACNODE2_CRS_PRIVATE_IP1=192.168.17.171 +export RACNODE2_CRS_PRIVATE_IP2=192.168.18.171 +export DNS_CONTAINER_NAME=rac-dnsserver +export DNS_HOST_NAME=racdns +export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest" +export RAC_NODE_NAME_PREFIXP="racnodep" +export DNS_DOMAIN=example.info +export PUBLIC_NETWORK_NAME="rac_pub1_nw" +export PUBLIC_NETWORK_SUBNET="10.0.20.0/24" +export PRIVATE1_NETWORK_NAME="rac_priv1_nw" +export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24" +export PRIVATE2_NETWORK_NAME="rac_priv2_nw" +export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24" +export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc +export KEY_SECRET_FILE=/opt/.secrets/key.pem +export DNS_PUBLIC_IP=10.0.20.25 +export DNS_PRIVATE1_IP=192.168.17.25 +export DNS_PRIVATE2_IP=192.168.18.25 +export CMAN_CONTAINER_NAME=racnode-cman +export CMAN_HOST_NAME=racnode-cman1 +export CMAN_IMAGE_NAME="localhost/oracle/client-cman:21.3.0" +export CMAN_PUBLIC_IP=10.0.20.15 +export CMAN_PUBLIC_HOSTNAME="racnode-cman1" +export DB_HOSTDETAILS="HOST=racnodepc1-scan:RULE_ACT=accept,HOST=racnodep1:IP=10.0.20.170" +export STORAGE_CONTAINER_NAME="racnode-storage" +export STORAGE_HOST_NAME="racnode-storage" +export 
STORAGE_IMAGE_NAME="localhost/oracle/rac-storage-server:latest" +export ORACLE_DBNAME="ORCLCDB" +export STORAGE_PUBLIC_IP=10.0.20.80 +export NFS_STORAGE_VOLUME="/scratch/stage/rac-storage/$ORACLE_DBNAME" +export GRID_RESPONSE_FILE="/scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp" +export DB_SERVICE=service:soepdb +``` + +Create podman networks- +```bash +podman network create --driver=bridge --subnet=${PUBLIC_NETWORK_SUBNET} ${PUBLIC_NETWORK_NAME} +podman network create --driver=bridge --subnet=${PRIVATE1_NETWORK_SUBNET} ${PRIVATE1_NETWORK_NAME} --disable-dns +podman network create --driver=bridge --subnet=${PRIVATE2_NETWORK_SUBNET} ${PRIVATE2_NETWORK_NAME} --disable-dns +``` + +Create compose file named [podman-compose.yml](./withresponsefiles/nfsdevices/podman-compose.yml) in your working directory. + +Bring up DNS Containers- +```bash +podman-compose up -d ${DNS_CONTAINER_NAME} +podman-compose stop ${DNS_CONTAINER_NAME} +podman network disconnect ${PUBLIC_NETWORK_NAME} ${DNS_CONTAINER_NAME} +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${DNS_PUBLIC_IP} ${DNS_CONTAINER_NAME} +podman-compose start ${DNS_CONTAINER_NAME} +``` + +Successful logs when DNS container comes up- +```bash +podman-compose logs ${DNS_CONTAINER_NAME} +################################################ + DNS Server IS READY TO USE! +################################################ +``` + +Bring up Storage Container- +```bash +podman-compose --podman-run-args="-t -i --systemd=always" up -d ${STORAGE_CONTAINER_NAME} +podman-compose exec ${STORAGE_CONTAINER_NAME} tail -f /tmp/storage_setup.log + +Export list for racnode-storage: +/oradata * +################################################# + Setup Completed +################################################# +``` + +Create NFS volume- +```bash +podman volume create --driver local \ +--opt type=nfs \ +--opt o=addr=10.0.20.80,rw,bg,hard,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 \ +--opt device=10.0.20.80:/oradata \ +racstorage +``` +Bring up RAC Containers- +```bash +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE1_CONTAINER_NAME} +podman-compose stop ${RACNODE1_CONTAINER_NAME} + +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE2_CONTAINER_NAME} +podman-compose stop ${RACNODE2_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE1_PUBLIC_IP} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP1} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP2} ${RACNODE1_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE2_PUBLIC_IP} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP1} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP2} ${RACNODE2_CONTAINER_NAME} + 
+podman-compose start ${RACNODE1_CONTAINER_NAME} +podman-compose start ${RACNODE2_CONTAINER_NAME} +podman exec ${RACNODE1_CONTAINER_NAME} /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log" +``` + +Successful Message when RAC container is setup properly- +```bash +=================================== +ORACLE RAC DATABASE IS READY TO USE +=================================== +``` + +(Optionally) Bring up CMAN Container- +```bash +podman-compose up -d ${CMAN_CONTAINER_NAME} +podman-compose logs -f ${CMAN_CONTAINER_NAME} +################################################ + CONNECTION MANAGER IS READY TO USE! +################################################ +``` +## Section 3: Sample of Addition of Nodes to Oracle RAC Containers based on Oracle RAC Image + +### Section 3.1: Sample of Addition of Nodes to Oracle RAC Containers using Podman Compose based on Oracle RAC Image with BlockDevices + +Below is an example to add one more node to existing Oracle RAC 2 node cluster using Oracle RAC Image and with user defined files using podman compose file - + +Create compose file named [podman-compose.yml](./withoutresponsefiles/blockdevices/addition/podman-compose.yml) in your working directory. + +Export the required environment variables required by `podman-compose.yml` file - +```bash +export HEALTHCHECK_INTERVAL=60s +export HEALTHCHECK_TIMEOUT=120s +export HEALTHCHECK_RETRIES=240 +export RACNODE3_CONTAINER_NAME=racnodep3 +export RACNODE3_HOST_NAME=racnodep3 +export RACNODE3_PUBLIC_IP=10.0.20.172 +export RACNODE3_CRS_PRIVATE_IP1=192.168.17.172 +export RACNODE3_CRS_PRIVATE_IP2=192.168.18.172 +export INSTALL_NODE=racnodep3 +export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0 +export DEFAULT_GATEWAY="10.0.20.1" +export CRS_NODES="\"pubhost:racnodep3,viphost:racnodep3-vip\"" +export EXISTING_CLS_NODE="racnodep1,racnodep2" +export SCAN_NAME=racnodepc1-scan +export ASM_DEVICE1="/dev/asm-disk1" +export ASM_DEVICE2="/dev/asm-disk2" +export CRS_ASM_DEVICE_LIST="${ASM_DEVICE1},${ASM_DEVICE2}" +export ASM_DISK1="/dev/oracleoci/oraclevdd" +export ASM_DISK2="/dev/oracleoci/oraclevde" +export DNS_CONTAINER_NAME=rac-dnsserver +export DNS_HOST_NAME=racdns +export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest" +export RAC_NODE_NAME_PREFIXD="racnoded" +export RAC_NODE_NAME_PREFIXP="racnodep" +export DNS_DOMAIN=example.info +export PUBLIC_NETWORK_NAME="rac_pub1_nw" +export PUBLIC_NETWORK_SUBNET="10.0.20.0/24" +export PRIVATE1_NETWORK_NAME="rac_priv1_nw" +export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24" +export PRIVATE2_NETWORK_NAME="rac_priv2_nw" +export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24" +export DNS_PUBLIC_IP=10.0.20.25 +export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc +export KEY_SECRET_FILE=/opt/.secrets/key.pem +export DB_SERVICE=service:soepdb +``` +Bring up RAC Containers- +```bash +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE3_CONTAINER_NAME} +podman-compose stop ${RACNODE3_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE3_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE3_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE3_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE3_PUBLIC_IP} ${RACNODE3_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE3_CRS_PRIVATE_IP1} ${RACNODE3_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE3_CRS_PRIVATE_IP2} 
${RACNODE3_CONTAINER_NAME} + +podman-compose start ${RACNODE3_CONTAINER_NAME} +podman exec ${RACNODE3_CONTAINER_NAME} /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log" +``` + +Successful Message when RAC container is setup properly- +```bash +======================================================== +Oracle Database ORCLCDB3 is up and running on racnodep3. +======================================================== +``` + +### Section 3.2: Sample of Addition of Nodes to Oracle RAC Containers using Podman Compose based on Oracle RAC Image with NFS Storage Devices +Below is the example to add one more node to existing Oracle RAC 2 node cluster using Oracle RAC Image and with user defined files using podman compose file - + +Create compose file named [podman-compose.yml](./withoutresponsefiles/nfsdevices/addition/podman-compose.yml) in your working directory. + + +Export the required environment variables required by `podman-compose.yml` file - +```bash +export HEALTHCHECK_INTERVAL=60s +export HEALTHCHECK_TIMEOUT=120s +export HEALTHCHECK_RETRIES=240 +export RACNODE3_CONTAINER_NAME=racnodep3 +export RACNODE3_HOST_NAME=racnodep3 +export RACNODE3_PUBLIC_IP=10.0.20.172 +export RACNODE3_CRS_PRIVATE_IP1=192.168.17.172 +export RACNODE3_CRS_PRIVATE_IP2=192.168.18.172 +export INSTALL_NODE=racnodep3 +export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0 +export DEFAULT_GATEWAY="10.0.20.1" +export CRS_NODES="\"pubhost:racnodep3,viphost:racnodep3-vip\"" +export EXISTING_CLS_NODE="racnodep1,racnodep2" +export SCAN_NAME=racnodepc1-scan +export CRS_ASM_DISCOVERY_STRING="/oradata" +export CRS_ASM_DEVICE_LIST="/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img" +export DNS_CONTAINER_NAME=rac-dnsserver +export DNS_HOST_NAME=racdns +export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest" +export RAC_NODE_NAME_PREFIXP="racnodep" +export DNS_DOMAIN=example.info +export PUBLIC_NETWORK_NAME="rac_pub1_nw" +export PUBLIC_NETWORK_SUBNET="10.0.20.0/24" +export PRIVATE1_NETWORK_NAME="rac_priv1_nw" +export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24" +export PRIVATE2_NETWORK_NAME="rac_priv2_nw" +export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24" +export DNS_PUBLIC_IP=10.0.20.25 +export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc +export KEY_SECRET_FILE=/opt/.secrets/key.pem +export CMAN_CONTAINER_NAME=racnodepc1-cman +export CMAN_HOST_NAME=racnodepc1-cman +export CMAN_PUBLIC_IP=10.0.20.15 +export CMAN_PUBLIC_HOSTNAME="racnodepc1-cman" +export DB_SERVICE=service:soepdb +``` +Bring up RAC Containers- +```bash +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE3_CONTAINER_NAME} +podman-compose stop ${RACNODE3_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE3_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE3_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE3_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE3_PUBLIC_IP} ${RACNODE3_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE3_CRS_PRIVATE_IP1} ${RACNODE3_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE3_CRS_PRIVATE_IP2} ${RACNODE3_CONTAINER_NAME} + +podman-compose start ${RACNODE3_CONTAINER_NAME} +podman exec ${RACNODE3_CONTAINER_NAME} /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log" +``` + +Successful Message when RAC container is setup properly- +```bash 
+========================================================
+Oracle Database ORCLCDB3 is up and running on racnodep3.
+========================================================
+```
+## Section 4: Environment Variables for Oracle RAC on Podman Compose
+Refer to [Environment Variables Explained for Oracle RAC on Podman Compose](../../../docs/ENVVARIABLESCOMPOSE.md) for an explanation of all the environment variables related to Oracle RAC on Podman Compose. Change or set these environment variables as required for your environment.
+
+## Section 5: Validating Oracle RAC Environment
+You can validate that the Oracle RAC container environment is healthy by running the following command:
+```bash
+podman ps -a
+CONTAINER ID  IMAGE                                  COMMAND               CREATED         STATUS                   PORTS  NAMES
+f1345fd4047b  localhost/oracle/rac-dnsserver:latest  /bin/sh -c exec $...  8 hours ago     Up 8 hours (healthy)            rac-dnsserver
+2f42e49758d1  localhost/oracle/database-rac:21.3.0                         46 minutes ago  Up 37 minutes (healthy)         racnodep1
+a27fceea9fe6  localhost/oracle/database-rac:21.3.0                         46 minutes ago  Up 37 minutes (healthy)         racnodep2
+```
+Note:
+- Look for `(healthy)` next to the container names in the `STATUS` column.
+
+## Section 6: Connecting to Oracle RAC Environment
+
+**IMPORTANT:** This section assumes that you have successfully created an Oracle RAC cluster using the preceding sections.
+Refer to [README](../../../docs/CONNECTING.md) for instructions on how to connect to the Oracle RAC Database.
+
+## Cleanup
+Refer to [README](../../../docs/CLEANUP.md) for instructions on how to clean up the Oracle RAC Database Container Environment.
+
+## Support
+
+At the time of this release, Oracle RAC on Podman is supported on Oracle Linux 8.10 or later. To see current Linux support certifications, refer to the [Oracle RAC on Podman Documentation](https://docs.oracle.com/en/database/oracle/oracle-database/21/install-and-upgrade.html).
+
+## License
+
+To download and run Oracle Grid and Database, regardless of whether inside or outside a container, you must download the binaries from the Oracle website and accept the license indicated on that page.
+
+All scripts and files hosted in this repository which are required to build the container images are, unless otherwise noted, released under UPL 1.0 license.
+
+## Copyright
+
+Copyright (c) 2014-2024 Oracle and/or its affiliates.
diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/blockdevices/addition/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/blockdevices/addition/podman-compose.yml new file mode 100644 index 0000000000..f3df64cace --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/blockdevices/addition/podman-compose.yml @@ -0,0 +1,73 @@ +--- +version: "3" +networks: + rac_pub1_nw: + external: true + rac_priv1_nw: + external: true + rac_priv2_nw: + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +services: + racnodep3: + container_name: ${RACNODE3_CONTAINER_NAME} + hostname: ${RACNODE3_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - racstorage:/oradata + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE3_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE3_CRS_PRIVATE_IP2} + INSTALL_NODE: ${INSTALL_NODE} + OP_TYPE: racaddnode + EXISTING_CLS_NODE: ${EXISTING_CLS_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + ASM_ON_NAS: True + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/blockdevices/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/blockdevices/podman-compose.yml new file mode 100644 index 0000000000..8c9c23dcbd --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/blockdevices/podman-compose.yml @@ -0,0 +1,172 @@ +--- +version: "3" +networks: + rac_pub1_nw: + external: true + rac_priv1_nw: + external: true + rac_priv2_nw: + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +services: + rac-dnsserver: + container_name: ${DNS_CONTAINER_NAME} + hostname: ${DNS_HOST_NAME} + image: ${DNS_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + environment: + SETUP_DNS_CONFIG_FILES: "setup_true" + DOMAIN_NAME: ${DNS_DOMAIN} + RAC_NODE_NAME_PREFIXP: ${RAC_NODE_NAME_PREFIXP} + WEBMIN_ENABLED: false + SETUP_DNS_CONFIG_FILES: "setup_true" + cap_add: + - AUDIT_WRITE + healthcheck: + test: ["CMD-SHELL", "pgrep named"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + privileged: false + networks: + rac_pub1_nw: + ipv4_address: 
${DNS_PUBLIC_IP} + racnodep1: + container_name: ${RACNODE1_CONTAINER_NAME} + hostname: ${RACNODE1_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE1_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE1_CRS_PRIVATE_IP2} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodep2: + container_name: ${RACNODE2_CONTAINER_NAME} + hostname: ${RACNODE2_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE2_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE2_CRS_PRIVATE_IP2} + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_STRING: ${CRS_ASM_DISCOVERY_STRING} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodepc1-cman: + container_name: ${CMAN_CONTAINER_NAME} + hostname: ${CMAN_HOST_NAME} + image: ${CMAN_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + dns: ${DNS_PUBLIC_IP} + environment: + DOMAIN_NAME: ${DNS_DOMAIN} + PUBLIC_IP: ${CMAN_PUBLIC_IP} + PUBLIC_HOSTNAME: ${CMAN_PUBLIC_HOSTNAME} + DB_HOSTDETAILS: ${DB_HOSTDETAILS} + privileged: false + ports: + - 1521:1521 + networks: + rac_pub1_nw: + ipv4_address: ${CMAN_PUBLIC_IP} + cap_add: + - AUDIT_WRITE + - NET_RAW + healthcheck: + test: ["CMD-SHELL", "pgrep -f 'cmadmin'"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git 
a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/nfsdevices/addition/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/nfsdevices/addition/podman-compose.yml new file mode 100644 index 0000000000..d1b3af742e --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/nfsdevices/addition/podman-compose.yml @@ -0,0 +1,75 @@ +--- +version: "3" +networks: + rac_pub1_nw: + external: true + rac_priv1_nw: + external: true + rac_priv2_nw: + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +volumes: + racstorage: + external: true +services: + racnodep3: + container_name: ${RACNODE3_CONTAINER_NAME} + hostname: ${RACNODE3_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - racstorage:/oradata + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE3_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE3_CRS_PRIVATE_IP2} + INSTALL_NODE: ${INSTALL_NODE} + OP_TYPE: racaddnode + EXISTING_CLS_NODE: ${EXISTING_CLS_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_STRING: "/oradata" + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + ASM_ON_NAS: True + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/nfsdevices/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/nfsdevices/podman-compose.yml new file mode 100644 index 0000000000..bf2143bab4 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withoutresponsefiles/nfsdevices/podman-compose.yml @@ -0,0 +1,198 @@ +--- +version: "3" +networks: + rac_pub1_nw: + external: true + rac_priv1_nw: + external: true + rac_priv2_nw: + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +volumes: + racstorage: + external: true +services: + rac-dnsserver: + container_name: ${DNS_CONTAINER_NAME} + hostname: ${DNS_HOST_NAME} + image: ${DNS_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + environment: + SETUP_DNS_CONFIG_FILES: "setup_true" + DOMAIN_NAME: ${DNS_DOMAIN} + RAC_NODE_NAME_PREFIXP: ${RAC_NODE_NAME_PREFIXP} + WEBMIN_ENABLED: false + SETUP_DNS_CONFIG_FILES: "setup_true" + cap_add: + - AUDIT_WRITE + healthcheck: + test: ["CMD-SHELL", "pgrep named"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + privileged: false + networks: + 
rac_pub1_nw: + ipv4_address: ${DNS_PUBLIC_IP} + racnode-storage: + container_name: ${STORAGE_CONTAINER_NAME} + hostname: ${STORAGE_HOST_NAME} + image: ${STORAGE_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + volumes: + - ${NFS_STORAGE_VOLUME}:/oradata + cap_add: + - SYS_ADMIN + - AUDIT_WRITE + - NET_ADMIN + restart: always + healthcheck: + test: + - CMD-SHELL + - /bin/bash -c "ls -lrt /oradata/ && showmount -e | grep '/oradata'" + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + networks: + rac_pub1_nw: + ipv4_address: ${STORAGE_PUBLIC_IP} + racnodep1: + container_name: ${RACNODE1_CONTAINER_NAME} + hostname: ${RACNODE1_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - racstorage:/oradata + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE1_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE1_CRS_PRIVATE_IP2} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_STRING: "/oradata" + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + ASM_ON_NAS: True + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodep2: + container_name: ${RACNODE2_CONTAINER_NAME} + hostname: ${RACNODE2_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - racstorage:${CRS_ASM_DISCOVERY_STRING} + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE2_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE2_CRS_PRIVATE_IP2} + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_STRING: "/oradata" + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + ASM_ON_NAS: True + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodepc1-cman: + container_name: ${CMAN_CONTAINER_NAME} + hostname: ${CMAN_HOST_NAME} + image: ${CMAN_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + dns: ${DNS_PUBLIC_IP} + environment: + 
DOMAIN_NAME: ${DNS_DOMAIN} + PUBLIC_IP: ${CMAN_PUBLIC_IP} + PUBLIC_HOSTNAME: ${CMAN_PUBLIC_HOSTNAME} + DB_HOSTDETAILS: ${DB_HOSTDETAILS} + privileged: false + ports: + - 1521:1521 + networks: + rac_pub1_nw: + ipv4_address: ${CMAN_PUBLIC_IP} + cap_add: + - AUDIT_WRITE + - NET_RAW + healthcheck: + test: ["CMD-SHELL", "pgrep -f 'cmadmin'"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withresponsefiles/blockdevices/grid_setup_new_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withresponsefiles/blockdevices/grid_setup_new_21c.rsp new file mode 100644 index 0000000000..c7ffe19d4a --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withresponsefiles/blockdevices/grid_setup_new_21c.rsp @@ -0,0 +1,64 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 +INVENTORY_LOCATION=/u01/app/oraInventory +oracle.install.option=CRS_CONFIG +ORACLE_BASE=/u01/app/grid +oracle.install.asm.OSDBA=dba +oracle.install.asm.OSOPER= +oracle.install.asm.OSASM=asmadmin +oracle.install.crs.config.scanType=LOCAL_SCAN +oracle.install.crs.config.SCANClientDataFile= +oracle.install.crs.config.gpnp.scanName=racnodepc1-scan +oracle.install.crs.config.gpnp.scanPort=1521 +oracle.install.crs.config.ClusterConfiguration=STANDALONE +oracle.install.crs.config.configureAsExtendedCluster=false +oracle.install.crs.config.memberClusterManifestFile= +oracle.install.crs.config.clusterName=rac01cluster +oracle.install.crs.config.gpnp.configureGNS= +oracle.install.crs.config.autoConfigureClusterNodeVIP=false +oracle.install.crs.config.gpnp.gnsOption= +oracle.install.crs.config.gpnp.gnsClientDataFile= +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= +oracle.install.crs.config.sites= +oracle.install.crs.config.clusterNodes=racnodep1:racnodep1-vip:HUB,racnodep2:racnodep2-vip:HUB +oracle.install.crs.config.networkInterfaceList=eth0:10.0.20.0:1,eth1:192.168.17.0:5,eth2:192.168.18.0:5 +oracle.install.asm.configureGIMRDataDG=false +oracle.install.crs.config.storageOption= +oracle.install.crs.config.useIPMI=false +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +oracle.install.asm.storageOption=ASM +oracle.install.asmOnNAS.ocrLocation= +oracle.install.asmOnNAS.configureGIMRDataDG=false +oracle.install.asmOnNAS.gimrLocation= +oracle.install.asm.SYSASMPassword=ORacle__21c +oracle.install.asm.diskGroup.name=DATA +oracle.install.asm.diskGroup.redundancy=EXTERNAL +oracle.install.asm.diskGroup.AUSize=4 +oracle.install.asm.diskGroup.FailureGroups= +oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/asm-disk1,,/dev/asm-disk2, +oracle.install.asm.diskGroup.disks=/dev/asm-disk1,/dev/asm-disk2 +oracle.install.asm.diskGroup.quorumFailureGroupNames= +oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm* +oracle.install.asm.monitorPassword=ORacle__21c +oracle.install.asm.gimrDG.name= +oracle.install.asm.gimrDG.redundancy= +oracle.install.asm.gimrDG.AUSize=1 +oracle.install.asm.gimrDG.FailureGroups= +oracle.install.asm.gimrDG.disksWithFailureGroupNames= +oracle.install.asm.gimrDG.disks= +oracle.install.asm.gimrDG.quorumFailureGroupNames= +oracle.install.asm.configureAFD=false +oracle.install.crs.configureRHPS=false +oracle.install.crs.config.ignoreDownNodes=false 
+oracle.install.config.managementOption=NONE +oracle.install.config.omsHost= +oracle.install.config.omsPort=0 +oracle.install.config.emAdminUser= +oracle.install.config.emAdminPassword= +oracle.install.crs.rootconfig.executeRootScript=false +oracle.install.crs.rootconfig.configMethod=ROOT +oracle.install.crs.rootconfig.sudoPath= +oracle.install.crs.rootconfig.sudoUserName= +oracle.install.crs.config.batchinfo= +oracle.install.crs.deleteNode.nodes= \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withresponsefiles/blockdevices/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withresponsefiles/blockdevices/podman-compose.yml new file mode 100644 index 0000000000..c5bdf2bd39 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withresponsefiles/blockdevices/podman-compose.yml @@ -0,0 +1,176 @@ +--- +version: "3" +networks: + rac_pub1_nw: + external: true + rac_priv1_nw: + external: true + rac_priv2_nw: + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +services: + rac-dnsserver: + container_name: ${DNS_CONTAINER_NAME} + hostname: ${DNS_HOST_NAME} + image: ${DNS_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + environment: + SETUP_DNS_CONFIG_FILES: "setup_true" + DOMAIN_NAME: ${DNS_DOMAIN} + RAC_NODE_NAME_PREFIXP: ${RAC_NODE_NAME_PREFIXP} + WEBMIN_ENABLED: false + SETUP_DNS_CONFIG_FILES: "setup_true" + cap_add: + - AUDIT_WRITE + healthcheck: + test: ["CMD-SHELL", "pgrep named"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + privileged: false + networks: + rac_pub1_nw: + ipv4_address: ${DNS_PUBLIC_IP} + racnodep1: + container_name: ${RACNODE1_CONTAINER_NAME} + hostname: ${RACNODE1_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - ${GRID_RESPONSE_FILE}:/tmp/grid_21c.rsp + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE1_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE1_CRS_PRIVATE_IP2} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + GRID_RESPONSE_FILE: /tmp/grid_21c.rsp + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodep2: + container_name: ${RACNODE2_CONTAINER_NAME} + hostname: ${RACNODE2_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - 
${GRID_RESPONSE_FILE}:/tmp/grid_21c.rsp + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE2_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE2_CRS_PRIVATE_IP2} + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + GRID_RESPONSE_FILE: /tmp/grid_21c.rsp + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodepc1-cman: + container_name: ${CMAN_CONTAINER_NAME} + hostname: ${CMAN_HOST_NAME} + image: ${CMAN_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + dns: ${DNS_PUBLIC_IP} + environment: + DOMAIN_NAME: ${DNS_DOMAIN} + PUBLIC_IP: ${CMAN_PUBLIC_IP} + PUBLIC_HOSTNAME: ${CMAN_PUBLIC_HOSTNAME} + DB_HOSTDETAILS: ${DB_HOSTDETAILS} + privileged: false + ports: + - 1521:1521 + networks: + rac_pub1_nw: + ipv4_address: ${CMAN_PUBLIC_IP} + cap_add: + - AUDIT_WRITE + healthcheck: + test: ["CMD-SHELL", "pgrep -f 'cmadmin'"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withresponsefiles/nfsdevices/grid_setup_new_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withresponsefiles/nfsdevices/grid_setup_new_21c.rsp new file mode 100644 index 0000000000..16062dd6cb --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withresponsefiles/nfsdevices/grid_setup_new_21c.rsp @@ -0,0 +1,64 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 +INVENTORY_LOCATION=/u01/app/oraInventory +oracle.install.option=CRS_CONFIG +ORACLE_BASE=/u01/app/grid +oracle.install.asm.OSDBA=dba +oracle.install.asm.OSOPER= +oracle.install.asm.OSASM=asmadmin +oracle.install.crs.config.scanType=LOCAL_SCAN +oracle.install.crs.config.SCANClientDataFile= +oracle.install.crs.config.gpnp.scanName=racnodepc1-scan +oracle.install.crs.config.gpnp.scanPort=1521 +oracle.install.crs.config.ClusterConfiguration=STANDALONE +oracle.install.crs.config.configureAsExtendedCluster=false +oracle.install.crs.config.memberClusterManifestFile= +oracle.install.crs.config.clusterName=rac01cluster +oracle.install.crs.config.gpnp.configureGNS= +oracle.install.crs.config.autoConfigureClusterNodeVIP=false +oracle.install.crs.config.gpnp.gnsOption= +oracle.install.crs.config.gpnp.gnsClientDataFile= +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= +oracle.install.crs.config.sites= +oracle.install.crs.config.clusterNodes=racnodep1:racnodep1-vip:HUB,racnodep2:racnodep2-vip:HUB 
+oracle.install.crs.config.networkInterfaceList=eth0:10.0.20.0:1,eth1:192.168.17.0:5,eth2:192.168.18.0:5 +oracle.install.asm.configureGIMRDataDG=false +oracle.install.crs.config.storageOption= +oracle.install.crs.config.useIPMI=false +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +oracle.install.asm.storageOption=ASM +oracle.install.asmOnNAS.ocrLocation= +oracle.install.asmOnNAS.configureGIMRDataDG=false +oracle.install.asmOnNAS.gimrLocation= +oracle.install.asm.SYSASMPassword=ORacle__21c +oracle.install.asm.diskGroup.name=DATA +oracle.install.asm.diskGroup.redundancy=EXTERNAL +oracle.install.asm.diskGroup.AUSize=4 +oracle.install.asm.diskGroup.FailureGroups= +oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oradata/asm_disk01.img,,/oradata/asm_disk02.img,,/oradata/asm_disk03.img,,/oradata/asm_disk04.img,,/oradata/asm_disk05.im +oracle.install.asm.diskGroup.disks=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img +oracle.install.asm.diskGroup.quorumFailureGroupNames= +oracle.install.asm.diskGroup.diskDiscoveryString=/oradata/asm_disk* +oracle.install.asm.monitorPassword=ORacle__21c +oracle.install.asm.gimrDG.name= +oracle.install.asm.gimrDG.redundancy= +oracle.install.asm.gimrDG.AUSize=1 +oracle.install.asm.gimrDG.FailureGroups= +oracle.install.asm.gimrDG.disksWithFailureGroupNames= +oracle.install.asm.gimrDG.disks= +oracle.install.asm.gimrDG.quorumFailureGroupNames= +oracle.install.asm.configureAFD=false +oracle.install.crs.configureRHPS=false +oracle.install.crs.config.ignoreDownNodes=false +oracle.install.config.managementOption=NONE +oracle.install.config.omsHost= +oracle.install.config.omsPort=0 +oracle.install.config.emAdminUser= +oracle.install.config.emAdminPassword= +oracle.install.crs.rootconfig.executeRootScript=false +oracle.install.crs.rootconfig.configMethod=ROOT +oracle.install.crs.rootconfig.sudoPath= +oracle.install.crs.rootconfig.sudoUserName= +oracle.install.crs.config.batchinfo= +oracle.install.crs.deleteNode.nodes= \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withresponsefiles/nfsdevices/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withresponsefiles/nfsdevices/podman-compose.yml new file mode 100644 index 0000000000..3816a34ca7 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racimage/withresponsefiles/nfsdevices/podman-compose.yml @@ -0,0 +1,200 @@ +--- +version: "3" +networks: + rac_pub1_nw: + external: true + rac_priv1_nw: + external: true + rac_priv2_nw: + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +volumes: + racstorage: + external: true +services: + rac-dnsserver: + container_name: ${DNS_CONTAINER_NAME} + hostname: ${DNS_HOST_NAME} + image: ${DNS_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + environment: + SETUP_DNS_CONFIG_FILES: "setup_true" + DOMAIN_NAME: ${DNS_DOMAIN} + RAC_NODE_NAME_PREFIXP: ${RAC_NODE_NAME_PREFIXP} + WEBMIN_ENABLED: false + SETUP_DNS_CONFIG_FILES: "setup_true" + cap_add: + - AUDIT_WRITE + healthcheck: + test: ["CMD-SHELL", "pgrep named"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + privileged: false + networks: + rac_pub1_nw: + ipv4_address: ${DNS_PUBLIC_IP} + racnode-storage: + container_name: ${STORAGE_CONTAINER_NAME} + hostname: 
${STORAGE_HOST_NAME} + image: ${STORAGE_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + volumes: + - ${NFS_STORAGE_VOLUME}:/oradata + cap_add: + - SYS_ADMIN + - AUDIT_WRITE + - NET_ADMIN + restart: always + healthcheck: + test: + - CMD-SHELL + - /bin/bash -c "ls -lrt /oradata/ && showmount -e | grep '/oradata'" + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + networks: + rac_pub1_nw: + ipv4_address: ${STORAGE_PUBLIC_IP} + racnodep1: + container_name: ${RACNODE1_CONTAINER_NAME} + hostname: ${RACNODE1_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - racstorage:/oradata + - ${GRID_RESPONSE_FILE}:/tmp/grid_21c.rsp + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE1_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE1_CRS_PRIVATE_IP2} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_STRING: "/oradata" + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + ASM_ON_NAS: True + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodep2: + container_name: ${RACNODE2_CONTAINER_NAME} + hostname: ${RACNODE2_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - racstorage:${CRS_ASM_DISCOVERY_STRING} + - ${GRID_RESPONSE_FILE}:/tmp/grid_21c.rsp + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE2_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE2_CRS_PRIVATE_IP2} + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_STRING: "/oradata" + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + ASM_ON_NAS: True + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodepc1-cman: + container_name: ${CMAN_CONTAINER_NAME} + hostname: ${CMAN_HOST_NAME} + image: ${CMAN_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + dns: ${DNS_PUBLIC_IP} + environment: + DOMAIN_NAME: ${DNS_DOMAIN} + 
PUBLIC_IP: ${CMAN_PUBLIC_IP} + PUBLIC_HOSTNAME: ${CMAN_PUBLIC_HOSTNAME} + DB_HOSTDETAILS: ${DB_HOSTDETAILS} + privileged: false + ports: + - 1521:1521 + networks: + rac_pub1_nw: + ipv4_address: ${CMAN_PUBLIC_IP} + cap_add: + - AUDIT_WRITE + - NET_RAW + healthcheck: + test: ["CMD-SHELL", "pgrep -f 'cmadmin'"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/README.md b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/README.md new file mode 100644 index 0000000000..b2923df944 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/README.md @@ -0,0 +1,827 @@ +# Oracle RAC on Podman Compose using Slim Image +=============================================================== + +Refer below instructions for setup of Oracle RAC on Podman Compose using Slim Image for various scenarios. + +- [Oracle RAC on Podman Compose using Slim Image](#oracle-rac-on-podman-compose-using-slim-image) + - [Section 1 : Prerequisites for Setting up Oracle RAC on Container Using Slim Image](#section-1-prerequisites-for-setting-up-oracle-rac-on-container-using-slim-image) + - [Section 2: Setup Oracle RAC Containers with Slim Image using Podman Compose Files](#section-2-setup-oracle-rac-containers-with-slim-image-using-podman-compose-files) + - [Section 2.1: Deploying With BlockDevices](#section-21-deploying-with-blockdevices) + - [Section 2.1.1: Setup Without Using User Defined Response files](#section-211-setup-without-using-user-defined-response-files) + - [Section 2.1.2: Setup Using User Defined Response files](#section-212-setup-using-user-defined-response-files) + - [Section 2.2: Deploying With NFS Storage Devices](#section-22-deploying-with-nfs-storage-devices) + - [Section 2.2.1: Setup Without Using User Defined Response files](#section-221-setup-without-using-user-defined-response-files) + - [Section 2.2.2: Setup Using User Defined Response files](#section-222-setup-using-user-defined-response-files) + - [Section 3: Sample of Addition of Nodes to Oracle RAC Containers based on Slim Image](#section-3-sample-of-addition-of-nodes-to-oracle-rac-containers-based-on-slim-image) + - [Section 3.1: Sample of Addition of Nodes to Oracle RAC Containers using Podman Compose based on Oracle RAC Slim Image with BlockDevices](#section-31-sample-of-addition-of-nodes-to-oracle-rac-containers-using-podman-compose-based-on-oracle-rac-slim-image-with-blockdevices) + - [Section 3.2: Sample of Addition of Nodes to Oracle RAC Containers using Podman Compose based on Oracle RAC Slim Image with NFS Storage Devices](#section-32-sample-of-addition-of-nodes-to-oracle-rac-containers-using-podman-compose-based-on-oracle-rac-slim-image-with-nfs-storage-devices) + - [Section 4: Environment Variables for Oracle RAC on Podman Compose](#section-4-environment-variables-for-oracle-rac-on-podman-compose) + - [Section 5: Validating Oracle RAC Environment](#section-5-validating-oracle-rac-environment) + - [Section 6: Connecting to Oracle RAC Environment](#section-6-connecting-to-oracle-rac-environment) + - [Cleanup](#cleanup) + - [Support](#support) + - [License](#license) + - [Copyright](#copyright) + +## Oracle RAC Setup on Podman Compose using Slim Image + +You can deploy multi node Oracle RAC Setup using Slim Image either on Block Devices or NFS storage Devices by using User Defined Response 
Files, or without using them. All of these scenarios are discussed in detail in the sections that follow.
+## Section 1: Prerequisites for Setting up Oracle RAC on Container using Slim Image
+**IMPORTANT:** Execute all the steps specified in this section before you proceed to the next section. Completing the prerequisite steps is required for successful configuration.
+
+* Execute the [Preparation Steps for running Oracle RAC Database in Containers](../../../README.md#preparation-steps-for-running-oracle-rac-database-in-containers).
+* Create the Oracle Connection Manager on Container image and container if the IPs are not available on the user network. Refer to the [RAC Oracle Connection Manager README.MD](../../../../OracleConnectionManager/README.md).
+* Make sure the Oracle RAC slim image is present, as shown below. If you have not created the Oracle RAC container image, execute [Section 2.1: Building Oracle RAC Database Slim Image](../../../README.md).
+  ```bash
+  # podman images|grep database-rac
+  localhost/oracle/database-rac 21.3.0-slim bf6ae21ccd5a 8 hours ago 517 MB
+  ```
+* Complete the [Network Management](../../../README.md#network-management) steps.
+* Complete the [Password Management](../../../README.md#password-management) steps.
+* `podman-compose` is part of the [ol8_developer_EPEL](https://yum.oracle.com/repo/OracleLinux/ol8/developer/EPEL/x86_64/index.html) repository. Enable the `ol8_developer_EPEL` repository and install `podman-compose` as follows:
+  ```bash
+  dnf config-manager --enable ol8_developer_EPEL
+  dnf install -y podman-compose
+  ```
+* Prepare empty host paths for the two nodes, similar to the following. These paths are mounted into the Oracle RAC nodes during container creation and are used later for installing the Oracle RAC software binaries:
+  ```bash
+  mkdir -p /scratch/rac/cluster01/node1
+  rm -rf /scratch/rac/cluster01/node1/*
+
+  mkdir -p /scratch/rac/cluster01/node2
+  rm -rf /scratch/rac/cluster01/node2/*
+  ```
+
+* Make sure the downloaded Oracle RAC software is staged and available to both RAC nodes. In the following example, the Oracle RAC software is staged at `/scratch/software/21c/goldimages`:
+  ```bash
+  ls /scratch/software/21c/goldimages
+  LINUX.X64_213000_db_home.zip LINUX.X64_213000_grid_home.zip
+  ```
+* If SELinux is enabled on the host machine, then also execute the following:
+  ```bash
+  semanage fcontext -a -t container_file_t /scratch/rac/cluster01/node1
+  restorecon -v /scratch/rac/cluster01/node1
+  semanage fcontext -a -t container_file_t /scratch/rac/cluster01/node2
+  restorecon -v /scratch/rac/cluster01/node2
+  semanage fcontext -a -t container_file_t /scratch/software/21c/goldimages/LINUX.X64_213000_grid_home.zip
+  restorecon -v /scratch/software/21c/goldimages/LINUX.X64_213000_grid_home.zip
+  semanage fcontext -a -t container_file_t /scratch/software/21c/goldimages/LINUX.X64_213000_db_home.zip
+  restorecon -v /scratch/software/21c/goldimages/LINUX.X64_213000_db_home.zip
+  ```
+Before proceeding further, make sure all of the preceding prerequisites are completed so that the two-node Oracle RAC containers can be set up using Podman Compose.
+
+## Section 2: Setup Oracle RAC Containers with Slim Image using Podman Compose Files
+
+### Section 2.1: Deploying With BlockDevices
+
+#### Section 2.1.1: Setup Without Using User Defined Response files
+Make sure you have completed the prerequisite step to install Podman Compose on the required Podman host machines.
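+
+Before proceeding, you can optionally confirm that Podman and Podman Compose are installed and on the `PATH`. This is only an illustrative sanity check; the exact version output varies by release:
+```bash
+# Verify the installed versions of podman and podman-compose
+podman --version
+podman-compose --version
+```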
+ +Now, Export the required environment variables required by `podman-compose.yml` file - +```bash +export HEALTHCHECK_INTERVAL=60s +export HEALTHCHECK_TIMEOUT=120s +export HEALTHCHECK_RETRIES=240 +export RACNODE1_CONTAINER_NAME=racnodep1 +export RACNODE1_HOST_NAME=racnodep1 +export RACNODE1_PUBLIC_IP=10.0.20.170 +export RACNODE1_CRS_PRIVATE_IP1=192.168.17.170 +export RACNODE1_CRS_PRIVATE_IP2=192.168.18.170 +export INSTALL_NODE=racnodep1 +export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0-slim +export DEFAULT_GATEWAY="10.0.20.1" +export CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" +export SCAN_NAME=racnodepc1-scan +export ASM_DEVICE1="/dev/asm-disk1" +export ASM_DEVICE2="/dev/asm-disk2" +export CRS_ASM_DEVICE_LIST="${ASM_DEVICE1},${ASM_DEVICE2}" +export ASM_DISK1="/dev/oracleoci/oraclevdd" +export ASM_DISK2="/dev/oracleoci/oraclevde" +export CRS_ASM_DISCOVERY_STRING="/dev/asm*" +export STAGING_SOFTWARE_LOC="/scratch/software/21c/goldimages/" +export RACNODE2_CONTAINER_NAME=racnodep2 +export RACNODE2_HOST_NAME=racnodep2 +export RACNODE2_PUBLIC_IP=10.0.20.171 +export RACNODE2_CRS_PRIVATE_IP1=192.168.17.171 +export RACNODE2_CRS_PRIVATE_IP2=192.168.18.171 +export DNS_CONTAINER_NAME=rac-dnsserver +export DNS_HOST_NAME=racdns +export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest" +export RAC_NODE_NAME_PREFIXP="racnodep" +export DNS_DOMAIN=example.info +export PUBLIC_NETWORK_NAME="rac_pub1_nw" +export PUBLIC_NETWORK_SUBNET="10.0.20.0/24" +export PRIVATE1_NETWORK_NAME="rac_priv1_nw" +export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24" +export PRIVATE2_NETWORK_NAME="rac_priv2_nw" +export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24" +export DNS_PUBLIC_IP=10.0.20.25 +export DNS_PRIVATE1_IP=192.168.17.25 +export DNS_PRIVATE2_IP=192.168.18.25 +export CMAN_CONTAINER_NAME=racnodepc1-cman +export CMAN_HOST_NAME=racnodepc1-cman +export CMAN_IMAGE_NAME="localhost/oracle/client-cman:21.3.0" +export CMAN_PUBLIC_IP=10.0.20.15 +export CMAN_PUBLIC_HOSTNAME="racnodepc1-cman" +export DB_HOSTDETAILS="HOST=racnodepc1-scan:RULE_ACT=accept,HOST=racnodep1:IP=10.0.20.170" +export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc +export KEY_SECRET_FILE=/opt/.secrets/key.pem +export DB_SERVICE=service:soepdb +``` +Create podman networks- +```bash +podman network create --driver=bridge --subnet=${PUBLIC_NETWORK_SUBNET} ${PUBLIC_NETWORK_NAME} +podman network create --driver=bridge --subnet=${PRIVATE1_NETWORK_SUBNET} ${PRIVATE1_NETWORK_NAME} --disable-dns --internal +podman network create --driver=bridge --subnet=${PRIVATE2_NETWORK_SUBNET} ${PRIVATE2_NETWORK_NAME} --disable-dns --internal +``` +Create compose file named [podman-compose.yml](./withoutresponsefiles/blockdevices/podman-compose.yml) in your working directory. 
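+
+Optionally, before bringing up any containers, you can verify that the three bridge networks exist with the expected subnets. This is only a sanity check based on the network names exported above; the inspect output format varies with the Podman version:
+```bash
+# List the RAC networks created above
+podman network ls | grep -E "rac_pub1_nw|rac_priv1_nw|rac_priv2_nw"
+# Show the subnet configured for the public network
+podman network inspect ${PUBLIC_NETWORK_NAME} | grep -i subnet
+```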
+
+
+Bring up the DNS container:
+```bash
+podman-compose up -d ${DNS_CONTAINER_NAME}
+podman-compose stop ${DNS_CONTAINER_NAME}
+podman network disconnect ${PUBLIC_NETWORK_NAME} ${DNS_CONTAINER_NAME}
+podman network disconnect ${PRIVATE1_NETWORK_NAME} ${DNS_CONTAINER_NAME}
+podman network disconnect ${PRIVATE2_NETWORK_NAME} ${DNS_CONTAINER_NAME}
+podman network connect ${PUBLIC_NETWORK_NAME} --ip ${DNS_PUBLIC_IP} ${DNS_CONTAINER_NAME}
+podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${DNS_PRIVATE1_IP} ${DNS_CONTAINER_NAME}
+podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${DNS_PRIVATE2_IP} ${DNS_CONTAINER_NAME}
+podman-compose start ${DNS_CONTAINER_NAME}
+```
+Bring up the Oracle RAC containers:
+```bash
+podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE1_CONTAINER_NAME}
+podman-compose stop ${RACNODE1_CONTAINER_NAME}
+
+podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE2_CONTAINER_NAME}
+podman-compose stop ${RACNODE2_CONTAINER_NAME}
+
+rm -rf /scratch/rac/cluster01/node1/*
+rm -rf /scratch/rac/cluster01/node2/*
+
+podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME}
+podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME}
+podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME}
+
+podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME}
+podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME}
+podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME}
+
+podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE1_PUBLIC_IP} ${RACNODE1_CONTAINER_NAME}
+podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP1} ${RACNODE1_CONTAINER_NAME}
+podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP2} ${RACNODE1_CONTAINER_NAME}
+
+podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE2_PUBLIC_IP} ${RACNODE2_CONTAINER_NAME}
+podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP1} ${RACNODE2_CONTAINER_NAME}
+podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP2} ${RACNODE2_CONTAINER_NAME}
+
+podman-compose start ${RACNODE1_CONTAINER_NAME}
+podman-compose start ${RACNODE2_CONTAINER_NAME}
+podman exec ${RACNODE1_CONTAINER_NAME} /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log"
+```
+
+You will see a message similar to the following when the Oracle RAC containers are set up properly:
+```bash
+===================================
+ORACLE RAC DATABASE IS READY TO USE
+===================================
+```
+
+Bring up the CMAN container:
+```bash
+podman-compose up -d ${CMAN_CONTAINER_NAME}
+```
+
+You will see a message similar to the following when the CMAN container is set up properly:
+```bash
+################################################
+CONNECTION MANAGER IS READY TO USE!
+################################################
+```
+#### Section 2.1.2: Setup Using User Defined Response files
+* In the folder shared between both RAC nodes, create a file named `grid_setup_new_21c.rsp` inside the directory `/scratch/common_scripts/podman/rac/`, similar to the one below. The same content is also saved in this [grid_setup_new_21c.rsp](withresponsefiles/blockdevices/grid_setup_new_21c.rsp) file.
+* Also, prepare a database response file similar to this [dbca_21c.rsp](./dbca_21c.rsp).
+* If SELinux is enabled on the host machine, then also execute the following (an optional check to confirm the labels is shown after the compose-file step below):
+  ```bash
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+  restorecon -v /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  restorecon -v /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  ```
+You can skip this step if you are not planning to use **User Defined Response Files for RAC**.
+
+Now export the environment variables required by the `podman-compose.yml` file:
+```bash
+export HEALTHCHECK_INTERVAL=60s
+export HEALTHCHECK_TIMEOUT=120s
+export HEALTHCHECK_RETRIES=240
+export RACNODE1_CONTAINER_NAME=racnodep1
+export RACNODE1_HOST_NAME=racnodep1
+export RACNODE1_PUBLIC_IP=10.0.20.170
+export RACNODE1_CRS_PRIVATE_IP1=192.168.17.170
+export RACNODE1_CRS_PRIVATE_IP2=192.168.18.170
+export INSTALL_NODE=racnodep1
+export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0-slim
+export STAGING_SOFTWARE_LOC="/scratch/software/21c/goldimages/"
+export DEFAULT_GATEWAY="10.0.20.1"
+export ASM_DEVICE1="/dev/asm-disk1"
+export ASM_DEVICE2="/dev/asm-disk2"
+export CRS_ASM_DEVICE_LIST="${ASM_DEVICE1},${ASM_DEVICE2}"
+export ASM_DISK1="/dev/oracleoci/oraclevdd"
+export ASM_DISK2="/dev/oracleoci/oraclevde"
+export RACNODE2_CONTAINER_NAME=racnodep2
+export RACNODE2_HOST_NAME=racnodep2
+export RACNODE2_PUBLIC_IP=10.0.20.171
+export RACNODE2_CRS_PRIVATE_IP1=192.168.17.171
+export RACNODE2_CRS_PRIVATE_IP2=192.168.18.171
+export DNS_CONTAINER_NAME=rac-dnsserver
+export DNS_HOST_NAME=racdns
+export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest"
+export RAC_NODE_NAME_PREFIXP="racnodep"
+export DNS_DOMAIN=example.info
+export PUBLIC_NETWORK_NAME="rac_pub1_nw"
+export PUBLIC_NETWORK_SUBNET="10.0.20.0/24"
+export PRIVATE1_NETWORK_NAME="rac_priv1_nw"
+export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24"
+export PRIVATE2_NETWORK_NAME="rac_priv2_nw"
+export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24"
+export DNS_PUBLIC_IP=10.0.20.25
+export CMAN_CONTAINER_NAME=racnodepc1-cman
+export CMAN_HOST_NAME=racnodepc1-cman
+export CMAN_IMAGE_NAME="localhost/oracle/client-cman:21.3.0"
+export CMAN_PUBLIC_IP=10.0.20.15
+export CMAN_PUBLIC_HOSTNAME="racnodepc1-cman"
+export DB_HOSTDETAILS="HOST=racnodepc1-scan:RULE_ACT=accept,HOST=racnodep1:IP=10.0.20.170"
+export GRID_RESPONSE_FILE="/scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp"
+export DB_RESPONSE_FILE="/scratch/common_scripts/podman/rac/dbca_21c.rsp"
+export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc
+export KEY_SECRET_FILE=/opt/.secrets/key.pem
+export DB_SERVICE=service:soepdb
+```
+Create the Podman networks:
+```bash
+podman network create --driver=bridge --subnet=${PUBLIC_NETWORK_SUBNET} ${PUBLIC_NETWORK_NAME}
+podman network create --driver=bridge --subnet=${PRIVATE1_NETWORK_SUBNET} ${PRIVATE1_NETWORK_NAME} --disable-dns --internal
+podman network create --driver=bridge --subnet=${PRIVATE2_NETWORK_SUBNET} ${PRIVATE2_NETWORK_NAME} --disable-dns --internal
+```
+Create a compose file named [podman-compose.yml](./withresponsefiles/blockdevices/podman-compose.yml) in your working directory.
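+
+If SELinux is enforcing, you can optionally confirm that the response files picked up the `container_file_t` label applied above before the containers try to read them. This is only a sanity check using the same paths as above:
+```bash
+# Display the SELinux context of the response files
+ls -Z /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp /scratch/common_scripts/podman/rac/dbca_21c.rsp
+```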
+ + +Bring up DNS Containers- +```bash +podman-compose up -d ${DNS_CONTAINER_NAME} +podman-compose stop ${DNS_CONTAINER_NAME} +podman network disconnect ${PUBLIC_NETWORK_NAME} ${DNS_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${DNS_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${DNS_CONTAINER_NAME} +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${DNS_PUBLIC_IP} ${DNS_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${DNS_PRIVATE1_IP} ${DNS_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${DNS_PRIVATE2_IP} ${DNS_CONTAINER_NAME} +podman-compose start ${DNS_CONTAINER_NAME} +``` +Bring up RAC Containers- +```bash +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE1_CONTAINER_NAME} +podman-compose stop ${RACNODE1_CONTAINER_NAME} +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE2_CONTAINER_NAME} +podman-compose stop ${RACNODE2_CONTAINER_NAME} +rm -rf /scratch/rac/cluster01/node1/* +rm -rf /scratch/rac/cluster01/node2/* +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE1_PUBLIC_IP} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP1} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP2} ${RACNODE1_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE2_PUBLIC_IP} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP1} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP2} ${RACNODE2_CONTAINER_NAME} + +podman-compose start ${RACNODE1_CONTAINER_NAME} +podman-compose start ${RACNODE2_CONTAINER_NAME} +podman exec ${RACNODE1_CONTAINER_NAME} /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log" +``` + +Successful Message when RAC container is setup properly- +```bash +=================================== +ORACLE RAC DATABASE IS READY TO USE +=================================== +``` + +Bring up CMAN Container- +```bash +podman-compose up -d ${CMAN_CONTAINER_NAME} +``` + +Successful Message when CMAN container is setup properly- +```bash +################################################ +CONNECTION MANAGER IS READY TO USE! 
+################################################ +``` +### Section 2.2: Deploying With NFS Storage Devices +#### Section 2.2.1: Setup Without Using User Defined Response files + +Create placeholder for NFS storage and make sure it is empty - + + ```bash + export ORACLE_DBNAME=ORCLCDB + mkdir -p /scratch/stage/rac-storage/$ORACLE_DBNAME + rm -rf /scratch/stage/rac-storage/ORCLCDB/asm_disk0* + ``` + +Now, Export the required environment variables required by `podman-compose.yml` file - +```bash +export HEALTHCHECK_INTERVAL=60s +export HEALTHCHECK_TIMEOUT=120s +export HEALTHCHECK_RETRIES=240 +export RACNODE1_CONTAINER_NAME=racnodep1 +export RACNODE1_HOST_NAME=racnodep1 +export RACNODE1_PUBLIC_IP=10.0.20.170 +export RACNODE1_CRS_PRIVATE_IP1=192.168.17.170 +export RACNODE1_CRS_PRIVATE_IP2=192.168.18.170 +export INSTALL_NODE=racnodep1 +export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0-slim +export STAGING_SOFTWARE_LOC="/scratch/software/21c/goldimages/" +export DEFAULT_GATEWAY="10.0.20.1" +export CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" +export SCAN_NAME=racnodepc1-scan +export CRS_ASM_DISCOVERY_STRING="/oradata" +export CRS_ASM_DEVICE_LIST="/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img" +export RACNODE2_CONTAINER_NAME=racnodep2 +export RACNODE2_HOST_NAME=racnodep2 +export RACNODE2_PUBLIC_IP=10.0.20.171 +export RACNODE2_CRS_PRIVATE_IP1=192.168.17.171 +export RACNODE2_CRS_PRIVATE_IP2=192.168.18.171 +export DNS_CONTAINER_NAME=rac-dnsserver +export DNS_HOST_NAME=racdns +export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest" +export RAC_NODE_NAME_PREFIXP="racnodep" +export DNS_DOMAIN=example.info +export PUBLIC_NETWORK_NAME="rac_pub1_nw" +export PUBLIC_NETWORK_SUBNET="10.0.20.0/24" +export PRIVATE1_NETWORK_NAME="rac_priv1_nw" +export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24" +export PRIVATE2_NETWORK_NAME="rac_priv2_nw" +export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24" +export DNS_PUBLIC_IP=10.0.20.25 +export CMAN_CONTAINER_NAME=racnodepc1-cman +export CMAN_HOST_NAME=racnodepc1-cman +export CMAN_IMAGE_NAME="localhost/oracle/client-cman:21.3.0" +export CMAN_PUBLIC_IP=10.0.20.15 +export CMAN_PUBLIC_HOSTNAME="racnodepc1-cman" +export DB_HOSTDETAILS="HOST=racnodepc1-scan:RULE_ACT=accept,HOST=racnodep1:IP=10.0.20.170" +export STORAGE_CONTAINER_NAME="racnode-storage" +export STORAGE_HOST_NAME="racnode-storage" +export STORAGE_IMAGE_NAME="localhost/oracle/rac-storage-server:latest" +export ORACLE_DBNAME="ORCLCDB" +export STORAGE_PUBLIC_IP=10.0.20.80 +export NFS_STORAGE_VOLUME="/scratch/stage/rac-storage/$ORACLE_DBNAME" +export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc +export KEY_SECRET_FILE=/opt/.secrets/key.pem +export DB_SERVICE=service:soepdb +``` +Create podman networks- +```bash +podman network create --driver=bridge --subnet=${PUBLIC_NETWORK_SUBNET} ${PUBLIC_NETWORK_NAME} +podman network create --driver=bridge --subnet=${PRIVATE1_NETWORK_SUBNET} ${PRIVATE1_NETWORK_NAME} --disable-dns --internal +podman network create --driver=bridge --subnet=${PRIVATE2_NETWORK_SUBNET} ${PRIVATE2_NETWORK_NAME} --disable-dns --internal +``` +Create compose file named [podman-compose.yml](./withoutresponsefiles/nfsdevices/podman-compose.yml) in your working directory. 
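+
+Before starting the storage container, it can help to confirm that the NFS placeholder directory created above exists, is empty, and has enough free space for the ASM disk image files. This is only an optional sanity check, assuming the `NFS_STORAGE_VOLUME` path exported above:
+```bash
+# The placeholder should exist and contain no leftover asm_disk files
+ls -l /scratch/stage/rac-storage/$ORACLE_DBNAME
+# Check available space on the backing filesystem
+df -h /scratch/stage/rac-storage
+```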
+ + +Bring up DNS Containers- +```bash +podman-compose up -d ${DNS_CONTAINER_NAME} +podman-compose stop ${DNS_CONTAINER_NAME} +podman network disconnect ${PUBLIC_NETWORK_NAME} ${DNS_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${DNS_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${DNS_CONTAINER_NAME} +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${DNS_PUBLIC_IP} ${DNS_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${DNS_PRIVATE1_IP} ${DNS_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${DNS_PRIVATE2_IP} ${DNS_CONTAINER_NAME} +podman-compose start ${DNS_CONTAINER_NAME} +``` + +Bring up Storage Container- +```bash +podman-compose --podman-run-args="-t -i --systemd=always" up -d ${STORAGE_CONTAINER_NAME} +podman-compose exec ${STORAGE_CONTAINER_NAME} tail -f /tmp/storage_setup.log + +Export list for racnode-storage: +/oradata * +################################################# + Setup Completed +################################################# +``` + +Create NFS volume- +```bash +podman volume create --driver local \ +--opt type=nfs \ +--opt o=addr=10.0.20.80,rw,bg,hard,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 \ +--opt device=10.0.20.80:/oradata \ +racstorage +``` +Bring up RAC Containers- +```bash +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE1_CONTAINER_NAME} +podman-compose stop ${RACNODE1_CONTAINER_NAME} +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE2_CONTAINER_NAME} +podman-compose stop ${RACNODE2_CONTAINER_NAME} +rm -rf /scratch/rac/cluster01/node1/* +rm -rf /scratch/rac/cluster01/node2/* +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE1_PUBLIC_IP} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP1} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP2} ${RACNODE1_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE2_PUBLIC_IP} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP1} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP2} ${RACNODE2_CONTAINER_NAME} + +podman-compose start ${RACNODE1_CONTAINER_NAME} +podman-compose start ${RACNODE2_CONTAINER_NAME} +podman exec ${RACNODE1_CONTAINER_NAME} /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log" +``` + +Successful Message when RAC container is setup properly- +```bash +=================================== +ORACLE RAC DATABASE IS READY TO USE +=================================== +``` + +Bring up CMAN Container- +```bash +podman-compose up -d ${CMAN_CONTAINER_NAME} + +podman-compose logs -f ${CMAN_CONTAINER_NAME} +################################################ + CONNECTION MANAGER IS READY TO USE! 
+#### Section 2.2.2: Setup Using User Defined Response files
+
+* Create a placeholder directory for NFS storage and make sure it is empty:
+
+  ```bash
+  export ORACLE_DBNAME=ORCLCDB
+  mkdir -p /scratch/stage/rac-storage/$ORACLE_DBNAME
+  rm -rf /scratch/stage/rac-storage/ORCLCDB/asm_disk0*
+  ```
+* Copy the file [grid_setup_new_21c.rsp](withresponsefiles/nfsdevices/grid_setup_new_21c.rsp) to a folder shared between both RAC nodes, for example `/scratch/common_scripts/podman/rac`.
+* Also copy [dbca_21c.rsp](./dbca_21c.rsp) to `/scratch/common_scripts/podman/rac`.
+* If SELinux is enabled on the host machine, then also run the following commands:
+  ```bash
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+  restorecon -v /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp
+  semanage fcontext -a -t container_file_t /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  restorecon -v /scratch/common_scripts/podman/rac/dbca_21c.rsp
+  ```
+
+You can skip this step if you are not planning to use **User Defined Response Files for RAC**.
+
+Now export the environment variables required by the `podman-compose.yml` file:
+```bash
+export HEALTHCHECK_INTERVAL=60s
+export HEALTHCHECK_TIMEOUT=120s
+export HEALTHCHECK_RETRIES=240
+export RACNODE1_CONTAINER_NAME=racnodep1
+export RACNODE1_HOST_NAME=racnodep1
+export RACNODE1_PUBLIC_IP=10.0.20.170
+export RACNODE1_CRS_PRIVATE_IP1=192.168.17.170
+export RACNODE1_CRS_PRIVATE_IP2=192.168.18.170
+export INSTALL_NODE=racnodep1
+export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0-slim
+export STAGING_SOFTWARE_LOC="/scratch/software/21c/goldimages/"
+export DEFAULT_GATEWAY="10.0.20.1"
+export SCAN_NAME=racnodepc1-scan
+export RACNODE2_CONTAINER_NAME=racnodep2
+export RACNODE2_HOST_NAME=racnodep2
+export RACNODE2_PUBLIC_IP=10.0.20.171
+export RACNODE2_CRS_PRIVATE_IP1=192.168.17.171
+export RACNODE2_CRS_PRIVATE_IP2=192.168.18.171
+export DNS_CONTAINER_NAME=rac-dnsserver
+export DNS_HOST_NAME=racdns
+export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest"
+export RAC_NODE_NAME_PREFIXP="racnodep"
+export DNS_DOMAIN=example.info
+export PUBLIC_NETWORK_NAME="rac_pub1_nw"
+export PUBLIC_NETWORK_SUBNET="10.0.20.0/24"
+export PRIVATE1_NETWORK_NAME="rac_priv1_nw"
+export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24"
+export PRIVATE2_NETWORK_NAME="rac_priv2_nw"
+export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24"
+export DNS_PUBLIC_IP=10.0.20.25
+export CMAN_CONTAINER_NAME=racnodepc1-cman
+export CMAN_HOST_NAME=racnodepc1-cman
+export CMAN_IMAGE_NAME="localhost/oracle/client-cman:21.3.0"
+export CMAN_PUBLIC_IP=10.0.20.15
+export CMAN_PUBLIC_HOSTNAME="racnodepc1-cman"
+export DB_HOSTDETAILS="HOST=racnodepc1-scan:RULE_ACT=accept,HOST=racnodep1:IP=10.0.20.170"
+export STORAGE_CONTAINER_NAME="racnode-storage"
+export STORAGE_HOST_NAME="racnode-storage"
+export STORAGE_IMAGE_NAME="localhost/oracle/rac-storage-server:latest"
+export ORACLE_DBNAME="ORCLCDB"
+export STORAGE_PUBLIC_IP=10.0.20.80
+export NFS_STORAGE_VOLUME="/scratch/stage/rac-storage/$ORACLE_DBNAME"
+export GRID_RESPONSE_FILE="/scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp"
+export DB_RESPONSE_FILE="/scratch/common_scripts/podman/rac/dbca_21c.rsp"
+export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc
+export KEY_SECRET_FILE=/opt/.secrets/key.pem
+export DB_SERVICE=service:soepdb
+```
+Create the Podman networks:
+```bash
+podman network create --driver=bridge --subnet=${PUBLIC_NETWORK_SUBNET} ${PUBLIC_NETWORK_NAME}
+podman 
network create --driver=bridge --subnet=${PRIVATE1_NETWORK_SUBNET} ${PRIVATE1_NETWORK_NAME} --disable-dns --internal +podman network create --driver=bridge --subnet=${PRIVATE2_NETWORK_SUBNET} ${PRIVATE2_NETWORK_NAME} --disable-dns --internal +``` +Create compose file named [podman-compose.yml](./withresponsefiles/nfsdevices/podman-compose.yml) in your working directory. +Bring up DNS Containers- +```bash +podman-compose up -d ${DNS_CONTAINER_NAME} +podman-compose stop ${DNS_CONTAINER_NAME} +podman network disconnect ${PUBLIC_NETWORK_NAME} ${DNS_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${DNS_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${DNS_CONTAINER_NAME} +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${DNS_PUBLIC_IP} ${DNS_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${DNS_PRIVATE1_IP} ${DNS_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${DNS_PRIVATE2_IP} ${DNS_CONTAINER_NAME} +podman-compose start ${DNS_CONTAINER_NAME} +``` + +Successful logs when DNS container comes up- +```bash +podman-compose logs ${DNS_CONTAINER_NAME} +################################################ + DNS Server IS READY TO USE! +################################################ +``` +Bring up Storage Container- +```bash +podman-compose --podman-run-args="-t -i --systemd=always" up -d ${STORAGE_CONTAINER_NAME} +podman-compose exec ${STORAGE_CONTAINER_NAME} tail -f /tmp/storage_setup.log + +Export list for racnode-storage: +/oradata * +################################################# + Setup Completed +################################################# +``` + +Create NFS volume- +```bash +podman volume create --driver local \ +--opt type=nfs \ +--opt o=addr=10.0.20.80,rw,bg,hard,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 \ +--opt device=10.0.20.80:/oradata \ +racstorage +``` + +Bring up RAC Containers- +```bash +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE1_CONTAINER_NAME} +podman-compose stop ${RACNODE1_CONTAINER_NAME} + +podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE2_CONTAINER_NAME} +podman-compose stop ${RACNODE2_CONTAINER_NAME} + +rm -rf /scratch/rac/cluster01/node1/* +rm -rf /scratch/rac/cluster01/node2/* + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE1_CONTAINER_NAME} + +podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} +podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE2_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE1_PUBLIC_IP} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP1} ${RACNODE1_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE1_CRS_PRIVATE_IP2} ${RACNODE1_CONTAINER_NAME} + +podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE2_PUBLIC_IP} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP1} ${RACNODE2_CONTAINER_NAME} +podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE2_CRS_PRIVATE_IP2} ${RACNODE2_CONTAINER_NAME} + +podman-compose start 
${RACNODE1_CONTAINER_NAME}
+podman-compose start ${RACNODE2_CONTAINER_NAME}
+podman exec ${RACNODE1_CONTAINER_NAME} /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log"
+```
+
+Successful message when the Oracle RAC containers are set up properly:
+```bash
+===================================
+ORACLE RAC DATABASE IS READY TO USE
+===================================
+```
+
+(Optional) Bring up the CMAN container:
+```bash
+podman-compose up -d ${CMAN_CONTAINER_NAME}
+podman-compose logs -f ${CMAN_CONTAINER_NAME}
+################################################
+ CONNECTION MANAGER IS READY TO USE!
+################################################
+```
+## Section 3: Sample of Addition of Nodes to Oracle RAC Containers based on Slim Image
+
+* Before you proceed to add an additional node, create a placeholder directory for it:
+  ```bash
+  mkdir -p /scratch/rac/cluster01/node3
+  rm -rf /scratch/rac/cluster01/node3/*
+  ```
+* If SELinux is enabled on your machine, then also run the following commands:
+  ```bash
+  semanage fcontext -a -t container_file_t /scratch/rac/cluster01/node3
+  restorecon -v /scratch/rac/cluster01/node3
+  ```
+
+### Section 3.1: Sample of Addition of Nodes to Oracle RAC Containers using Podman Compose based on Oracle RAC Slim Image with BlockDevices
+
+Below is an example of adding one more node to an existing two-node Oracle RAC cluster based on the Oracle RAC slim image, using a Podman Compose file.
+
+Create a compose file named [podman-compose.yml](./withoutresponsefiles/blockdevices/addition/podman-compose.yml) in your working directory.
+
+Export the environment variables required by the `podman-compose.yml` file:
+```bash
+export HEALTHCHECK_INTERVAL=60s
+export HEALTHCHECK_TIMEOUT=120s
+export HEALTHCHECK_RETRIES=240
+export RACNODE3_CONTAINER_NAME=racnodep3
+export RACNODE3_HOST_NAME=racnodep3
+export RACNODE3_PUBLIC_IP=10.0.20.172
+export RACNODE3_CRS_PRIVATE_IP1=192.168.17.172
+export RACNODE3_CRS_PRIVATE_IP2=192.168.18.172
+export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0-slim
+export DEFAULT_GATEWAY="10.0.20.1"
+export CRS_NODES=pubhost:racnodep3,viphost:racnodep3-vip
+export SCAN_NAME=racnodepc1-scan
+export ASM_DEVICE1="/dev/asm-disk1"
+export ASM_DEVICE2="/dev/asm-disk2"
+export CRS_ASM_DEVICE_LIST="${ASM_DEVICE1},${ASM_DEVICE2}"
+export ASM_DISK1="/dev/oracleoci/oraclevdd"
+export ASM_DISK2="/dev/oracleoci/oraclevde"
+export STAGING_SOFTWARE_LOC="/scratch/software/21c/goldimages/"
+export DNS_DOMAIN=example.info
+export PUBLIC_NETWORK_NAME="rac_pub1_nw"
+export PRIVATE1_NETWORK_NAME="rac_priv1_nw"
+export PRIVATE2_NETWORK_NAME="rac_priv2_nw"
+export DNS_PUBLIC_IP=10.0.20.25
+export OP_TYPE=racaddnode
+export DB_NAME=ORCLCDB
+export INSTALL_NODE=racnodep3
+export EXISTING_CLS_NODE=racnodep1,racnodep2
+export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc
+export KEY_SECRET_FILE=/opt/.secrets/key.pem
+export DB_SERVICE=service:soepdb
+```
+Bring up the additional Oracle RAC container:
+```bash
+podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE3_CONTAINER_NAME}
+podman-compose stop ${RACNODE3_CONTAINER_NAME}
+
+podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE3_CONTAINER_NAME}
+podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE3_CONTAINER_NAME}
+podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE3_CONTAINER_NAME}
+
+podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE3_PUBLIC_IP} ${RACNODE3_CONTAINER_NAME}
+podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE3_CRS_PRIVATE_IP1} ${RACNODE3_CONTAINER_NAME}
+podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE3_CRS_PRIVATE_IP2} ${RACNODE3_CONTAINER_NAME}
+
+podman-compose start ${RACNODE3_CONTAINER_NAME}
+podman exec ${RACNODE3_CONTAINER_NAME} /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log"
+```
+
+Successful message when the RAC container is set up properly:
+```bash
+========================================================
+Oracle Database ORCLCDB3 is up and running on racnodep3.
+========================================================
+```
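+
+After the setup log reports that the database is running on racnodep3, you can optionally confirm that the new node has joined the cluster. This is an illustrative sketch only; it assumes the Grid home path `/u01/app/21c/grid` and the `grid` OS user used elsewhere in this guide.
+```bash
+# Optional verification (illustrative): list cluster nodes and resource state from the new node
+podman exec ${RACNODE3_CONTAINER_NAME} su - grid -c "/u01/app/21c/grid/bin/olsnodes -n -s"
+podman exec ${RACNODE3_CONTAINER_NAME} su - grid -c "/u01/app/21c/grid/bin/crsctl stat res -t"
+```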
+### Section 3.2: Sample of Addition of Nodes to Oracle RAC Containers using Podman Compose based on Oracle RAC Slim Image with NFS Storage Devices
+
+Below is an example of adding one more node to an existing two-node Oracle RAC cluster that uses NFS storage devices, using a Podman Compose file.
+
+Create a compose file named [podman-compose.yml](./withoutresponsefiles/nfsdevices/addition/podman-compose.yml) in your working directory.
+
+Export the environment variables required by the `podman-compose.yml` file:
+```bash
+export HEALTHCHECK_INTERVAL=60s
+export HEALTHCHECK_TIMEOUT=120s
+export HEALTHCHECK_RETRIES=240
+export RACNODE3_CONTAINER_NAME=racnodep3
+export RACNODE3_HOST_NAME=racnodep3
+export RACNODE3_PUBLIC_IP=10.0.20.172
+export RACNODE3_CRS_PRIVATE_IP1=192.168.17.172
+export RACNODE3_CRS_PRIVATE_IP2=192.168.18.172
+export INSTALL_NODE=racnodep3
+export RAC_IMAGE_NAME=localhost/oracle/database-rac:21.3.0
+export DEFAULT_GATEWAY="10.0.20.1"
+export CRS_NODES="\"pubhost:racnodep3,viphost:racnodep3-vip\""
+export EXISTING_CLS_NODE="racnodep1,racnodep2"
+export SCAN_NAME=racnodepc1-scan
+export CRS_ASM_DISCOVERY_STRING="/oradata"
+export CRS_ASM_DEVICE_LIST="/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img"
+export DNS_CONTAINER_NAME=rac-dnsserver
+export DNS_HOST_NAME=racdns
+export DNS_IMAGE_NAME="oracle/rac-dnsserver:latest"
+export RAC_NODE_NAME_PREFIXP="racnodep"
+export STAGING_SOFTWARE_LOC="/scratch/software/21c/goldimages/"
+export DNS_DOMAIN=example.info
+export PUBLIC_NETWORK_NAME="rac_pub1_nw"
+export PUBLIC_NETWORK_SUBNET="10.0.20.0/24"
+export PRIVATE1_NETWORK_NAME="rac_priv1_nw"
+export PRIVATE1_NETWORK_SUBNET="192.168.17.0/24"
+export PRIVATE2_NETWORK_NAME="rac_priv2_nw"
+export PRIVATE2_NETWORK_SUBNET="192.168.18.0/24"
+export DNS_PUBLIC_IP=10.0.20.25
+export PWD_SECRET_FILE=/opt/.secrets/pwdfile.enc
+export KEY_SECRET_FILE=/opt/.secrets/key.pem
+export CMAN_CONTAINER_NAME=racnodepc1-cman
+export CMAN_HOST_NAME=racnodepc1-cman1
+export CMAN_PUBLIC_IP=10.0.20.15
+export CMAN_PUBLIC_HOSTNAME="racnodepc1-cman1"
+export DB_SERVICE=service:soepdb
+```
+Bring up the additional Oracle RAC container:
+```bash
+podman-compose --podman-run-args="-t -i --systemd=always --cpuset-cpus 0-1 --memory 16G --memory-swap 32G" up -d ${RACNODE3_CONTAINER_NAME}
+podman-compose stop ${RACNODE3_CONTAINER_NAME}
+
+podman network disconnect ${PUBLIC_NETWORK_NAME} ${RACNODE3_CONTAINER_NAME}
+podman network disconnect ${PRIVATE1_NETWORK_NAME} ${RACNODE3_CONTAINER_NAME}
+podman network disconnect ${PRIVATE2_NETWORK_NAME} ${RACNODE3_CONTAINER_NAME}
+
+podman network connect ${PUBLIC_NETWORK_NAME} --ip ${RACNODE3_PUBLIC_IP} ${RACNODE3_CONTAINER_NAME}
+podman network connect ${PRIVATE1_NETWORK_NAME} --ip ${RACNODE3_CRS_PRIVATE_IP1} ${RACNODE3_CONTAINER_NAME}
+podman network connect ${PRIVATE2_NETWORK_NAME} --ip ${RACNODE3_CRS_PRIVATE_IP2} ${RACNODE3_CONTAINER_NAME}
+
+podman-compose start ${RACNODE3_CONTAINER_NAME}
+podman exec ${RACNODE3_CONTAINER_NAME} /bin/bash -c "tail -f /tmp/orod/oracle_rac_setup.log"
+```
+
+Successful message when the RAC container is set up properly:
+```bash
+========================================================
+Oracle Database ORCLCDB3 is up and running on racnodep3.
+========================================================
+```
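+
+With NFS-backed storage you can also optionally confirm that the new node sees the shared ASM disk files and that the disk group resource is online. This is a hedged sketch: it assumes the `/oradata` NFS volume mount, the `DATA` disk group name used by this guide's samples, and the Grid home path `/u01/app/21c/grid`.
+```bash
+# Optional verification (illustrative): check the shared ASM disk files and disk group resource on racnodep3
+podman exec ${RACNODE3_CONTAINER_NAME} ls -l /oradata
+podman exec ${RACNODE3_CONTAINER_NAME} su - grid -c "/u01/app/21c/grid/bin/crsctl stat res ora.DATA.dg -t"
+```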
+## Section 4: Environment Variables for Oracle RAC on Podman Compose
+
+Refer to [Environment Variables Explained for Oracle RAC on Podman Compose](../../../docs/ENVVARIABLESCOMPOSE.md) for an explanation of all the environment variables related to Oracle RAC on Podman Compose. Change or set these environment variables as needed for your environment.
+
+## Section 5: Validating Oracle RAC Environment
+You can validate whether the environment is healthy by running the following command:
+```bash
+podman ps -a
+
+CONTAINER ID  IMAGE                                      COMMAND               CREATED         STATUS                   PORTS       NAMES
+f1345fd4047b  localhost/oracle/rac-dnsserver:latest      /bin/sh -c exec $...  8 hours ago     Up 8 hours (healthy)                 rac-dnsserver
+2f42e49758d1  localhost/oracle/database-rac:21.3.0-slim                        46 minutes ago  Up 37 minutes (healthy)              racnodep1
+a27fceea9fe6  localhost/oracle/database-rac:21.3.0-slim                        46 minutes ago  Up 37 minutes (healthy)              racnodep2
+```
+Note:
+- Look for `(healthy)` next to the container names in the `STATUS` column.
+
+## Section 6: Connecting to Oracle RAC Environment
+
+**IMPORTANT:** This section assumes that you have successfully created an Oracle RAC cluster using the preceding sections.
+Refer to [README](../../../docs/CONNECTING.md) for instructions on how to connect to the Oracle RAC Database.
+
+## Cleanup
+Refer to [README](../../../docs/CLEANUP.md) for instructions on how to clean up the Oracle RAC Database container environment.
+
+## Support
+
+At the time of this release, Oracle RAC on Podman is supported on Oracle Linux 8.10 and later. To see current Linux support certifications, refer to the [Oracle RAC on Podman Documentation](https://docs.oracle.com/en/database/oracle/oracle-database/21/install-and-upgrade.html).
+
+## License
+
+To download and run Oracle Grid Infrastructure and Oracle Database, regardless of whether inside or outside a container, you must download the binaries from the Oracle website and accept the license indicated on that page.
+
+All scripts and files hosted in this repository which are required to build the container images are, unless otherwise noted, released under the UPL 1.0 license.
+
+## Copyright
+
+Copyright (c) 2014-2024 Oracle and/or its affiliates. 
\ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/dbca_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/dbca_21c.rsp new file mode 100644 index 0000000000..233879826e --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/dbca_21c.rsp @@ -0,0 +1,58 @@ +responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v21.3.0 +gdbName=ORCLCDB +sid=ORCLCDB +databaseConfigType=RAC +RACOneNodeServiceName= +policyManaged=false +managementPolicy= +createServerPool=false +serverPoolName= +cardinality= +force=false +pqPoolName= +pqCardinality= +createAsContainerDatabase=true +numberOfPDBs=1 +pdbName=ORCLPDB +useLocalUndoForPDBs=true +pdbAdminPassword=ORacle__21c +nodelist=racnodep1,racnodep2 +templateName={ORACLE_HOME}/assistants/dbca/templates/General_Purpose.dbc +sysPassword=ORacle__21c +systemPassword=ORacle__21c +oracleHomeUserPassword= +emConfiguration= +runCVUChecks=true +dbsnmpPassword=ORacle__21c +omsHost= +omsPort= +emUser= +emPassword= +dvConfiguration=false +dvUserName= +dvUserPassword= +dvAccountManagerName= +dvAccountManagerPassword= +olsConfiguration=false +datafileJarLocation={ORACLE_HOME}/assistants/dbca/templates/ +datafileDestination=+DATA/{DB_UNIQUE_NAME}/ +recoveryAreaDestination= +storageType=ASM +diskGroupName=+DATA/{DB_UNIQUE_NAME}/ +asmsnmpPassword= +recoveryGroupName= +characterSet=AL32UTF8 +nationalCharacterSet=AL16UTF16 +registerWithDirService= +dirServiceUserName= +dirServicePassword= +walletPassword= +listeners=LISTENER +variablesFile= +variables=DB_UNIQUE_NAME=ORCLCDB,ORACLE_BASE=/u01/app/oracle,PDB_NAME=ORCLPDB,DB_NAME=ORCLCDB,ORACLE_HOME=/u01/app/oracle/product/21.3.0/dbhome_1,SID=ORCLCDB +initParams=audit_trail=none,audit_sys_operations=false,remote_login_passwordfile=exclusive +sampleSchema=false +memoryPercentage=40 +databaseType=MULTIPURPOSE +automaticMemoryManagement=false +totalMemory=5000 \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withoutresponsefiles/blockdevices/addition/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withoutresponsefiles/blockdevices/addition/podman-compose.yml new file mode 100644 index 0000000000..8af9e406f8 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withoutresponsefiles/blockdevices/addition/podman-compose.yml @@ -0,0 +1,87 @@ +--- +version: "3" +networks: + rac_pub1_nw: + name: ${PUBLIC_NETWORK_NAME} + external: true + rac_priv1_nw: + name: ${PRIVATE1_NETWORK_NAME} + external: true + rac_priv2_nw: + name: ${PRIVATE2_NETWORK_NAME} + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +services: + racnodep3: + container_name: ${RACNODE3_CONTAINER_NAME} + hostname: ${RACNODE3_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - /scratch/rac/cluster01/node3:/u01 + - /scratch:/scratch + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE3_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE3_CRS_PRIVATE_IP2} + OP_TYPE: racaddnode + INSTALL_NODE: ${INSTALL_NODE} + EXISTING_CLS_NODE: ${EXISTING_CLS_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + 
CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_DIR: ${CRS_ASM_DISCOVERY_DIR} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + GRID_HOME: /u01/app/21c/grid + GRID_BASE: /u01/app/grid + DB_HOME: /u01/app/oracle/product/21c/dbhome_1 + DB_BASE: /u01/app/oracle + INVENTORY: /u01/app/oraInventory + STAGING_SOFTWARE_LOC: ${STAGING_SOFTWARE_LOC} + GRID_SW_ZIP_FILE: LINUX.X64_213000_grid_home.zip + DB_SW_ZIP_FILE: LINUX.X64_213000_db_home.zip + DB_NAME: ORCLCDB + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withoutresponsefiles/blockdevices/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withoutresponsefiles/blockdevices/podman-compose.yml new file mode 100644 index 0000000000..8f940ed41c --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withoutresponsefiles/blockdevices/podman-compose.yml @@ -0,0 +1,196 @@ +--- +version: "3" +networks: + rac_pub1_nw: + external: true + rac_priv1_nw: + external: true + rac_priv2_nw: + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +services: + rac-dnsserver: + container_name: ${DNS_CONTAINER_NAME} + hostname: ${DNS_HOST_NAME} + image: ${DNS_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + environment: + SETUP_DNS_CONFIG_FILES: "setup_true" + DOMAIN_NAME: ${DNS_DOMAIN} + RAC_NODE_NAME_PREFIXP: ${RAC_NODE_NAME_PREFIXP} + WEBMIN_ENABLED: false + SETUP_DNS_CONFIG_FILES: "setup_true" + cap_add: + - AUDIT_WRITE + healthcheck: + test: ["CMD-SHELL", "pgrep named"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + privileged: false + networks: + rac_pub1_nw: + ipv4_address: ${DNS_PUBLIC_IP} + racnodep1: + container_name: ${RACNODE1_CONTAINER_NAME} + hostname: ${RACNODE1_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - /scratch/rac/cluster01/node1:/u01 + - /scratch:/scratch + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE1_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE1_CRS_PRIVATE_IP2} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + GRID_HOME: /u01/app/21c/grid + GRID_BASE: /u01/app/grid + DB_HOME: /u01/app/oracle/product/21c/dbhome_1 + DB_BASE: /u01/app/oracle + INVENTORY: /u01/app/oraInventory + STAGING_SOFTWARE_LOC: ${STAGING_SOFTWARE_LOC} + GRID_SW_ZIP_FILE: LINUX.X64_213000_grid_home.zip + DB_SW_ZIP_FILE: LINUX.X64_213000_db_home.zip + DB_NAME: 
ORCLCDB + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodep2: + container_name: ${RACNODE2_CONTAINER_NAME} + hostname: ${RACNODE2_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - /scratch/rac/cluster01/node2:/u01 + - /scratch:/scratch + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE2_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE2_CRS_PRIVATE_IP2} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + GRID_HOME: /u01/app/21c/grid + GRID_BASE: /u01/app/grid + DB_HOME: /u01/app/oracle/product/21c/dbhome_1 + DB_BASE: /u01/app/oracle + INVENTORY: /u01/app/oraInventory + STAGING_SOFTWARE_LOC: ${STAGING_SOFTWARE_LOC} + GRID_SW_ZIP_FILE: LINUX.X64_213000_grid_home.zip + DB_SW_ZIP_FILE: LINUX.X64_213000_db_home.zip + DB_NAME: ORCLCDB + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodepc1-cman: + container_name: ${CMAN_CONTAINER_NAME} + hostname: ${CMAN_HOST_NAME} + image: ${CMAN_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + dns: ${DNS_PUBLIC_IP} + environment: + DOMAIN_NAME: ${DNS_DOMAIN} + PUBLIC_IP: ${CMAN_PUBLIC_IP} + PUBLIC_HOSTNAME: ${CMAN_PUBLIC_HOSTNAME} + DB_HOSTDETAILS: ${DB_HOSTDETAILS} + privileged: false + ports: + - 1521:1521 + networks: + rac_pub1_nw: + ipv4_address: ${CMAN_PUBLIC_IP} + cap_add: + - AUDIT_WRITE + - NET_RAW + healthcheck: + test: ["CMD-SHELL", "pgrep -f 'cmadmin'"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withoutresponsefiles/nfsdevices/addition/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withoutresponsefiles/nfsdevices/addition/podman-compose.yml new file mode 
100644 index 0000000000..c6227aa8d3 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withoutresponsefiles/nfsdevices/addition/podman-compose.yml @@ -0,0 +1,89 @@ +--- +version: "3" +networks: + rac_pub1_nw: + name: ${PUBLIC_NETWORK_NAME} + external: true + rac_priv1_nw: + name: ${PRIVATE1_NETWORK_NAME} + external: true + rac_priv2_nw: + name: ${PRIVATE2_NETWORK_NAME} + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +volumes: + racstorage: + external: true +services: + racnodep3: + container_name: ${RACNODE3_CONTAINER_NAME} + hostname: ${RACNODE3_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - /scratch/rac/cluster01/node3:/u01 + - /scratch:/scratch + - racstorage:/oradata + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE3_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE3_CRS_PRIVATE_IP2} + OP_TYPE: racaddnode + INSTALL_NODE: ${INSTALL_NODE} + EXISTING_CLS_NODE: ${EXISTING_CLS_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_DIR: ${CRS_ASM_DISCOVERY_DIR} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + GRID_HOME: /u01/app/21c/grid + GRID_BASE: /u01/app/grid + DB_HOME: /u01/app/oracle/product/21c/dbhome_1 + DB_BASE: /u01/app/oracle + INVENTORY: /u01/app/oraInventory + STAGING_SOFTWARE_LOC: ${STAGING_SOFTWARE_LOC} + GRID_SW_ZIP_FILE: LINUX.X64_213000_grid_home.zip + DB_SW_ZIP_FILE: LINUX.X64_213000_db_home.zip + DB_NAME: ORCLCDB + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + ASM_ON_NAS: True + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withoutresponsefiles/nfsdevices/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withoutresponsefiles/nfsdevices/podman-compose.yml new file mode 100644 index 0000000000..5950c162da --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withoutresponsefiles/nfsdevices/podman-compose.yml @@ -0,0 +1,221 @@ +--- +version: "3" +networks: + rac_pub1_nw: + external: true + rac_priv1_nw: + external: true + rac_priv2_nw: + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +volumes: + racstorage: + external: true +services: + rac-dnsserver: + container_name: ${DNS_CONTAINER_NAME} + hostname: ${DNS_HOST_NAME} + image: ${DNS_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + environment: + SETUP_DNS_CONFIG_FILES: "setup_true" + DOMAIN_NAME: ${DNS_DOMAIN} + RAC_NODE_NAME_PREFIXP: ${RAC_NODE_NAME_PREFIXP} + WEBMIN_ENABLED: false + SETUP_DNS_CONFIG_FILES: "setup_true" + 
cap_add: + - AUDIT_WRITE + healthcheck: + test: ["CMD-SHELL", "pgrep named"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + privileged: false + networks: + rac_pub1_nw: + ipv4_address: ${DNS_PUBLIC_IP} + racnode-storage: + container_name: ${STORAGE_CONTAINER_NAME} + hostname: ${STORAGE_HOST_NAME} + image: ${STORAGE_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + volumes: + - ${NFS_STORAGE_VOLUME}:/oradata + cap_add: + - SYS_ADMIN + - AUDIT_WRITE + - NET_ADMIN + restart: always + healthcheck: + test: + - CMD-SHELL + - /bin/bash -c "ls -lrt /oradata/ && showmount -e | grep '/oradata'" + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + networks: + rac_pub1_nw: + ipv4_address: ${STORAGE_PUBLIC_IP} + racnodep1: + container_name: ${RACNODE1_CONTAINER_NAME} + hostname: ${RACNODE1_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - /scratch/rac/cluster01/node1:/u01 + - /scratch:/scratch + - racstorage:/oradata + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE1_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE1_CRS_PRIVATE_IP2} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_STRING: "/oradata" + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + GRID_HOME: /u01/app/21c/grid + GRID_BASE: /u01/app/grid + DB_HOME: /u01/app/oracle/product/21c/dbhome_1 + DB_BASE: /u01/app/oracle + INVENTORY: /u01/app/oraInventory + STAGING_SOFTWARE_LOC: ${STAGING_SOFTWARE_LOC} + GRID_SW_ZIP_FILE: LINUX.X64_213000_grid_home.zip + DB_SW_ZIP_FILE: LINUX.X64_213000_db_home.zip + DB_NAME: ORCLCDB + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + ASM_ON_NAS: True + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodep2: + container_name: ${RACNODE2_CONTAINER_NAME} + hostname: ${RACNODE2_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - /scratch/rac/cluster01/node2:/u01 + - /scratch:/scratch + - racstorage:/oradata + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE2_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE2_CRS_PRIVATE_IP2} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + CRS_NODES: ${CRS_NODES} + SCAN_NAME: ${SCAN_NAME} + CRS_ASM_DEVICE_LIST: ${CRS_ASM_DEVICE_LIST} + CRS_ASM_DISCOVERY_STRING: "/oradata" + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + GRID_HOME: /u01/app/21c/grid + GRID_BASE: /u01/app/grid + DB_HOME: /u01/app/oracle/product/21c/dbhome_1 + DB_BASE: /u01/app/oracle + INVENTORY: /u01/app/oraInventory + STAGING_SOFTWARE_LOC: ${STAGING_SOFTWARE_LOC} + GRID_SW_ZIP_FILE: 
LINUX.X64_213000_grid_home.zip + DB_SW_ZIP_FILE: LINUX.X64_213000_db_home.zip + DB_NAME: ORCLCDB + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + ASM_ON_NAS: True + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodepc1-cman: + container_name: ${CMAN_CONTAINER_NAME} + hostname: ${CMAN_HOST_NAME} + image: ${CMAN_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + dns: ${DNS_PUBLIC_IP} + environment: + DOMAIN_NAME: ${DNS_DOMAIN} + PUBLIC_IP: ${CMAN_PUBLIC_IP} + PUBLIC_HOSTNAME: ${CMAN_PUBLIC_HOSTNAME} + DB_HOSTDETAILS: ${DB_HOSTDETAILS} + privileged: false + ports: + - 1521:1521 + networks: + rac_pub1_nw: + ipv4_address: ${CMAN_PUBLIC_IP} + cap_add: + - AUDIT_WRITE + - NET_RAW + healthcheck: + test: ["CMD-SHELL", "pgrep -f 'cmadmin'"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withresponsefiles/blockdevices/grid_setup_new_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withresponsefiles/blockdevices/grid_setup_new_21c.rsp new file mode 100644 index 0000000000..c7ffe19d4a --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withresponsefiles/blockdevices/grid_setup_new_21c.rsp @@ -0,0 +1,64 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 +INVENTORY_LOCATION=/u01/app/oraInventory +oracle.install.option=CRS_CONFIG +ORACLE_BASE=/u01/app/grid +oracle.install.asm.OSDBA=dba +oracle.install.asm.OSOPER= +oracle.install.asm.OSASM=asmadmin +oracle.install.crs.config.scanType=LOCAL_SCAN +oracle.install.crs.config.SCANClientDataFile= +oracle.install.crs.config.gpnp.scanName=racnodepc1-scan +oracle.install.crs.config.gpnp.scanPort=1521 +oracle.install.crs.config.ClusterConfiguration=STANDALONE +oracle.install.crs.config.configureAsExtendedCluster=false +oracle.install.crs.config.memberClusterManifestFile= +oracle.install.crs.config.clusterName=rac01cluster +oracle.install.crs.config.gpnp.configureGNS= +oracle.install.crs.config.autoConfigureClusterNodeVIP=false +oracle.install.crs.config.gpnp.gnsOption= +oracle.install.crs.config.gpnp.gnsClientDataFile= +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= +oracle.install.crs.config.sites= +oracle.install.crs.config.clusterNodes=racnodep1:racnodep1-vip:HUB,racnodep2:racnodep2-vip:HUB +oracle.install.crs.config.networkInterfaceList=eth0:10.0.20.0:1,eth1:192.168.17.0:5,eth2:192.168.18.0:5 +oracle.install.asm.configureGIMRDataDG=false +oracle.install.crs.config.storageOption= +oracle.install.crs.config.useIPMI=false +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +oracle.install.asm.storageOption=ASM +oracle.install.asmOnNAS.ocrLocation= 
+oracle.install.asmOnNAS.configureGIMRDataDG=false +oracle.install.asmOnNAS.gimrLocation= +oracle.install.asm.SYSASMPassword=ORacle__21c +oracle.install.asm.diskGroup.name=DATA +oracle.install.asm.diskGroup.redundancy=EXTERNAL +oracle.install.asm.diskGroup.AUSize=4 +oracle.install.asm.diskGroup.FailureGroups= +oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/asm-disk1,,/dev/asm-disk2, +oracle.install.asm.diskGroup.disks=/dev/asm-disk1,/dev/asm-disk2 +oracle.install.asm.diskGroup.quorumFailureGroupNames= +oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm* +oracle.install.asm.monitorPassword=ORacle__21c +oracle.install.asm.gimrDG.name= +oracle.install.asm.gimrDG.redundancy= +oracle.install.asm.gimrDG.AUSize=1 +oracle.install.asm.gimrDG.FailureGroups= +oracle.install.asm.gimrDG.disksWithFailureGroupNames= +oracle.install.asm.gimrDG.disks= +oracle.install.asm.gimrDG.quorumFailureGroupNames= +oracle.install.asm.configureAFD=false +oracle.install.crs.configureRHPS=false +oracle.install.crs.config.ignoreDownNodes=false +oracle.install.config.managementOption=NONE +oracle.install.config.omsHost= +oracle.install.config.omsPort=0 +oracle.install.config.emAdminUser= +oracle.install.config.emAdminPassword= +oracle.install.crs.rootconfig.executeRootScript=false +oracle.install.crs.rootconfig.configMethod=ROOT +oracle.install.crs.rootconfig.sudoPath= +oracle.install.crs.rootconfig.sudoUserName= +oracle.install.crs.config.batchinfo= +oracle.install.crs.deleteNode.nodes= \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withresponsefiles/blockdevices/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withresponsefiles/blockdevices/podman-compose.yml new file mode 100644 index 0000000000..02fcc6b43c --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withresponsefiles/blockdevices/podman-compose.yml @@ -0,0 +1,190 @@ +--- +version: "3" +networks: + rac_pub1_nw: + external: true + rac_priv1_nw: + external: true + rac_priv2_nw: + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +services: + rac-dnsserver: + container_name: ${DNS_CONTAINER_NAME} + hostname: ${DNS_HOST_NAME} + image: ${DNS_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + environment: + SETUP_DNS_CONFIG_FILES: "setup_true" + DOMAIN_NAME: ${DNS_DOMAIN} + RAC_NODE_NAME_PREFIXP: ${RAC_NODE_NAME_PREFIXP} + WEBMIN_ENABLED: false + SETUP_DNS_CONFIG_FILES: "setup_true" + cap_add: + - AUDIT_WRITE + healthcheck: + test: ["CMD-SHELL", "pgrep named"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + privileged: false + networks: + rac_pub1_nw: + ipv4_address: ${DNS_PUBLIC_IP} + racnodep1: + container_name: ${RACNODE1_CONTAINER_NAME} + hostname: ${RACNODE1_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - /scratch/rac/cluster01/node1:/u01 + - /scratch:/scratch + - /scratch/common_scripts/podman/rac/dbca_21c.rsp:/tmp/dbca_21c.rsp + - /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp:/tmp/grid_21c.rsp + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE1_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE1_CRS_PRIVATE_IP2} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + SCAN_NAME: ${SCAN_NAME} + 
INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + GRID_HOME: /u01/app/21c/grid + STAGING_SOFTWARE_LOC: ${STAGING_SOFTWARE_LOC} + GRID_SW_ZIP_FILE: LINUX.X64_213000_grid_home.zip + DB_SW_ZIP_FILE: LINUX.X64_213000_db_home.zip + GRID_RESPONSE_FILE: /tmp/grid_21c.rsp + DBCA_RESPONSE_FILE: /tmp/dbca_21c.rsp + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodep2: + container_name: ${RACNODE2_CONTAINER_NAME} + hostname: ${RACNODE2_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - /scratch/rac/cluster01/node2:/u01 + - /scratch:/scratch + - /scratch/common_scripts/podman/rac/dbca_21c.rsp:/tmp/dbca_21c.rsp + - /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp:/tmp/grid_21c.rsp + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE2_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE2_CRS_PRIVATE_IP2} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + SCAN_NAME: ${SCAN_NAME} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + GRID_HOME: /u01/app/21c/grid + STAGING_SOFTWARE_LOC: ${STAGING_SOFTWARE_LOC} + GRID_SW_ZIP_FILE: LINUX.X64_213000_grid_home.zip + DB_SW_ZIP_FILE: LINUX.X64_213000_db_home.zip + GRID_RESPONSE_FILE: /tmp/grid_21c.rsp + DBCA_RESPONSE_FILE: /tmp/dbca_21c.rsp + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + DB_SERVICE: ${DB_SERVICE} + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + devices: + - "${ASM_DISK1}:${ASM_DEVICE1}" + - "${ASM_DISK2}:${ASM_DEVICE2}" + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodepc1-cman: + container_name: ${CMAN_CONTAINER_NAME} + hostname: ${CMAN_HOST_NAME} + image: ${CMAN_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + dns: ${DNS_PUBLIC_IP} + environment: + DOMAIN_NAME: ${DNS_DOMAIN} + PUBLIC_IP: ${CMAN_PUBLIC_IP} + PUBLIC_HOSTNAME: ${CMAN_PUBLIC_HOSTNAME} + DB_HOSTDETAILS: ${DB_HOSTDETAILS} + privileged: false + ports: + - 1521:1521 + networks: + rac_pub1_nw: + ipv4_address: ${CMAN_PUBLIC_IP} + cap_add: + - AUDIT_WRITE + - NET_RAW + healthcheck: + test: ["CMD-SHELL", "pgrep -f 'cmadmin'"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file diff --git 
a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withresponsefiles/nfsdevices/grid_setup_new_21c.rsp b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withresponsefiles/nfsdevices/grid_setup_new_21c.rsp new file mode 100644 index 0000000000..16062dd6cb --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withresponsefiles/nfsdevices/grid_setup_new_21c.rsp @@ -0,0 +1,64 @@ +oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v21.0.0 +INVENTORY_LOCATION=/u01/app/oraInventory +oracle.install.option=CRS_CONFIG +ORACLE_BASE=/u01/app/grid +oracle.install.asm.OSDBA=dba +oracle.install.asm.OSOPER= +oracle.install.asm.OSASM=asmadmin +oracle.install.crs.config.scanType=LOCAL_SCAN +oracle.install.crs.config.SCANClientDataFile= +oracle.install.crs.config.gpnp.scanName=racnodepc1-scan +oracle.install.crs.config.gpnp.scanPort=1521 +oracle.install.crs.config.ClusterConfiguration=STANDALONE +oracle.install.crs.config.configureAsExtendedCluster=false +oracle.install.crs.config.memberClusterManifestFile= +oracle.install.crs.config.clusterName=rac01cluster +oracle.install.crs.config.gpnp.configureGNS= +oracle.install.crs.config.autoConfigureClusterNodeVIP=false +oracle.install.crs.config.gpnp.gnsOption= +oracle.install.crs.config.gpnp.gnsClientDataFile= +oracle.install.crs.config.gpnp.gnsSubDomain= +oracle.install.crs.config.gpnp.gnsVIPAddress= +oracle.install.crs.config.sites= +oracle.install.crs.config.clusterNodes=racnodep1:racnodep1-vip:HUB,racnodep2:racnodep2-vip:HUB +oracle.install.crs.config.networkInterfaceList=eth0:10.0.20.0:1,eth1:192.168.17.0:5,eth2:192.168.18.0:5 +oracle.install.asm.configureGIMRDataDG=false +oracle.install.crs.config.storageOption= +oracle.install.crs.config.useIPMI=false +oracle.install.crs.config.ipmi.bmcUsername= +oracle.install.crs.config.ipmi.bmcPassword= +oracle.install.asm.storageOption=ASM +oracle.install.asmOnNAS.ocrLocation= +oracle.install.asmOnNAS.configureGIMRDataDG=false +oracle.install.asmOnNAS.gimrLocation= +oracle.install.asm.SYSASMPassword=ORacle__21c +oracle.install.asm.diskGroup.name=DATA +oracle.install.asm.diskGroup.redundancy=EXTERNAL +oracle.install.asm.diskGroup.AUSize=4 +oracle.install.asm.diskGroup.FailureGroups= +oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oradata/asm_disk01.img,,/oradata/asm_disk02.img,,/oradata/asm_disk03.img,,/oradata/asm_disk04.img,,/oradata/asm_disk05.im +oracle.install.asm.diskGroup.disks=/oradata/asm_disk01.img,/oradata/asm_disk02.img,/oradata/asm_disk03.img,/oradata/asm_disk04.img,/oradata/asm_disk05.img +oracle.install.asm.diskGroup.quorumFailureGroupNames= +oracle.install.asm.diskGroup.diskDiscoveryString=/oradata/asm_disk* +oracle.install.asm.monitorPassword=ORacle__21c +oracle.install.asm.gimrDG.name= +oracle.install.asm.gimrDG.redundancy= +oracle.install.asm.gimrDG.AUSize=1 +oracle.install.asm.gimrDG.FailureGroups= +oracle.install.asm.gimrDG.disksWithFailureGroupNames= +oracle.install.asm.gimrDG.disks= +oracle.install.asm.gimrDG.quorumFailureGroupNames= +oracle.install.asm.configureAFD=false +oracle.install.crs.configureRHPS=false +oracle.install.crs.config.ignoreDownNodes=false +oracle.install.config.managementOption=NONE +oracle.install.config.omsHost= +oracle.install.config.omsPort=0 +oracle.install.config.emAdminUser= +oracle.install.config.emAdminPassword= +oracle.install.crs.rootconfig.executeRootScript=false 
+oracle.install.crs.rootconfig.configMethod=ROOT +oracle.install.crs.rootconfig.sudoPath= +oracle.install.crs.rootconfig.sudoUserName= +oracle.install.crs.config.batchinfo= +oracle.install.crs.deleteNode.nodes= \ No newline at end of file diff --git a/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withresponsefiles/nfsdevices/podman-compose.yml b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withresponsefiles/nfsdevices/podman-compose.yml new file mode 100644 index 0000000000..240c1ed983 --- /dev/null +++ b/OracleDatabase/RAC/OracleRealApplicationClusters/samples/rac-compose/racslimimage/withresponsefiles/nfsdevices/podman-compose.yml @@ -0,0 +1,219 @@ +--- +version: "3" +networks: + rac_pub1_nw: + external: true + rac_priv1_nw: + external: true + rac_priv2_nw: + external: true +secrets: + pwdsecret: + file: ${PWD_SECRET_FILE} + keysecret: + file: ${KEY_SECRET_FILE} +volumes: + racstorage: + external: true +services: + rac-dnsserver: + container_name: ${DNS_CONTAINER_NAME} + hostname: ${DNS_HOST_NAME} + image: ${DNS_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + environment: + SETUP_DNS_CONFIG_FILES: "setup_true" + DOMAIN_NAME: ${DNS_DOMAIN} + RAC_NODE_NAME_PREFIXP: ${RAC_NODE_NAME_PREFIXP} + WEBMIN_ENABLED: false + SETUP_DNS_CONFIG_FILES: "setup_true" + cap_add: + - AUDIT_WRITE + healthcheck: + test: ["CMD-SHELL", "pgrep named"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + privileged: false + networks: + rac_pub1_nw: + ipv4_address: ${DNS_PUBLIC_IP} + racnode-storage: + container_name: ${STORAGE_CONTAINER_NAME} + hostname: ${STORAGE_HOST_NAME} + image: ${STORAGE_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + dns: ${DNS_PUBLIC_IP} + volumes: + - ${NFS_STORAGE_VOLUME}:/oradata + environment: + DNS_SERVER: ${DNS_PUBLIC_IP} + DOMAIN: ${DNS_DOMAIN} + cap_add: + - SYS_ADMIN + - AUDIT_WRITE + - NET_ADMIN + restart: always + healthcheck: + test: + - CMD-SHELL + - /bin/bash -c "ls -lrt /oradata/ && showmount -e | grep '/oradata'" + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + networks: + rac_pub1_nw: + ipv4_address: ${STORAGE_PUBLIC_IP} + racnodep1: + container_name: ${RACNODE1_CONTAINER_NAME} + hostname: ${RACNODE1_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - /scratch/rac/cluster01/node1:/u01 + - /scratch:/scratch + - /scratch/common_scripts/podman/rac/dbca_21c.rsp:/tmp/dbca_21c.rsp + - racstorage:/oradata + - /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp:/tmp/grid_21c.rsp + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE1_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE1_CRS_PRIVATE_IP2} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + SCAN_NAME: ${SCAN_NAME} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + GRID_HOME: /u01/app/21c/grid + STAGING_SOFTWARE_LOC: ${STAGING_SOFTWARE_LOC} + GRID_SW_ZIP_FILE: LINUX.X64_213000_grid_home.zip + DB_SW_ZIP_FILE: LINUX.X64_213000_db_home.zip + GRID_RESPONSE_FILE: /tmp/grid_21c.rsp + DBCA_RESPONSE_FILE: /tmp/dbca_21c.rsp + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + ASM_ON_NAS: True + DB_SERVICE: ${DB_SERVICE} + RESET_FAILED_SYSTEMD: true + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 
'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodep2: + container_name: ${RACNODE2_CONTAINER_NAME} + hostname: ${RACNODE2_HOST_NAME} + image: ${RAC_IMAGE_NAME} + restart: always + dns: ${DNS_PUBLIC_IP} + dns_search: ${DNS_DOMAIN} + shm_size: 4G + secrets: + - pwdsecret + - keysecret + volumes: + - /scratch/rac/cluster01/node2:/u01 + - /scratch:/scratch + - /scratch/common_scripts/podman/rac/dbca_21c.rsp:/tmp/dbca_21c.rsp + - /scratch/common_scripts/podman/rac/grid_setup_new_21c.rsp:/tmp/grid_21c.rsp + - racstorage:/oradata + environment: + DNS_SERVERS: ${DNS_PUBLIC_IP} + CRS_PRIVATE_IP1: ${RACNODE2_CRS_PRIVATE_IP1} + CRS_PRIVATE_IP2: ${RACNODE2_CRS_PRIVATE_IP2} + OP_TYPE: setuprac + INSTALL_NODE: ${INSTALL_NODE} + SCAN_NAME: ${SCAN_NAME} + INIT_SGA_SIZE: 3G + INIT_PGA_SIZE: 2G + GRID_HOME: /u01/app/21c/grid + STAGING_SOFTWARE_LOC: ${STAGING_SOFTWARE_LOC} + GRID_SW_ZIP_FILE: LINUX.X64_213000_grid_home.zip + DB_SW_ZIP_FILE: LINUX.X64_213000_db_home.zip + GRID_RESPONSE_FILE: /tmp/grid_21c.rsp + DBCA_RESPONSE_FILE: /tmp/dbca_21c.rsp + DB_PWD_FILE: pwdsecret + PWD_KEY: keysecret + CMAN_HOST: ${CMAN_HOST_NAME} + CMAN_PORT: 1521 + ASM_ON_NAS: True + DB_SERVICE: ${DB_SERVICE} + RESET_FAILED_SYSTEMD: true + sysctls: + - kernel.shmall=2097152 + - kernel.shmmax=8589934592 + - kernel.shmmni=4096 + - 'kernel.sem=250 32000 100 128' + - 'net.ipv4.conf.eth1.rp_filter=2' + - 'net.ipv4.conf.eth2.rp_filter=2' + ulimits: + rtprio: 99 + cap_add: + - SYS_RESOURCE + - NET_ADMIN + - SYS_NICE + - AUDIT_WRITE + - AUDIT_CONTROL + - NET_RAW + networks: + - rac_pub1_nw + - rac_priv1_nw + - rac_priv2_nw + healthcheck: + test: ["CMD", "/bin/python3", "/opt/scripts/startup/scripts/main.py", "--checkracstatus"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} + racnodepc1-cman: + container_name: ${CMAN_CONTAINER_NAME} + hostname: ${CMAN_HOST_NAME} + image: ${CMAN_IMAGE_NAME} + dns_search: ${DNS_DOMAIN} + dns: ${DNS_PUBLIC_IP} + environment: + DOMAIN_NAME: ${DNS_DOMAIN} + PUBLIC_IP: ${CMAN_PUBLIC_IP} + PUBLIC_HOSTNAME: ${CMAN_PUBLIC_HOSTNAME} + DB_HOSTDETAILS: ${DB_HOSTDETAILS} + privileged: false + ports: + - 1521:1521 + networks: + rac_pub1_nw: + ipv4_address: ${CMAN_PUBLIC_IP} + cap_add: + - AUDIT_WRITE + - NET_RAW + healthcheck: + test: ["CMD-SHELL", "pgrep -f 'cmadmin'"] + interval: ${HEALTHCHECK_INTERVAL} + timeout: ${HEALTHCHECK_TIMEOUT} + retries: ${HEALTHCHECK_RETRIES} \ No newline at end of file