diff --git a/Makefile b/Makefile
index bd332ca..a26f553 100644
--- a/Makefile
+++ b/Makefile
@@ -1,16 +1,21 @@
-.PHONY: default all OpenShift OCS CNV
+
+.PHONY: default all OpenShift OCS CNV prep
+
 default: OpenShift bell
 
 all: OpenShift OCS CNV bell
 
+prep:
+	pushd OpenShift; make pre_install; popd
+
 OpenShift:
 	pushd OpenShift; make; popd
 
 OCS: OpenShift
-	pushd OCS; ./customize-ocs.sh; popd
+	pushd OCS; make; popd
 
 CNV: OpenShift
-	pushd CNV; ./deploy-cnv.sh; popd
+	pushd CNV; make; popd
 
 bell:
 	@echo "Done!" $$'\a'
diff --git a/OpenShift/Makefile b/OpenShift/Makefile
index 2e32074..b5df877 100644
--- a/OpenShift/Makefile
+++ b/OpenShift/Makefile
@@ -1,7 +1,8 @@
-.PHONY: default all requirements configure
-default: requirements configure
+.PHONY: default all requirements configure create clean host_cleanup bell post_install pre_install
 
-all: default
+default: create post_install bell
+
+all: pre_install create post_install bell
 
 requirements:
 	./01_install_requirements.sh
@@ -9,6 +10,11 @@ requirements:
 configure:
 	./02_configure_host.sh
 
+pre_install: requirements configure
+
+create:
+	./03_create_cluster.sh
+
 post_install:
 	./99_post_install.sh
diff --git a/README.md b/README.md
index e3924b8..59bff42 100644
--- a/README.md
+++ b/README.md
@@ -16,7 +16,7 @@ KNI clusters consist of:
   [KubeVirt](https://kubevirt.io/) and using the [Hyperconverged
   Cluster Operator (HCO)](https://github.com/kubevirt/hyperconverged-cluster-operator).
 
-* 4x Dell PowerEdge R640 nodes, each with 2x Mellanox NICs, and 2x
+* 4x Dell PowerEdge R640 nodes, each with 2x Mellanox 25G NICs, and 2x
   Mellanox ethernet switches, all in a 12U rack. 1 node is used as a
   "provisioning host", while the other 3 nodes are OpenShift control
   plane machines.
@@ -29,26 +29,40 @@ will need to support both published releases and
 [pre-release versions
 (#12)](https://github.com/openshift-kni/install-scripts/issues/12)
 of each of these.
 
-The scripts will:
+### Preparation
+
+To ease installation, a [prepared ISO](https://github.com/openshift-kni/install-scripts/issues/20)
+can be used to install the "provisioning host". Using the prepared ISO addresses
+the following:
 
 1. [Creates an admin user (#21)](https://github.com/openshift-kni/install-scripts/issues/21)
    with passwordless sudo on the provisioning host.
 1. Ensure the provisioning host has all [required software
-   installed](https://github.com/openshift-kni/install-scripts/blob/master/01_install_requirements.sh). This
-   script will also be used to [prepare an ISO image
-   (#20)](https://github.com/openshift-kni/install-scripts/issues/20)
-   to speed up this part of the installation process.
-1. [Validate any environment requirements
-   (#22)](https://github.com/openshift-kni/install-scripts/issues/22) -
-   the [bare metal IPI network
-   requirements](https://github.com/openshift/installer/blob/master/docs/user/metal/install_ipi.md#network-requirements)
-   are a good example of environment requirements.
+   installed](https://github.com/openshift-kni/install-scripts/blob/master/01_install_requirements.sh).
 1. Apply any [configuration changes to the provisioning
    host](https://github.com/openshift-kni/install-scripts/blob/master/02_configure_host.sh)
-   that are required for the OpenShift installer - for example,
+   that are required for the OpenShift installer. For example,
    creating the `default` libvirt storage pool and the `baremetal`
    and `provisioning` bridges.
+
+Note: If not using the prepared ISO, optional scripts that handle the
+above prerequisites may be executed instead.
+
+
+### Installation and Validation
+
+The deployment process will use scripts to perform the following steps:
+
+1. [Validate any environment requirements
+   (#22)](https://github.com/openshift-kni/install-scripts/issues/22) -
+   the [bare metal IPI network
+   requirements](https://github.com/openshift/installer/blob/master/docs/user/metal/install_ipi.md#network-requirements)
+   are a good example of environment requirements.
 1. [Prepare the node information
    (#19)](https://github.com/openshift-kni/install-scripts/issues/19)
    required for the [bare metal IPI
@@ -58,7 +72,7 @@ The scripts will:
 1. Complete some post-install configuration - including [machine/node
    linkage (#14)](https://github.com/openshift-kni/install-scripts/issues/14),
-   and [configuring a storage VLAN on the `provisioning` interface on
+   and [configuring a tagged storage VLAN on the interface connected to the `Internal` network on
    the OpenShift nodes
    (#4)](https://github.com/openshift-kni/install-scripts/issues/4).
 1. [Deploy OCS
@@ -70,14 +84,18 @@ The scripts will:
    for image storage.
 1. [Deploy CNV](https://github.com/openshift-kni/install-scripts/blob/master/CNV/deploy-cnv.sh). [Configure
-   a bridge on the `baremetal` interface on OpenShift nodes
+   a bridge on the `External` interface on OpenShift nodes
    (#18)](https://github.com/openshift-kni/install-scripts/issues/18)
    to allow VMs to access this network.
 1. Temporarily install Ripsaw, carry out some performance tests,
    and capture the results.
 
+
+
+
 The following environment-specific information will be required for
-each installation:
+each installation. On a properly configured and prepared cluster,
+the following items will be discovered:
 
 1. A pull secret - used to access OpenShift content - and an SSH
    key that will be used to authenticate SSH access to the control plane
@@ -91,34 +109,36 @@ each installation:
    for API, Ingress, and DNS access.
 1. The BMC IPMI addresses and credentials for the 3 control plane
    machines.
-1. Optionally, a Network Time Protocol (NTP) server where the default
-   public server is not accessible
+1. If the provisioning host is detected to be out of sync with a time
+   source, configure the 25G switch as a time source via the DHCP
+   service on the `Storage` network. An optional script to set a time
+   source for the switch will be provided.
 
 ## Provisioning Host Setup
 
 The provisioning host must be a RHEL-8 machine.
 
-Make a copy of `config_example.sh` and set the required variables in
-there.
+### For a host not installed using the ISO
+
+In the OpenShift subdirectory, create a copy of `config_example.sh`,
+including the existing user as part of the file name (for example,
+`config_<user>.sh`). Once the file has been created, set the required
+`PULL_SECRET` variable within the shell script.
 
-To install some required packages and the `oc` client:
+To install the required packages and configure `libvirt` and the
+`provisioning` and `baremetal` bridges, run the following from the
+top-level directory:
 
 ```sh
-make requirements
+make prep
 ```
 
-Note:
-
-1. This ensures that a recent 4.2 build of `oc` is installed. The
-   minimum required version is hardcoded in the script.
-
-To configure libvirt, and prepare the `provisioning` and `baremetal`
-bridges:
-
+### For all nodes: create the cluster
+
 ```sh
-make configure
+make OpenShift
 ```
+
+Note: To increase the log-level output of `openshift-install`, set the
+`LOGLEVEL` environment variable:
+
+```sh
+export LOGLEVEL="debug"
+make OpenShift
+```
 
 ## Container Native Virtualization (CNV)
 
 The installation of CNV related operators is managed by a *meta operator* called the
 [HyperConverged Cluster Operator](https://github.com/kubevirt/hyperconverged-cluster-operator) (HCO).
@@ -126,6 +146,11 @@ Deploying with the *meta operator*
 will launch operators for KubeVirt, Containerized Data Importer (CDI), Cluster Network Addons (CNA),
 Common templates (SSP), Node Maintenance Operator (NMO) and Node Labeller Operator.
 
+### Deploy OCS via operator
+
+_Coming Soon_
+
+
 ### Deploy the HCO through the OperatorHub
 
 The HyperConverged Cluster Operator is listed in the Red Hat registry,
@@ -135,3 +160,6 @@ and selecting the HyperConverged Cluster Operator.
 If you want to use the CLI, we provide a [script](CNV/deploy-cnv.sh)
 that automates all the steps to the point of having a fully
 functional CNV deployment.
+
+
+
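With the restructured Makefiles above, a typical end-to-end run from the top of the repository reads as the following sketch. This is illustrative only: it assumes the targets introduced in this change (`prep`, `OpenShift`, `OCS`, `CNV`), the `LOGLEVEL` export is optional, and `make all` could replace the separate invocations.

```sh
# Prepare the provisioning host: install requirements and configure
# libvirt and the provisioning/baremetal bridges (01_* and 02_* scripts).
make prep

# Create the cluster and run post-install configuration
# (03_create_cluster.sh, 99_post_install.sh), with verbose installer output.
export LOGLEVEL="debug"
make OpenShift

# Deploy OpenShift Container Storage and Container Native Virtualization
# on top of the running cluster.
make OCS
make CNV
```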