This repository was archived by the owner on Mar 23, 2020. It is now read-only.

Readme updates #54

Merged: 9 commits, Sep 3, 2019
11 changes: 8 additions & 3 deletions Makefile
@@ -1,16 +1,21 @@
.PHONY: default all OpenShift OCS CNV
Contributor

The script in the CNV dir has changed, so this file needs an update for that. Because the script names changed, I was thinking a Makefile per directory could make sense; the root-level Makefile would then just pushd and call make. So if you raise a PR that changes a script name, you only need to touch the Makefile in your own directory instead of worrying about what other Makefiles call your scripts.
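A sketch of the per-directory layout suggested here — hypothetical, not the repo's actual contents; the sub-Makefile target name is an assumption, and `$(MAKE) -C` is used in place of pushd/popd since it is the idiomatic way for one Makefile to invoke another:

```make
# Root Makefile: delegate each component to its directory's own Makefile.
CNV:
	$(MAKE) -C CNV

# CNV/Makefile: owns its script names, so a rename stays local to this file.
default:
	./deploy-cnv.sh
```

With this split, a PR renaming `deploy-cnv.sh` touches only `CNV/Makefile`.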


.PHONY: default all OpenShift OCS CNV prep

default: OpenShift bell

all: OpenShift OCS CNV bell

prep:
	pushd OpenShift; make pre_install; popd

OpenShift:
	pushd OpenShift; make; popd

OCS: OpenShift
	pushd OCS; make; popd

CNV: OpenShift
	pushd CNV; make; popd

bell:
	@echo "Done!" $$'\a'
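As an aside, the `bell` recipe relies on Make's escaping: Make collapses `$$` to a single `$`, so the shell receives `$'\a'`, which is Bash ANSI-C quoting for the BEL character. A quick check of that byte (requires bash):

```sh
# Dump the byte produced by $'\a' as hex; BEL is 0x07, which rings the
# terminal bell when echoed.
printf '%s' $'\a' | od -An -tx1   # → 07
```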
12 changes: 9 additions & 3 deletions OpenShift/Makefile
@@ -1,14 +1,20 @@
.PHONY: default all requirements configure create clean host_cleanup bell post_install pre_install

default: create post_install bell

all: pre_install create post_install bell

requirements:
	./01_install_requirements.sh

configure:
	./02_configure_host.sh

pre_install: requirements configure

create:
	./03_create_cluster.sh

post_install:
	./99_post_install.sh
80 changes: 54 additions & 26 deletions README.md
@@ -16,7 +16,7 @@ KNI clusters consist of:
[KubeVirt](https://kubevirt.io/) and using the [Hyperconverged
Cluster Operator
(HCO)](https://github.com/kubevirt/hyperconverged-cluster-operator).
* 4x Dell PowerEdge R640 nodes, each with 2x Mellanox 25G NICs, and 2x
Mellanox ethernet switches, all in a 12U rack. 1 node is used as a
"provisioning host", while the other 3 nodes are OpenShift control
plane machines.
@@ -29,26 +29,40 @@ will need to support both published releases and [pre-release versions
(#12)](https://github.com/openshift-kni/install-scripts/issues/12) of
each of these.

### Preparation

To ease installation, a [prepared ISO](https://github.com/openshift-kni/install-scripts/issues/20)
can be used to install the "provisioning host". Using the prepared ISO addresses
the following:

1. [Creates an admin user
(#21)](https://github.com/openshift-kni/install-scripts/issues/21)
with passwordless sudo on the provisioning host.
1. Ensures the provisioning host has all [required software
installed](https://github.com/openshift-kni/install-scripts/blob/master/01_install_requirements.sh).
1. Applies any [configuration changes to the provisioning
host](https://github.com/openshift-kni/install-scripts/blob/master/02_configure_host.sh)
that are required for the OpenShift installer. For example, it creates
the `default` libvirt storage pool and the `baremetal` and
`provisioning` bridges.

Note: If not using the prepared ISO, optional scripts that handle the
above prerequisites may be executed.
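For reference, the host configuration described above might translate into commands like the following. This is an illustrative sketch only, not the contents of `02_configure_host.sh`; the pool path and connection names are assumptions:

```sh
# Create and autostart the `default` libvirt storage pool.
virsh pool-define-as default dir --target /var/lib/libvirt/images
virsh pool-start default
virsh pool-autostart default

# Create the `baremetal` bridge (the `provisioning` bridge is analogous).
nmcli con add type bridge ifname baremetal con-name baremetal
```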


### Installation and Validation

The deployment process will use scripts to perform the following:

1. [Validate any environment requirements
(#22)](https://github.com/openshift-kni/install-scripts/issues/22) -
the [bare metal IPI network
requirements](https://github.com/openshift/installer/blob/master/docs/user/metal/install_ipi.md#network-requirements)
are a good example of environment requirements.
1. [Prepare the node information
(#19)](https://github.com/openshift-kni/install-scripts/issues/19)
required for the [bare metal IPI
@@ -58,7 +72,7 @@ The scripts will:
1. Complete some post-install configuration - including [machine/node
linkage
(#14)](https://github.com/openshift-kni/install-scripts/issues/14),
and [configuring a tagged storage VLAN on the interface connected to the `Internal` network on
the OpenShift nodes
(#4)](https://github.com/openshift-kni/install-scripts/issues/4).
1. [Deploy OCS
@@ -70,14 +84,18 @@ The scripts will:
for image storage.
1. [Deploy
CNV](https://github.com/openshift-kni/install-scripts/blob/master/CNV/deploy-cnv.sh). [Configure
a bridge on the `External` interface on OpenShift nodes
(#18)](https://github.com/openshift-kni/install-scripts/issues/18)
to allow VMs to access this network.
1. Temporarily install Ripsaw, carry out some performance tests, and
capture the results.
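The tagged storage VLAN step above might, as a sketch, use nmcli like this. The VLAN ID `20` and device name `ens2f0` are placeholders, not values taken from the scripts:

```sh
# Create a tagged VLAN interface on top of the storage-facing NIC.
nmcli con add type vlan con-name storage-vlan dev ens2f0 id 20
```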
The following environment-specific information will be required for
each installation. On a properly configured and prepared cluster,
the following items will be discovered:

1. A pull secret - used to access OpenShift content - and an SSH key
that will be used to authenticate SSH access to the control plane
@@ -91,41 +109,48 @@ each installation:
for API, Ingress, and DNS access.
1. The BMC IPMI addresses and credentials for the 3 control plane
machines.
1. If the provisioning host is detected to be out of sync with a time
   source, configure the 25G switch as a source via the DHCP service on
   the `Storage` network. An optional script to set a time source for
   the switch will be provided.
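Checking time synchronization on a RHEL-8 provisioning host can be done with chrony. The fragment below is illustrative only; the switch address `192.0.2.1` is a placeholder:

```sh
chronyc tracking   # shows the current reference source and sync status

# /etc/chrony.conf fragment pointing at the switch as the time source:
#   server 192.0.2.1 iburst
```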

## Provisioning Host Setup

The provisioning host must be a RHEL-8 machine.

### For a host not installed using the ISO:
In the OpenShift subdirectory, create a copy of `config_example.sh`, using
the existing user as part of the file name. For example,
`config_<username>.sh`. Once the file has been created, set the required
`PULL_SECRET` variable within the shell script.

To install some required packages and to configure `libvirt` and the
`provisioning` and `baremetal` bridges, run from the top directory:

```sh
make prep
```

Note:

1. This ensures that a recent 4.2 build of `oc` is installed. The
minimum required version is hardcoded in the script.

### For all nodes, create the cluster
```sh
make OpenShift
```

Contributor
Can you add in the README file a note about using a LOGLEVEL environment variable to increase the loglevel output of the openshift-install command? Such as:

Note: In order to increase the log level output of openshift-install, a LOGLEVEL environment variable can be used as:

export LOGLEVEL="debug"
make OpenShift

This will fix Russel's comment in #51

I can do a follow up PR if you don't want to add it into this big PR tho :)

Thanks!

Collaborator (author)

I'll attempt to get it in prior to @markmc's review; otherwise I may ask you to do the follow-up.

Note:
In order to increase the log level output of `openshift-install`, a `LOGLEVEL` environment variable can be used:
```sh
export LOGLEVEL="debug"
make OpenShift
```


## Container Native Virtualization (CNV)
The installation of CNV related operators is managed by a *meta operator*
called the [HyperConverged Cluster Operator](https://github.com/kubevirt/hyperconverged-cluster-operator) (HCO).
Deploying with the *meta operator* will launch operators for KubeVirt,
Containerized Data Importer (CDI), Cluster Network Addons (CNA),
Common templates (SSP), Node Maintenance Operator (NMO) and Node Labeller Operator.
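Once the HCO finishes deploying, the pods for the operators it launched can be inspected with `oc`. The namespace name below is an assumption, as it has varied across HCO versions:

```sh
oc get pods -n kubevirt-hyperconverged
```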

### Deploy OCS via operator

_Coming Soon_


### Deploy the HCO through the OperatorHub

The HyperConverged Cluster Operator is listed in the Red Hat registry,
@@ -135,3 +160,6 @@ and selecting the HyperConverged Cluster Operator.
If you want to use the CLI, we provide a [script](CNV/deploy-cnv.sh)
that automates all the steps to the point of having a fully functional
CNV deployment.