Commit 6fe7149: Add template for local storage (2 files changed, +252 -0)
# OpenShift Local Volume Examples [WIP]

OpenShift allows local devices to be used as PersistentVolumes. This feature is
alpha in 3.7 and must be explicitly enabled on all OpenShift masters,
controllers and nodes (see below).

## Alpha disclaimer

Local Volumes are an alpha feature in 3.7. Several manual steps are needed to
enable, configure and deploy the feature. It may be reworked in the future and
will probably be automated by openshift-ansible.

## Overview

Local volumes are PersistentVolumes representing locally mounted filesystems.
In the future they may be extended to raw block devices.

The main difference between a HostPath and a Local volume is that Local
PersistentVolumes carry a special annotation that makes any Pod using the PV
be scheduled on the same node where the local volume is mounted.

In addition, Local volumes come with a provisioner that automatically creates
PVs for locally mounted devices. This provisioner is currently very limited:
it only scans pre-configured directories. It cannot dynamically provision
volumes; that may be implemented in a future release.

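For illustration, a Local PV might look like the following sketch. The annotation name matches the 1.7 alpha API; the node name, size and PV name are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-disk1                 # hypothetical name
  annotations:
    # Alpha node-affinity annotation: pins Pods using this PV to node-1.
    volume.alpha.kubernetes.io/node-affinity: |
      { "requiredDuringSchedulingIgnoredDuringExecution": {
          "nodeSelectorTerms": [
            { "matchExpressions": [
                { "key": "kubernetes.io/hostname",
                  "operator": "In",
                  "values": ["node-1"] } ] } ] } }
spec:
  capacity:
    storage: 100Gi                     # hypothetical size
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-ssd
  local:
    path: /mnt/local-storage/ssd/disk1
```
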
## Enabling Local Volumes

All OpenShift masters and nodes must run with the feature gate
`PersistentLocalVolumes=true` enabled. Edit `master-config.yaml` on all master
hosts and make sure that `apiServerArguments` and `controllerArguments` enable
the feature:

```yaml
apiServerArguments:
  feature-gates:
  - PersistentLocalVolumes=true
  ...

controllerArguments:
  feature-gates:
  - PersistentLocalVolumes=true
  ...
```

Similarly, the feature needs to be enabled on all nodes. Edit `node-config.yaml`
on all nodes:

```yaml
kubeletArguments:
  feature-gates:
  - PersistentLocalVolumes=true
  ...
```

## Mounting Local Volumes

While the feature is in alpha, all local volumes must be manually mounted
before they can be consumed by Kubernetes as PersistentVolumes.

All volumes must be mounted into
`/mnt/local-storage/<storage-class-name>/<volume>`. It's up to the
administrator to create the local devices as needed (using any method such as
disk partitions, LVM, ...), create suitable filesystems on them and mount them,
either by a script or by `/etc/fstab` entries.

Example `/etc/fstab` entries:
```
# device name   # mount point                  # FS    # options  # extra
/dev/sdb1       /mnt/local-storage/ssd/disk1   ext4    defaults   1 2
/dev/sdb2       /mnt/local-storage/ssd/disk2   ext4    defaults   1 2
/dev/sdb3       /mnt/local-storage/ssd/disk3   ext4    defaults   1 2
/dev/sdc1       /mnt/local-storage/hdd/disk1   ext4    defaults   1 2
/dev/sdc2       /mnt/local-storage/hdd/disk2   ext4    defaults   1 2
```

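The expected directory layout can be sketched as follows. This is only an illustration: it uses a temporary base directory instead of `/mnt/local-storage`, and on a real node each subdirectory would be a mounted device rather than a plain directory:

```shell
#!/bin/sh
# Create the layout the provisioner scans: <base>/<storage-class>/<volume>.
# BASE stands in for /mnt/local-storage; /tmp is used here for illustration.
BASE="${BASE:-/tmp/local-storage}"
for class in ssd hdd; do
  for vol in disk1 disk2; do
    mkdir -p "$BASE/$class/$vol"
    # On a real node: create a filesystem and mount the device here instead,
    # e.g. mount /dev/sdb1 "$BASE/ssd/disk1"
  done
done
ls "$BASE"
```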
## Prerequisites

While not strictly required, it's desirable to create a standalone namespace
for the local volume provisioner and its configuration:

```bash
oc new-project local-storage
```

## Local provisioner configuration

OpenShift depends on an external provisioner to create PersistentVolumes for
local devices and to clean them up when they're no longer needed so they can
be used again.

This external provisioner needs to be configured via a ConfigMap to know which
directory represents which StorageClass:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-config
data:
  "local-ssd": | <1>
    {
      "hostDir": "/mnt/local-storage/ssd", <2>
      "mountDir": "/mnt/local-storage/ssd" <3>
    }
  "local-hdd": |
    {
      "hostDir": "/mnt/local-storage/hdd",
      "mountDir": "/mnt/local-storage/hdd"
    }
```

* <1> Name of the StorageClass.
* <2> Path to the directory on the host. It must be a subdirectory of `/mnt/local-storage`.
* <3> Path to the directory in the provisioner pod. Using the same directory structure as on the host is strongly suggested.

With this configuration the provisioner will create:
* One PersistentVolume with StorageClass `local-ssd` for every subdirectory of `/mnt/local-storage/ssd`.
* One PersistentVolume with StorageClass `local-hdd` for every subdirectory of `/mnt/local-storage/hdd`.

This configuration must be created before the provisioner is deployed by the
template below!

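The mapping from subdirectories to PVs can be illustrated with a small simulation. This is not the provisioner's actual code; a fixed `/tmp` directory stands in for `/mnt/local-storage`:

```shell
#!/bin/sh
# Simulate the provisioner's scan: one PV per subdirectory of each class dir.
BASE=/tmp/local-scan-demo
mkdir -p "$BASE/ssd/disk1" "$BASE/ssd/disk2" "$BASE/hdd/disk1"
for class in ssd hdd; do
  for dir in "$BASE/$class"/*/; do
    echo "PV with StorageClass local-$class for $dir"
  done
done
```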
## Local provisioner deployment

Note that all local devices must be mounted and the ConfigMap with storage
classes and their respective directories must be created before starting the
provisioner!

The provisioner is installed from an OpenShift template that's available at https://raw.githubusercontent.com/jsafrane/origin/local-storage/examples/storage-examples/local-examples/local-storage-provisioner-template.yaml.

1. Prepare a service account that is able to run pods as the root user and use
   HostPath volumes:
   ```bash
   oc create serviceaccount local-storage-admin
   oc adm policy add-scc-to-user hostmount-anyuid -z local-storage-admin
   ```
   Root privileges are necessary so the provisioner pod can delete any content
   on the local volumes. HostPath is necessary to access `/mnt/local-storage`
   on the host.

2. Install the template:
   ```bash
   oc create -f https://raw.githubusercontent.com/jsafrane/origin/local-storage/examples/storage-examples/local-examples/local-storage-provisioner-template.yaml
   ```

3. Instantiate the template. Specify values of the `CONFIGMAP`,
   `SERVICE_ACCOUNT` and `NAMESPACE` parameters:
   ```bash
   oc new-app -p CONFIGMAP=local-volume-config -p SERVICE_ACCOUNT=local-storage-admin -p NAMESPACE=local-storage local-storage-provisioner
   ```
   See the template for other configurable options.

   The template creates a DaemonSet that runs a Pod on every node. The Pod
   watches the directories specified in the ConfigMap and creates
   PersistentVolumes for them automatically.

Note that the provisioner runs as root to be able to clean up the directories
when the respective PersistentVolume is released and all its data needs to be
removed.

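Once the provisioner has created PVs, a Pod can consume one through an ordinary PersistentVolumeClaim that requests the matching StorageClass. A hypothetical example (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-ssd
```

Any Pod using this claim will be scheduled to the node where the bound local volume is mounted, because of the node-affinity annotation on the PV.
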
## Adding new devices

Adding a new device requires several manual steps:

1. Stop the DaemonSet with the provisioner.
2. Create a subdirectory in the right directory on the node with the new device
   and mount it there.
3. Start the DaemonSet with the provisioner.

Omitting any of these steps may result in a wrong PV being created!
`local-storage-provisioner-template.yaml`:

```yaml
apiVersion: v1
kind: Template
metadata:
  name: "local-storage-provisioner"
objects:

# $SERVICE_ACCOUNT must be able to manipulate PVs
- apiVersion: v1
  kind: ClusterRoleBinding
  metadata:
    name: local-storage:provisioner-pv-binding
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:persistent-volume-provisioner
  subjects:
  - kind: ServiceAccount
    name: ${SERVICE_ACCOUNT}
    namespace: ${NAMESPACE}

# $SERVICE_ACCOUNT must be able to list nodes
- apiVersion: v1
  kind: ClusterRoleBinding
  metadata:
    name: local-storage:provisioner-node-binding
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:node
  subjects:
  - kind: ServiceAccount
    name: ${SERVICE_ACCOUNT}
    namespace: ${NAMESPACE}

# DaemonSet with provisioners
- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: local-volume-provisioner
  spec:
    template:
      metadata:
        labels:
          app: local-volume-provisioner
      spec:
        containers:
        - env:
          - name: MY_NODE_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          - name: MY_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: VOLUME_CONFIG_NAME
            value: ${CONFIGMAP}
          image: quay.io/external_storage/local-volume-provisioner:v1.0.1
          name: provisioner
          securityContext:
            runAsUser: 0
          volumeMounts:
          - mountPath: /mnt/local-storage
            name: local-storage
        serviceAccountName: "${SERVICE_ACCOUNT}"
        volumes:
        - hostPath:
            path: /mnt/local-storage
          name: local-storage

parameters:
- name: SERVICE_ACCOUNT
  description: Name of a service account that is able to run pods as root and use HostPath volumes.
  required: true
  value: local-storage-admin
- name: NAMESPACE
  description: Name of the namespace where local provisioners run.
  required: true
  value: local-storage
- name: CONFIGMAP
  description: Name of the ConfigMap with local provisioner configuration.
  required: true
  value: local-volume-config
```
