Managing Custom Machine Config Pool in RHOCP 4
Environment
- Red Hat OpenShift Container Platform (RHOCP) 4
- Machine Config Pool
Issue
- Is there any way to create a custom Machine Config Pool without inheriting worker MCP?
- Is it possible to use the newly created custom Machine Config Pool without having a worker role?
- How to create a custom Machine Config Pool?
- How to delete a custom Machine Config Pool?
- How to remove a label from node?
Resolution
Disclaimer: Links contained herein to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
Custom pools
Custom pools are pools that inherit from the worker pool. They use any MachineConfig targeted for the worker pool but add the ability to deploy changes only targeted at the custom pool. Since a custom pool inherits from the worker pool, any change to the worker pool is going to roll out to the custom pool as well (like an OS update during an upgrade).
NOTES
- Custom pools on control plane nodes are not supported by Machine Config Operator (MCO).
- Custom pools inherit from the `worker` pool and use the `MachineConfigs` targeted for the `worker` pool. If a configuration needed for the `worker` pool is not valid for every custom pool, it is possible to configure 2 (or more) custom pools, one for each different configuration, and leave the `worker` pool as generic as possible.
- A node can be part of at most one pool. With a configuration like the one described above, it is possible to have 0 nodes assigned to the `worker` MCP, and that is OK as long as all nodes are assigned to an MCP that inherits from the `worker` one.
Read more about custom pools in the upstream Machine Config Operator documentation on github.com.
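Because a node belongs to exactly one pool, it can be useful to confirm how many nodes each pool currently owns before and after relabeling. The sketch below simulates the `oc` output with a here-doc (the `get_mcp_machinecount` helper and its sample data are illustrative, not part of the product); on a live cluster you would pipe the real command instead:

```shell
# Sketch: count nodes per MCP. On a live cluster you would run:
#   oc get mcp -o custom-columns=NAME:.metadata.name,MACHINES:.status.machineCount --no-headers
# Here the output is simulated so the parsing can be shown end to end.
get_mcp_machinecount() {
cat <<'EOF'
custom 2
master 3
worker 0
EOF
}
# Report any pool that currently owns zero nodes. Zero nodes in `worker`
# is acceptable as long as every node sits in a pool inheriting from it.
get_mcp_machinecount | awk '$2 == 0 {print $1 " has no nodes"}'
```

With the sample data above, only the `worker` pool is reported as empty.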
Creating a custom pool
>**Note:** creating an `infra` custom pool is the usual way to configure a custom pool for specific infrastructure workloads. For additional information about infrastructure nodes, please refer to [Infrastructure Nodes in OpenShift 4](https://access.redhat.com/solutions/5034771).
To create additional custom pools:
- The first step in creating a custom pool is labeling the node with a custom role; in this example, it will be `custom`:

  ```
  $ oc label node ip-192-168-130-218.example.internal node-role.kubernetes.io/custom=
  $ oc get nodes
  NAME                                  STATUS   ROLES           AGE   VERSION
  ip-192-168-130-218.example.internal   Ready    custom,worker   37m   v1.14.0+e020ea5b3
  ip-192-168-131-9.example.internal     Ready    master          43m   v1.14.0+e020ea5b3
  ip-192-168-134-237.example.internal   Ready    master          43m   v1.14.0+e020ea5b3
  ip-192-168-138-167.example.internal   Ready    worker          37m   v1.14.0+e020ea5b3
  ip-192-168-151-146.example.internal   Ready    master          43m   v1.14.0+e020ea5b3
  ip-192-168-152-59.example.internal    Ready    worker          37m   v1.14.0+e020ea5b3
  ```
- When creating custom pools, apply them to existing worker nodes first. To use the node as a purely `custom` node, remove the `worker` role as follows:

  ```
  $ oc label node ip-192-168-130-218.example.internal node-role.kubernetes.io/worker-
  ```
- This will not change the files on the node itself: the custom pool inherits from the worker pool by default. This means that if a new `MachineConfig` is added to update workers, a purely `custom` node will still get updated. However, workloads scheduled for workers will no longer be scheduled on this node, as it no longer has the `worker` label.
- Next, create a `MachineConfigPool` that contains both the `worker` role and the `custom` one as the `MachineConfig` selector, as follows:

  ```
  $ cat custom.mcp.yaml
  apiVersion: machineconfiguration.openshift.io/v1
  kind: MachineConfigPool
  metadata:
    name: custom
  spec:
    machineConfigSelector:
      matchExpressions:
        - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,custom]}
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/custom: ""
  $ oc create -f custom.mcp.yaml
  ```
- Check that the `custom` MCP has now been created:

  ```
  $ oc get mcp
  NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED
  custom   rendered-custom-6db67f47c0b205c26561b1c5ab74d79b   True      False      False
  master   rendered-master-7053d8fc3619388accc12c7759f8241a   True      False      False
  worker   rendered-worker-6db67f47c0b205c26561b1c5ab74d79b   True      False      False
  ```
- The example above makes a `custom` pool that contains all of the `MachineConfigs` used by the `worker` pool.
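Notice that the rendered configs for `custom` and `worker` share the same content hash suffix in the `oc get mcp` output above, since at this point both pools render the same set of `MachineConfigs`. A small local sketch of that comparison (the rendered-config names are taken from the example output; the prefix-stripping logic is illustrative):

```shell
# The rendered config name is "rendered-<pool>-<content hash>"; stripping
# the pool prefix leaves the content hash, which is identical for both
# pools while `custom` only carries worker-targeted MachineConfigs.
custom_cfg="rendered-custom-6db67f47c0b205c26561b1c5ab74d79b"
worker_cfg="rendered-worker-6db67f47c0b205c26561b1c5ab74d79b"
[ "${custom_cfg#rendered-custom-}" = "${worker_cfg#rendered-worker-}" ] \
  && echo "identical content"
```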
Deploying changes to a custom pool (optional)
- Deploying changes to a custom pool is just a matter of creating a `MachineConfig` that uses the custom pool name as the label (`custom` in the example). Adjust the Ignition version accordingly (refer to the most recent `MachineConfig` version in the cluster):

  ```
  $ cat custom.mc.yaml
  apiVersion: machineconfiguration.openshift.io/v1
  kind: MachineConfig
  metadata:
    labels:
      machineconfiguration.openshift.io/role: custom
    name: 51-custom
  spec:
    config:
      ignition:
        version: 2.2.0
      storage:
        files:
          - contents:
              source: data:,custom
            filesystem: root
            mode: 0644
            path: /etc/customtest
  $ oc create -f custom.mc.yaml
  ```
- The `custom` pool has now deployed the changes with the `custom`-targeted `MachineConfig`:

  ```
  $ oc get mcp
  NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED
  custom   rendered-custom-dfdfdf7e006f18cd5d29cae03f77948b   True      False      False
  master   rendered-master-7053d8fc3619388accc12c7759f8241a   True      False      False
  worker   rendered-worker-6db67f47c0b205c26561b1c5ab74d79b   True      False      False
  ```
- Now check that the file landed on the `custom` node:

  ```
  $ oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-daemon --field-selector "spec.nodeName=ip-192-168-130-218.example.internal"
  NAME                          READY   STATUS    RESTARTS   AGE
  machine-config-daemon-vxb4c   1/1     Running   2          43m
  $ oc rsh -n openshift-machine-config-operator machine-config-daemon-vxb4c chroot /rootfs cat /etc/customtest
  custom
  ```
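An alternative on-cluster check is `oc debug node/<node>`, and the expected file content can even be derived locally: for a plain (unencoded) `data:` URL like the one in the `MachineConfig` above, the payload is everything after the first comma. A sketch, with the live-cluster command left as a comment:

```shell
# Alternative on-cluster check (run as-is against a live cluster):
#   oc debug node/ip-192-168-130-218.example.internal -- chroot /host cat /etc/customtest
#
# Locally, derive the expected file content from the MachineConfig source.
# For an unencoded data: URL, the file payload is everything after the
# first comma.
source_url="data:,custom"
expected_content="${source_url#*,}"
echo "$expected_content"
# prints: custom
```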
Removing a custom pool
- Removing a custom pool requires first un-labeling each node:

  ```
  $ oc label node ip-192-168-130-218.example.internal node-role.kubernetes.io/custom-
  ```

  Note: A node must have a role at all times to function properly. If there are custom-only nodes, first relabel each node as a worker and only then remove the custom role label.

  ```
  $ oc get nodes
  NAME                                  STATUS   ROLES    AGE   VERSION
  ip-192-168-130-218.example.internal   Ready    worker   50m   v1.14.0+e020ea5b3
  ip-192-168-131-9.example.internal     Ready    master   56m   v1.14.0+e020ea5b3
  ip-192-168-134-237.example.internal   Ready    master   56m   v1.14.0+e020ea5b3
  ip-192-168-138-167.example.internal   Ready    worker   50m   v1.14.0+e020ea5b3
  ip-192-168-151-146.example.internal   Ready    master   56m   v1.14.0+e020ea5b3
  ip-192-168-152-59.example.internal    Ready    worker   50m   v1.14.0+e020ea5b3
  ```
- The MCO is then going to reconcile the node to the `worker` pool configuration:

  ```
  $ oc get mcp
  NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED
  custom   rendered-custom-dfdfdf7e006f18cd5d29cae03f77948b   True      False      False
  master   rendered-master-7053d8fc3619388accc12c7759f8241a   True      False      False
  worker   rendered-worker-6db67f47c0b205c26561b1c5ab74d79b   False     True       False
  ```
- As soon as the `worker` pool reconciles, it is possible to remove the `custom` MCP and any `MachineConfig` created for it:

  ```
  $ oc get mcp
  NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED
  custom   rendered-custom-dfdfdf7e006f18cd5d29cae03f77948b   True      False      False
  master   rendered-master-7053d8fc3619388accc12c7759f8241a   True      False      False
  worker   rendered-worker-6db67f47c0b205c26561b1c5ab74d79b   True      False      False
  $ oc delete mc 51-custom
  machineconfig.machineconfiguration.openshift.io "51-custom" deleted
  $ oc delete mcp custom
  machineconfigpool.machineconfiguration.openshift.io "custom" deleted
  ```
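Before deleting the custom MCP, it is worth confirming that no nodes are still assigned to it. A hedged sketch of such a pre-delete check; the `count` value is simulated here (after the node was relabeled back to `worker`), with the live-cluster query left as a comment:

```shell
# On a live cluster the count would come from the pool status:
#   count=$(oc get mcp custom -o jsonpath='{.status.machineCount}')
count=0   # simulated: the node has already been relabeled back to worker
if [ "$count" -eq 0 ]; then
  message="custom pool is empty; safe to delete"
else
  message="custom pool still owns $count node(s); relabel them first"
fi
echo "$message"
# prints: custom pool is empty; safe to delete
```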
Custom pool on control plane nodes
A custom pool on a node with the control plane/master role is not supported. `oc label node` will apply the new custom role to the target master node, but the MCO will not apply changes specific to the custom pool. The error can be seen in the Machine Config Controller pod logs. This behavior ensures that control plane nodes remain stable.
Understanding custom pool updates
A node can be part of at most one pool. The MCO rolls out updates for pools independently; for example, if there is an OS update or another change that affects all pools, normally one node from the master pool and one from the worker pool update at the same time. If a custom pool is added, one node from that pool will also try to roll out concurrently with the master and worker nodes.
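The number of nodes a pool updates concurrently can be raised per pool through the `MachineConfigPool` `spec.maxUnavailable` field (it defaults to 1 when unset). A hedged sketch: the patch payload is built and shown locally, and the live-cluster invocation is left as a comment:

```shell
# spec.maxUnavailable controls how many nodes of a pool may update at once
# (default 1). On a live cluster the patch would be applied with:
#   oc patch mcp custom --type merge -p "$patch"
patch='{"spec":{"maxUnavailable":2}}'
echo "$patch"
```

Raising `maxUnavailable` speeds up rollouts at the cost of more nodes being drained simultaneously, so size it against the workload's disruption budget.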
Root Cause
Custom pools need to inherit from the worker pool and cannot be applied to control plane nodes.
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.