What is the correct method for adding a persistent rule to iptables on an OpenShift 3.x Master or Node?
Environment
- OpenShift Container Platform 3.x
Issue
- The OpenShift SDN adds iptables rules that are not meant to be persistent. How do I add a rule without using the `iptables-save` command?
- What are the best steps to add a rule to the iptables chain OS_FIREWALL_ALLOW that OpenShift configures during installation?
Resolution
Working with iptables

Administrators should work with iptables by adding rules manually to the `/etc/sysconfig/iptables` file. These steps are needed because the openshift-sdn adds the necessary iptables rules based on endpoints and services that constantly change in an OpenShift cluster.

1. Create and add the rule to memory using the `iptables` command:

```
# iptables -A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 9000 -j ACCEPT
```

2. Manually add the rule to `/etc/sysconfig/iptables` so that the rule is persistent across reboots. Add the following line alongside the other rules saved for the "OS_FIREWALL_ALLOW" chain:

```
-A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 9000 -j ACCEPT
```

NOTE: The steps above show adding a rule to the "OS_FIREWALL_ALLOW" chain. It is only an example of the steps needed to add an iptables rule without using the `iptables-save` command.
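The manual edit above can also be scripted. The following is a minimal sketch (not part of the article) of a hypothetical `persist_rule` helper that appends a rule to the saved-rules file only if it is not already present; it assumes the filter table's `COMMIT` is the first `COMMIT` in the file, which is true for the default `/etc/sysconfig/iptables` layout:

```shell
#!/bin/sh
# Sketch: persist an iptables rule by editing the saved-rules file directly,
# instead of running iptables-save. persist_rule and its behavior are an
# illustration, not an OpenShift-provided tool.
persist_rule() {
    rules_file=$1
    rule=$2
    # Keep the edit idempotent: do nothing if the rule is already saved.
    if grep -qF -- "$rule" "$rules_file"; then
        echo "rule already present in $rules_file"
        return 0
    fi
    # Insert the rule just before the first COMMIT (the filter table's
    # COMMIT in the default /etc/sysconfig/iptables file).
    sed -i "0,/^COMMIT/s|^COMMIT|$rule\nCOMMIT|" "$rules_file"
    echo "rule added to $rules_file"
}

# Example (run as root on a real host):
#   persist_rule /etc/sysconfig/iptables \
#     '-A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 9000 -j ACCEPT'
```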
If you are updating an OpenShift node (non-master), follow these steps:

1. Mark the host unschedulable so that no new pods are placed onto it:

```
# oc adm manage-node <node_name> --schedulable=false
```

2. Migrate the pods from the host:

```
# oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
```

3. Restart the following services:

```
# systemctl restart iptables.service
# systemctl restart docker
# systemctl restart atomic-openshift-node.service
```

4. Configure the host to be schedulable again:

```
# oc adm manage-node <node_name> --schedulable=true
```
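The four node-update steps above can be sketched as a single helper. This is a hypothetical `update_node` function, not an OpenShift-provided script; it assumes `oc` is logged in with cluster-admin rights and that the `systemctl` commands run on the node being updated:

```shell
#!/bin/sh
# Sketch of the node-update sequence from the steps above.
set -e  # stop at the first failing step

update_node() {
    node=$1
    # 1. Stop new pods from landing on the host.
    oc adm manage-node "$node" --schedulable=false
    # 2. Move the existing pods elsewhere.
    oc adm drain "$node" --force --delete-local-data --ignore-daemonsets
    # 3. Restart the services so the new iptables rules take effect.
    systemctl restart iptables.service
    systemctl restart docker
    systemctl restart atomic-openshift-node.service
    # 4. Put the node back into rotation.
    oc adm manage-node "$node" --schedulable=true
}

# Example:
#   update_node node1.example.com
```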
Working with the OpenShift advanced installer up to OpenShift Container Platform 3.6 (roles were changed after 3.6, so the steps below no longer work)

The second way an iptables rule can be added to the "OS_FIREWALL_ALLOW" chain is to use Ansible: create a wrapper playbook that uses the Ansible role "os_firewall" (already included in the openshift-ansible package) to add the rules and includes the OpenShift installer playbook. The following example playbook allows traffic over TCP port 9091:

```
- name: Open firewall ports
  hosts: all
  become: yes
  vars:
    os_firewall_allow:
      - service: My monitoring agent at port 9091
        port: 9091/tcp
  roles:
    - { role: '/usr/share/ansible/openshift-ansible/playbooks/byo/roles/os_firewall' }
```
Assuming the file has been saved to /root/my_openshift_playbook_wrapper.yml and the Ansible inventory file is /etc/ansible/hosts, the playbook can be used as follows:
```
# ansible-playbook /root/my_openshift_playbook_wrapper.yml
```
To verify that it worked correctly, you can list rules on any of the cluster members as follows:
```
# iptables -L OS_FIREWALL_ALLOW -n
Chain OS_FIREWALL_ALLOW (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:9091
...
```
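The verification step above can also be automated. The following is a small sketch (a hypothetical `port_open` helper, not part of OpenShift) that checks the chain listing for a given destination port:

```shell
#!/bin/sh
# Sketch: check whether a dport rule appears in a chain listing produced by
# `iptables -L <chain> -n`, matching the article's verification output.
port_open() {
    # $1 = chain listing text, $2 = port number
    echo "$1" | grep -Eq "dpt:$2\$"
}

# Example on a live host (requires root):
#   listing=$(iptables -L OS_FIREWALL_ALLOW -n)
#   port_open "$listing" 9091 && echo "port 9091 is allowed"
```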
NOTE: Running the wrapper playbook will add the iptables rule to all members in the cluster.
Root Cause
OpenShift SDN needs to have full control of iptables rules in order for it to work correctly. However, it provides the OS_FIREWALL_ALLOW chain in the right place so that users can add custom rules.
IMPORTANT: This solution is valid only for RHOCP 3.x. It is not valid for OCP 4.x.
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.