Attach the default NIC to a bridge while using OVN Kubernetes

Solution Unverified - Updated

Environment

  • OpenShift Container Platform 4.11
  • OpenShift Virtualization 4.11

An OpenShift cluster with one or more nodes, where each node has only a single usable NIC that's currently used for management traffic and by OVN Kubernetes (the default CNI in 4.11).

Issue

Given a node with a single NIC, we want to use this NIC for both management of the node and to connect VMs hosted on this node to outside networks.

Typically, this could be done by attaching the NIC to a Linux bridge and moving the NIC's original IP configuration onto the bridge. However, this is not possible on OpenShift when OVN Kubernetes is used as the default CNI.

This article describes the problem, offers a few known workarounds, and suggests a long-term solution.

Resolution

There are several ways to work around this limitation, each with its own pros and cons. The end of this section describes the long-term plan for a proper solution to this scenario.

Workaround 1: Second NIC

The cleanest solution is to connect a second NIC to the host and use it for the Linux bridge. To do so, follow the regular process: Connecting to the network through the network attachment definition.
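As a sketch of that process, the NetworkAttachmentDefinition below references a Linux bridge on the second NIC. The bridge name br1 and the namespace are assumptions; they must match the bridge you created (for example, via a NodeNetworkConfigurationPolicy) on top of the second NIC:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: br1-network
  annotations:
    # Schedule VMs only on nodes where the br1 bridge exists
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "br1-network",
      "type": "cnv-bridge",
      "bridge": "br1"
    }
```

VM interfaces can then be attached to this network through the VM's spec, keeping all VM traffic off the management NIC.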

Workaround 2: VLAN interfaces

While the single NIC cannot be attached to another bridge directly, it is possible to create VLAN sub-interfaces on it without affecting the existing functionality.

Say that the setup looks like this:

[br-ex]--[eth0]

Then by using VLAN sub-interfaces, we can achieve this:

           [br-ex]--[eth0]
[br10]--[eth0.100]--/ / /
[br20]--[eth0.200]---/ /
[br30]--[eth0.300]----/

Bridges br10, br20, and br30 can then be referenced from a regular bridge NetworkAttachmentDefinition. Note that with this setup, the vlan attribute must not be set in the NetworkAttachmentDefinition, since the traffic is already tagged by the VLAN sub-interface.

The drawbacks of this workaround are that each VLAN requires its own VLAN sub-interface and bridge, which may have a performance impact at larger scale. Compared to regular VLAN filtering done on the bridge, it may also increase latency. Finally, this setup does not allow connecting VMs to the native network (without any VLAN tagging).

To configure the VLAN sub-interface and its associated bridge, create the following NodeNetworkConfigurationPolicy:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br10-policy 
spec:
  desiredState:
    interfaces:
      - name: br10
        type: linux-bridge 
        state: up 
        ipv4:
          enabled: false 
        bridge:
          options:
            stp:
              enabled: false 
          port:
            - name: eth0.100
      - name: eth0.100 
        type: vlan 
        state: up 
        vlan:
          base-iface: eth0
          id: 100 

Then reference the bridge br10 from a regular NetworkAttachmentDefinition.
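A minimal sketch of such a NetworkAttachmentDefinition is shown below; the name br10-network is an assumption. Note the absence of a vlan key, since the eth0.100 sub-interface already tags the traffic:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: br10-network
  annotations:
    # Schedule VMs only on nodes where the br10 bridge exists
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br10
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "br10-network",
      "type": "cnv-bridge",
      "bridge": "br10"
    }
```

Repeat the same pattern (br20, br30, ...) for each additional VLAN you need to expose to VMs.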

Native OVN Kubernetes solution

In the long term, we would like to introduce native support for secondary networks in OVN Kubernetes. With this, it would be possible to request additional networks through a NetworkAttachmentDefinition, and either have such a network tunneled over the SDN or connect it to a physical network available on the host. One such physical network would be the one behind br-ex. This connection to a local physical network could be done with or without VLAN tagging.

This solution is tracked via https://issues.redhat.com/browse/SDN-3534.

Root Cause

OVN Kubernetes attaches to the default NIC directly to handle north-south traffic. Since a NIC can be attached to only a single bridge at a time, this prevents us from using the NIC for another bridge.

In more detail: when an OpenShift node is initialized, OVN Kubernetes finds its default management interface (a NIC, a bond, or a VLAN) and creates an Open vSwitch bridge, br-ex, on top of it. br-ex then takes over the IP configuration of the original interface and is used for the host's management traffic and for the north-south traffic of the SDN managed by OVN Kubernetes.


This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.