Configuring a custom domain for applications in OpenShift 4

Environment

  • Red Hat OpenShift Container Platform (RHOCP)
    • 4
  • Additional domain and certificates other than the default domain

Issue

  • Support a custom domain with an additional certificate in OpenShift 4.
  • Can the default domain for applications be modified post-installation?
  • Automatically create new routes with a custom domain instead of the default domain.

Resolution

Starting with OpenShift 4.7, it is possible to configure an alternative application domain using the appsDomain field of the cluster Ingress configuration resource (ingresses.config.openshift.io).
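As a minimal sketch, the appsDomain field can be set by patching the cluster Ingress configuration. The domain apps.example.com below is a placeholder; substitute the custom domain for which the additional certificate was issued:

```shell
# Set an alternative domain for newly created routes (placeholder domain shown).
# Routes created after this change are generated in the appsDomain;
# existing routes keep the original default domain.
$ oc patch ingresses.config.openshift.io cluster --type=merge \
    --patch '{"spec":{"appsDomain":"apps.example.com"}}'
```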


For more information, see the following related solutions:

  • [Is it possible to update the Openshift Ingress domain?](https://access.redhat.com/solutions/5749041)
  • [How to change the domain name of OpenShift 4 cluster post installation?](https://access.redhat.com/solutions/4853401)
  • [Configure the custom ingress certificate with FQDNs in subjectAltName instead of wild-card subdomain](https://access.redhat.com/solutions/5532581)
  • [Configure Internal/External Ingress Controller sharding on an existing OpenShift 4 cluster](https://access.redhat.com/solutions/4981211)

Root Cause

While the default domain cannot currently be modified post-installation, it is possible to specify an alternative cluster domain for applications using the appsDomain option.

Diagnostic Steps

  • The ingress controller is degraded and shows a status error such as:

      Some ingresscontrollers are degraded: ingresscontroller "custom" is degraded:
          DegradedConditions: One or more other status conditions indicate a degraded
          state: PodsScheduled=False (PodsNotScheduled: Some pods are not scheduled: Pod
          "router-custom-abc13243-sdab9" cannot be scheduled: 0/10 nodes are available:
          7 node(s) didn't match Pod's node affinity, 3 node(s) had taint {node-role.kubernetes.io/master:
          }, that the pod didn't tolerate. Pod "router-custom-abc13243-xrvs2" cannot
          be scheduled: 0/25 nodes are available: 22 node(s) didn't match Pod's node
          affinity, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod
          didn't tolerate. Make sure you have sufficient worker nodes.), DeploymentAvailable=False
          (DeploymentUnavailable: The deployment has Available status condition set to
          False (reason: MinimumReplicasUnavailable) with message: Deployment does not
          have minimum availability.), DeploymentReplicasMinAvailable=False (DeploymentMinimumReplicasNotMet:
          0/2 of replicas are available, max unavailable is 1)
    

    This means that the node selector configured on the ingress controller does not match the labels on any schedulable node. In the example above, the label ingress-custom-controller: "true" needs to be added to the worker nodes. Note that labels are applied with oc label, not oc annotate:

      $ oc label node foo ingress-custom-controller='true'
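
    As a sketch, the node selector that the controller expects can be read from its nodePlacement spec, and the currently matching nodes listed. The controller name custom follows the degraded-status example above:

```shell
# Show the node selector the "custom" ingress controller requires;
# router pods only schedule on nodes carrying matching labels.
$ oc get ingresscontroller custom -n openshift-ingress-operator \
    -o jsonpath='{.spec.nodePlacement.nodeSelector}'

# List the nodes that already carry the expected label.
$ oc get nodes -l ingress-custom-controller=true
```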
    
Category

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.