Configuring and deploying gateway policies

Red Hat Connectivity Link 1.2

Secure, protect, and connect APIs on OpenShift

Red Hat OpenShift Documentation Team

Abstract

This guide explains how to use Connectivity Link policies on OpenShift to secure, protect, and connect an application API exposed by a Gateway based on Kubernetes Gateway API. This includes Gateways deployed on a single OpenShift cluster or distributed across multiple clusters.

Chapter 1. Configuring and deploying gateway policies

As a platform engineer or application developer, you can secure, protect, and connect an API exposed by a gateway that uses Gateway API by using Connectivity Link.

1.1. Secure, protect, and connect APIs on OpenShift Container Platform with Connectivity Link

This guide shows how you can use Connectivity Link on OpenShift Container Platform to secure, protect, and connect an API exposed by a Gateway that uses Kubernetes Gateway API. This guide applies to the platform engineer and application developer user roles in Connectivity Link.

Important

In multi-cluster environments, you must perform the following steps in each cluster individually, unless specifically excluded.

1.1.1. Connectivity Link capabilities in multicluster environments

You can use Connectivity Link capabilities in single or multiple OpenShift Container Platform clusters. The following features are designed to work across multiple clusters and in a single-cluster environment:

  • Multicluster ingress: Connectivity Link provides multicluster ingress connectivity, using DNS to bring traffic to your gateways based on a strategy defined in a DNSPolicy.
  • Global rate limiting: Connectivity Link can enable global rate limiting use cases when configured to use a shared Redis-based store for counters based on limits defined by a RateLimitPolicy.
  • Global auth: You can configure a Connectivity Link AuthPolicy to use external auth providers to ensure that different clusters exposing the same API can authenticate and allow in the same way.
  • Automatic TLS certificate generation: You can configure a TLSPolicy to automatically provision TLS certificates based on Gateway listener hosts by using integration with cert-manager and ACME providers such as Let’s Encrypt.
  • Integration with federated metrics stores: Connectivity Link has example dashboards and metrics for visualizing your gateways and observing traffic hitting those gateways across multiple clusters.

1.1.2. Connectivity Link user role workflows

  • Platform engineer: This guide shows how platform engineers can deploy gateways that provide secure communication and are protected and ready for use by application development teams to deploy APIs.

    Platform engineers can use Connectivity Link in clusters in different geographic regions to bring specific traffic to geo-located gateways. This approach reduces latency, distributes load, and protects and secures with global rate limiting and auth policies.

  • Application developer: This guide shows how application developers can override the Gateway-level global auth and rate limiting policies to configure application-level auth and rate limiting requirements for specific users.

1.2. Set up your environment

This section shows how you can set up your environment variables and deploy the example Toystore application on your OpenShift Container Platform cluster.

Prerequisites

  • Connectivity Link is installed on the OpenShift Container Platform cluster you are working with.
  • The OpenShift CLI (oc) is installed.
  • You are logged in to the OpenShift Container Platform cluster with write access to the namespaces you need to use.
  • You have a DNS zone available for use.
  • Optional. For rate limiting in a multicluster environment, you have installed Connectivity Link on more than one cluster. You also have a shared and accessible Redis-based datastore.
  • Optional. For observability, OpenShift Container Platform user workload monitoring is configured to remote-write to a central storage system.

Procedure

  1. Optional: Set the following environment variables:

    $ export KUADRANT_GATEWAY_NS=api-gateway \
      KUADRANT_GATEWAY_NAME=ingress-gateway \
      KUADRANT_DEVELOPER_NS=toystore \
      KUADRANT_AWS_ACCESS_KEY_ID=xxxx \
      KUADRANT_AWS_SECRET_ACCESS_KEY=xxxx \
      KUADRANT_AWS_DNS_PUBLIC_ZONE_ID=xxxx \
      KUADRANT_ZONE_ROOT_DOMAIN=example.com \
      KUADRANT_CLUSTER_ISSUER_NAME=self-signed

    These environment variables are described as follows:

    • KUADRANT_GATEWAY_NS: Namespace for your gateway in OpenShift Container Platform.
    • KUADRANT_GATEWAY_NAME: Name of your ingress gateway in OpenShift Container Platform.
    • KUADRANT_DEVELOPER_NS: Namespace for the example Toystore app in OpenShift Container Platform.
    • KUADRANT_AWS_ACCESS_KEY_ID: AWS key ID with access to manage your DNS zone.
    • KUADRANT_AWS_SECRET_ACCESS_KEY: AWS secret access key with permissions to manage your DNS zone.
    • KUADRANT_AWS_DNS_PUBLIC_ZONE_ID: AWS Route 53 zone ID for the Gateway. This is the ID of the hosted zone that is displayed in the AWS Route 53 console.
    • KUADRANT_ZONE_ROOT_DOMAIN: Root domain in AWS Route 53 associated with your DNS zone ID.
    • KUADRANT_CLUSTER_ISSUER_NAME: Name of the certificate authority or issuer for TLS certificates.

      Note

      If you prefer not to use environment variables, you can substitute your own values directly in the commands and YAML files in this guide.

  2. Create the namespace for the Toystore app as follows:

    $ oc create ns ${KUADRANT_DEVELOPER_NS}
  3. Deploy the Toystore app to the developer namespace:

    $ oc apply -f https://raw.githubusercontent.com/Kuadrant/Kuadrant-operator/main/examples/toystore/toystore.yaml -n ${KUADRANT_DEVELOPER_NS}
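
    Optionally, verify that the application is available before you continue. The following command assumes that the example manifest creates a Deployment named toystore:

    $ oc wait deployment/toystore -n ${KUADRANT_DEVELOPER_NS} --for=condition=Available --timeout=120s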

1.3. Set up a DNS provider secret

Your DNS provider supplies credentials to access the DNS zones that Connectivity Link can use to set up your DNS configuration. You must ensure that these credentials have access to only the DNS zones that you want Connectivity Link to manage with your DNSPolicy.

Note

You must create the following Secret in each cluster. If you add a new cluster, you must also create the Secret in that cluster.

Prerequisites

  • You installed Connectivity Link on one or more clusters.
  • If you plan to use rate-limiting in a multicluster environment, you have a shared Redis-based datastore.
  • You installed the OpenShift CLI (oc).
  • You have write access to the OpenShift Container Platform namespaces you need to work with.
  • You have access to external or on-premise DNS.

Procedure

  1. Create the namespace that the Gateway will be deployed in as follows:

    $ oc create ns ${KUADRANT_GATEWAY_NS}
  2. Create the secret credentials in the same namespace as the Gateway as follows:

    $ oc -n ${KUADRANT_GATEWAY_NS} create secret generic aws-credentials \
      --type=kuadrant.io/aws \
      --from-literal=AWS_ACCESS_KEY_ID=$KUADRANT_AWS_ACCESS_KEY_ID \
      --from-literal=AWS_SECRET_ACCESS_KEY=$KUADRANT_AWS_SECRET_ACCESS_KEY
  3. Before adding a TLS certificate issuer, create the secret credentials in the cert-manager namespace as follows:

    $ oc -n cert-manager create secret generic aws-credentials \
      --type=kuadrant.io/aws \
      --from-literal=AWS_ACCESS_KEY_ID=$KUADRANT_AWS_ACCESS_KEY_ID \
      --from-literal=AWS_SECRET_ACCESS_KEY=$KUADRANT_AWS_SECRET_ACCESS_KEY

1.4. Add a TLS certificate issuer

To secure communication to your Gateways, you must define a certificate authority as an issuer for TLS certificates.

Note

This example uses a self-signed certificate issuer for simplicity, but you can use any certificate issuer supported by cert-manager, such as Let’s Encrypt. In multicluster environments, you must add your TLS issuer in each OpenShift Container Platform cluster.

Prerequisites

  • You installed Connectivity Link on one or more clusters.
  • If you plan to use rate-limiting in a multicluster environment, you have a shared Redis-based datastore.
  • You installed the OpenShift CLI (oc).
  • You have write access to the OpenShift Container Platform namespaces you need to work with.
  • You have access to external or on-premise DNS.

Procedure

  1. Enter the following command to define a TLS certificate issuer:

    $ oc apply -f - <<EOF
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: ${KUADRANT_CLUSTER_ISSUER_NAME}
    spec:
      selfSigned: {}
    EOF
  2. Wait for the ClusterIssuer to become ready as follows:

    $ oc wait clusterissuer/${KUADRANT_CLUSTER_ISSUER_NAME} --for=condition=ready=true

1.5. Creating a gateway

As a platform engineer, create a gateway on your OpenShift Container Platform cluster to set up the infrastructure used by application developers. The following example assumes that you are using the OpenShift Container Platform Cluster Ingress Operator (CIO).

Important

When using the Gateway API custom resource definitions (CRDs) provided in OpenShift Container Platform 4.19 or newer, you must create a GatewayClass named openshift-default and specify a controllerName of openshift.io/gateway-controller/v1. For more details, see the Getting started with Gateway API for the Ingress Operator (OpenShift Container Platform documentation).

If you are using OpenShift Service Mesh on OpenShift Container Platform 4.19 and newer and you set the ISTIO_GATEWAY_CONTROLLER_NAMES variable to istio.io/gateway-controller during your Connectivity Link installation, then you can use the GatewayClass custom resource (CR) created by default by OpenShift Service Mesh. Make sure you use the corresponding spec.gatewayClassName value in your Gateway CR.

Prerequisites

  • Connectivity Link is installed on the OpenShift Container Platform cluster you are working with.
  • You set the ISTIO_GATEWAY_CONTROLLER_NAMES environment variable value to openshift.io/gateway-controller/v1 during your Connectivity Link installation.
  • You created a GatewayClass named openshift-default and specified a controllerName of openshift.io/gateway-controller/v1.
  • The OpenShift CLI (oc) is installed.
  • You are logged in to the OpenShift Container Platform cluster with write access to the namespaces you need to use.
  • You have a DNS zone available for use.

Procedure

  • Create a gateway that uses the OpenShift Container Platform CIO by running the following command:

    $ oc apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: ${KUADRANT_GATEWAY_NAME}
      namespace: ${KUADRANT_GATEWAY_NS}
      labels:
        kuadrant.io/gateway: "true"
    spec:
      gatewayClassName: openshift-default
      listeners:
      - allowedRoutes:
          namespaces:
            from: All
        hostname: "api.${KUADRANT_ZONE_ROOT_DOMAIN}"
        name: api
        port: 443
        protocol: HTTPS
        tls:
          certificateRefs:
          - group: ""
            kind: Secret
            name: api-${KUADRANT_GATEWAY_NAME}-tls
          mode: Terminate
    EOF
    Important

    In a multicluster environment, for Connectivity Link to balance traffic by using DNS across clusters, you must specify a gateway with a shared hostname. You can define this by using an HTTPS listener with a wildcard hostname based on the root domain.
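
    For example, a shared HTTPS listener with a wildcard hostname might look like the following fragment. The listener name and TLS secret name shown here are illustrative:

    - allowedRoutes:
        namespaces:
          from: All
      hostname: "*.${KUADRANT_ZONE_ROOT_DOMAIN}"
      name: wildcard
      port: 443
      protocol: HTTPS
      tls:
        certificateRefs:
        - group: ""
          kind: Secret
          name: wildcard-${KUADRANT_GATEWAY_NAME}-tls
        mode: Terminate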

Verification

  1. Check the status of your gateway by running the following command:

    $ oc get gateway ${KUADRANT_GATEWAY_NAME} -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Programmed")].message}'

    The statuses Accepted and Programmed mean that your gateway is valid and assigned an external address.

  2. Check the status of your HTTPS listener by running the following command:

    $ oc get gateway ${KUADRANT_GATEWAY_NAME} -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.listeners[0].conditions[?(@.type=="Programmed")].message}'

Next steps

  • Configure a TLS policy so that the HTTPS listener can accept traffic.

1.6. Configure your Gateway policies and HTTP route

While your Gateway is now deployed, it has no exposed endpoints and your HTTPS listener is not programmed. Next, take the following steps:

  • Define a TLSPolicy that leverages your CertificateIssuer to set up your HTTPS listener certificates.
  • Define an HTTPRoute for your Gateway to communicate with your backend application API.
  • Define an AuthPolicy to set up a default HTTP 403 response for any unprotected endpoints.
  • Define a RateLimitPolicy to set up a default, artificially low global limit to further protect any endpoints exposed by the Gateway.
  • Define a DNSPolicy with a load balancing strategy for your Gateway.
Important

In multicluster environments, you must perform the following steps in each cluster individually, unless specifically excluded.

Prerequisites

  • You installed Connectivity Link on one or more clusters.
  • If you plan to use rate-limiting in a multicluster environment, you have a shared Redis-based datastore.
  • You installed the OpenShift CLI (oc).
  • You have write access to the OpenShift Container Platform namespaces you need to work with.
  • You have access to external or on-premise DNS.
  • You created a gateway.

Procedure

  1. Set the TLSPolicy for your Gateway as follows:

    $ oc apply -f - <<EOF
    apiVersion: kuadrant.io/v1
    kind: TLSPolicy
    metadata:
      name: ${KUADRANT_GATEWAY_NAME}-tls
      namespace: ${KUADRANT_GATEWAY_NS}
    spec:
      targetRef:
        name: ${KUADRANT_GATEWAY_NAME}
        group: gateway.networking.k8s.io
        kind: Gateway
      issuerRef:
        group: cert-manager.io
        kind: ClusterIssuer
        name: ${KUADRANT_CLUSTER_ISSUER_NAME}
    EOF
  2. Check that your TLS policy has an Accepted and Enforced status as follows:

    $ oc get tlspolicy ${KUADRANT_GATEWAY_NAME}-tls -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

    This may take a few minutes depending on the TLS provider, for example, Let’s Encrypt.

1.6.1. Create an HTTP route for your application

Procedure

  1. Create an HTTPRoute for the example Toystore application as follows:

    $ oc apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: toystore
      namespace: ${KUADRANT_DEVELOPER_NS}
      labels:
        deployment: toystore
        service: toystore
    spec:
      parentRefs:
      - name: ${KUADRANT_GATEWAY_NAME}
        namespace: ${KUADRANT_GATEWAY_NS}
      hostnames:
      - "api.${KUADRANT_ZONE_ROOT_DOMAIN}"
      rules:
      - matches:
        - method: GET
          path:
            type: PathPrefix
            value: "/cars"
        - method: GET
          path:
            type: PathPrefix
            value: "/health"
        backendRefs:
        - name: toystore
          port: 80
    EOF

1.6.2. Set the default AuthPolicy

Procedure

  1. Set a default AuthPolicy with a deny-all setting for your Gateway as follows:

    $ oc apply -f - <<EOF
    apiVersion: kuadrant.io/v1
    kind: AuthPolicy
    metadata:
      name: ${KUADRANT_GATEWAY_NAME}-auth
      namespace: ${KUADRANT_GATEWAY_NS}
    spec:
      targetRef:
        group: gateway.networking.k8s.io
        kind: Gateway
        name: ${KUADRANT_GATEWAY_NAME}
      defaults:
        when:
        - predicate: "request.path != '/health'"
        rules:
          authorization:
            deny-all:
              opa:
                rego: "allow = false"
          response:
            unauthorized:
              headers:
                "content-type":
                  value: application/json
              body:
                value: |
                  {
                    "error": "Forbidden",
                    "message": "Access denied by default by the gateway operator. If you are the administrator of the service, create a specific auth policy for the route."
                  }
    EOF
  2. Check that your AuthPolicy has Accepted and Enforced status as follows:

    $ oc get authpolicy ${KUADRANT_GATEWAY_NAME}-auth -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

1.6.3. Set the default RateLimitPolicy

Procedure

  1. Set the default RateLimitPolicy with a low-limit setting for your Gateway as follows:

    $ oc apply -f  - <<EOF
    apiVersion: kuadrant.io/v1
    kind: RateLimitPolicy
    metadata:
      name: ${KUADRANT_GATEWAY_NAME}-rlp
      namespace: ${KUADRANT_GATEWAY_NS}
    spec:
      targetRef:
        group: gateway.networking.k8s.io
        kind: Gateway
        name: ${KUADRANT_GATEWAY_NAME}
      defaults:
        limits:
          "low-limit":
            rates:
            - limit: 1
              window: 10s
    EOF

    It might take a few minutes for the RateLimitPolicy to be applied depending on your cluster. The limit in this example is artificially low so that you can easily see it in action.

  2. Check that your RateLimitPolicy has Accepted and Enforced status as follows:

    $ oc get ratelimitpolicy ${KUADRANT_GATEWAY_NAME}-rlp -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

1.6.4. Set the DNS policy

Procedure

  1. Set the DNSPolicy for your Gateway as follows:

    $ oc apply -f - <<EOF
    apiVersion: kuadrant.io/v1
    kind: DNSPolicy
    metadata:
      name: ${KUADRANT_GATEWAY_NAME}-dnspolicy
      namespace: ${KUADRANT_GATEWAY_NS}
    spec:
      healthCheck:
        failureThreshold: 3
        interval: 1m
        path: /health
      loadBalancing:
        defaultGeo: true
        geo: GEO-NA
        weight: 120
      targetRef:
        name: ${KUADRANT_GATEWAY_NAME}
        group: gateway.networking.k8s.io
        kind: Gateway
      providerRefs:
      - name: aws-credentials # Secret created earlier
    EOF

    The DNSPolicy uses the DNS Provider Secret that you defined earlier. The geo in this example is GEO-NA, but you can change this to suit your requirements.

  2. Check that your DNSPolicy has status of Accepted and Enforced as follows:

    $ oc get dnspolicy ${KUADRANT_GATEWAY_NAME}-dnspolicy -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

    This might take a few minutes.

  3. Check the status of the DNS health checks that are enabled on your DNSPolicy as follows:

    $ oc get dnspolicy ${KUADRANT_GATEWAY_NAME}-dnspolicy -n ${KUADRANT_GATEWAY_NS} -o yaml

    These health checks flag a published endpoint as healthy or unhealthy based on the defined configuration. If an endpoint is unhealthy and has not yet been published to the DNS provider, it is not published. An already-published endpoint is unpublished only if it is part of a multi-value A record. In all cases, you can observe the health state in the DNSPolicy status.


1.6.5. Test your default rate limit and auth policies

You can use a curl command to test the default low-limit and deny-all policies for your Gateway.

Procedure

  • Enter the following curl command:

    $ while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null  "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

    You should see HTTP 403 responses.

1.7. Configure on-premise DNS with CoreDNS (Technology Preview)

Important

CoreDNS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat Connectivity Link uses a DNSPolicy to manage DNS records based on Gateway API resources. For on-premise DNS servers such as CoreDNS, direct integration might require custom controllers or elevated permissions, which can be complex and pose security risks.

To address this challenge, Connectivity Link supports DNS delegation. Instead of directly managing records on the authoritative on-premise DNS server, you configure that server to delegate a specific subdomain (for example, kuadrant.example.local) to CoreDNS instances managed by Connectivity Link.

The DNSPolicy can then interact with the CoreDNS provider within the OpenShift Container Platform cluster. This CoreDNS instance becomes authoritative for the delegated subdomain and manages the necessary DNS records (A, CNAME, and so on) for gateways within that subdomain.

The delegate field within the DNSPolicy configuration specifies which DNS provider (in this case, CoreDNS) handles the records for the targeted gateways.

This guide describes how to set up CoreDNS as a DNS provider for Connectivity Link in a multi-cluster, on-premise environment. This integration allows Connectivity Link to manage DNS entries within your internal network infrastructure.

Prerequisites

  • Red Hat Connectivity Link is installed on two separate OpenShift Container Platform clusters (primary and secondary).
  • The kubectl or oc command-line interface is installed and configured for access to both clusters.
  • You have administrator privileges on both OpenShift Container Platform clusters.
  • Your OpenShift Container Platform clusters support the LoadBalancer service type and allow UDP traffic on port 53, for example, by using MetalLB. For more information, see Load balancing with MetalLB.
  • You have access to configure your authoritative on-premise DNS server to delegate a subdomain.
  • Kustomize is installed.

Procedure

  1. Set up the primary cluster. Set the following environment variables for your primary cluster context:

    $ export CTX_PRIMARY=<primary_cluster_context_name>   # for example, kind-primary
    $ export KUBECONFIG=~/.kube/config                    # adjust the path if necessary
    $ export PRIMARY_CLUSTER_NAME=<primary_cluster_name>  # for example, primary
    $ export ONPREM_DOMAIN=<your_onprem_domain>           # for example, example.local
    $ export KUADRANT_SUBDOMAIN=kuadrant                  # subdomain to delegate
  2. Install CoreDNS using the Connectivity Link kustomization, which includes the required kuadrant plugin. Apply the following configuration to the primary cluster:

    $ kustomize build --enable-helm github.com/kuadrant/dns-operator/config/coredns?ref=v0.15.0 | kubectl apply --context ${CTX_PRIMARY} -f -
    Note

    The default CoreDNS Helm chart does not include the kuadrant plugin. You must use the Connectivity Link-provided kustomization, which bundles a customized CoreDNS build.

  3. Wait for the CoreDNS service to get an external IP address and store it:

    $ export COREDNS_IP_PRIMARY=$(kubectl --context $CTX_PRIMARY -n kuadrant-system get service <coredns-service-name> -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ echo "CoreDNS Primary IP: ${COREDNS_IP_PRIMARY}"

    You need this IP address later to configure delegation on your authoritative on-premises DNS server.

  4. Create a ConfigMap to define the authoritative zone for CoreDNS on the primary cluster. This minimal configuration enables the kuadrant plugin and GeoIP features.

    $ kubectl --context $CTX_PRIMARY apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns-kuadrant-config
      namespace: kuadrant-system
    data:
      Corefile: |
        ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}:53 {
            debug
            errors
            health {
                lameduck 5s
            }
            ready
            log
            geoip GeoLite2-City-demo.mmdb {
                edns-subnet
            }
            metadata
            kuadrant
        }
    EOF
    Note

    The geoip plugin in this example uses the GeoLite2-City-demo.mmdb database included for demonstration purposes. For production or accurate GeoIP routing, mount your licensed MaxMind GeoIP database into the CoreDNS pod and update the filename in the Corefile.

  5. Update the CoreDNS deployment to use the new configuration:

    $ kubectl --context $CTX_PRIMARY -n kuadrant-system patch deployment <coredns-deployment-name> --patch '{"spec":{"template":{"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"coredns-kuadrant-config","items":[{"key":"Corefile","path":"Corefile"}]}}]}}}}'
  6. Wait for the deployment rollout to complete:

    $ kubectl --context $CTX_PRIMARY -n kuadrant-system rollout status deployment/<coredns-deployment-name>
  7. Create the Kubernetes Secret that Connectivity Link uses to interact with CoreDNS. This secret specifies the zones this provider instance is authoritative for.

    $ kubectl create secret generic coredns-credentials \
      --namespace=kuadrant-system \
      --type=kuadrant.io/coredns \
      --from-literal=ZONES="${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}" \
      --context ${CTX_PRIMARY}
  8. On your authoritative on-premises DNS server, configure delegation for the ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} subdomain to the external IP addresses of the CoreDNS services running on your primary and secondary clusters ($COREDNS_IP_PRIMARY and $COREDNS_IP_SECONDARY). The specific steps depend on your DNS server software (for example, BIND, Windows DNS Server). You typically need to add NS (Name Server) records pointing the subdomain to the CoreDNS IP addresses. For example:

    ; Delegate kuadrant.example.local to CoreDNS instances
    $ORIGIN ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}.
    @       IN      SOA     ns1.${ONPREM_DOMAIN}. hostmaster.${ONPREM_DOMAIN}. (
                            2023102601 ; serial
                            7200       ; refresh (2 hours)
                            3600       ; retry (1 hour)
                            1209600    ; expire (2 weeks)
                            3600       ; minimum (1 hour)
                            )
            IN      NS      coredns-primary.${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}.
    
    coredns-primary   IN A ${COREDNS_IP_PRIMARY}

Verification

After configuring delegation, you can test that DNS resolution for the delegated subdomain works correctly by querying your authoritative DNS server for a record within the kuadrant subdomain. The query should be referred to, and answered by, one of the CoreDNS instances.
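
For example, assuming a record named api exists in the delegated zone, you can query your authoritative DNS server directly. The server address and record name here are illustrative:

    $ dig @<authoritative_dns_server_ip> api.${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} A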

Next steps

Create DNSPolicy resources in your OpenShift Container Platform clusters, referencing the coredns-credentials secret as the provider. Connectivity Link manages DNS records within the delegated ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} zone through the CoreDNS instances.
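
The following fragment sketches such a DNSPolicy, reusing the gateway name and namespace from the earlier examples. Verify the exact schema against your installed Connectivity Link version:

    apiVersion: kuadrant.io/v1
    kind: DNSPolicy
    metadata:
      name: ${KUADRANT_GATEWAY_NAME}-dnspolicy
      namespace: ${KUADRANT_GATEWAY_NS}
    spec:
      targetRef:
        name: ${KUADRANT_GATEWAY_NAME}
        group: gateway.networking.k8s.io
        kind: Gateway
      providerRefs:
      - name: coredns-credentials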

1.8. About token-based rate limiting with TokenRateLimitPolicy

Red Hat Connectivity Link provides the TokenRateLimitPolicy custom resource to enforce rate limits based on token consumption rather than the number of requests. This policy extends the Envoy Rate Limit Service (RLS) protocol with automatic token usage extraction. It is particularly useful for protecting Large Language Model (LLM) APIs, where the cost and resource usage correlate more closely with the number of tokens processed.

Unlike the standard RateLimitPolicy, which counts requests, TokenRateLimitPolicy counts tokens by extracting usage metrics from the body of the AI inference API response, allowing for finer-grained control over API usage based on actual workload.

1.8.1. How token rate limiting works

The TokenRateLimitPolicy tracks cumulative token usage per client. Before forwarding a request, it checks if the client has already exceeded their limit from previous usage. After the upstream responds, it extracts the actual token cost and updates the client’s counter.

The flow is as follows:

  1. On an incoming request, the gateway evaluates the matching rules and predicates from the TokenRateLimitPolicy resources.
  2. If the request matches, the gateway prepares the necessary rate limit descriptors and monitors the response.
  3. After receiving the response, the gateway extracts the usage.total_tokens field from the JSON response body.
  4. The gateway then sends a RateLimitRequest to Limitador, including the actual token count as a hits_addend.
  5. Limitador tracks the cumulative token usage and responds to the gateway with OK or OVER_LIMIT.
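
For example, an OpenAI-style inference response carries token usage in a usage object. The values here are illustrative:

    {
      "id": "chatcmpl-123",
      "model": "example-model",
      "usage": {
        "prompt_tokens": 900,
        "completion_tokens": 300,
        "total_tokens": 1200
      }
    }

In this case, the gateway sends a hits_addend of 1200 to Limitador, which adds 1200 tokens to the client’s cumulative counter.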

1.8.2. Key features and use cases

  • Enforces limits based on token usage by extracting the usage.total_tokens field from an OpenAI-style inference JSON response body.
  • Suitable for consumption-based APIs such as LLMs where the cost is tied to token counts.
  • Allows defining different limits based on criteria such as user identity, API endpoints, or HTTP methods.
  • Works with AuthPolicy to apply specific limits to authenticated users or groups.
  • Inherits functionalities from RateLimitPolicy, including defining multiple limits with different durations and using Redis for shared counters in multi-cluster environments.

1.8.3. Integrating with AuthPolicy

You can combine TokenRateLimitPolicy with AuthPolicy to apply token limits based on authenticated user identity. When an AuthPolicy successfully authenticates a request, it injects identity information that is used by the TokenRateLimitPolicy to select the appropriate limit.

For example, you can define different token limits for users belonging to 'free-tier' compared to 'premium-tier' groups, identified using claims in a JWT validated by AuthPolicy.
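
A limit keyed to such identity data might be sketched as follows. This fragment is illustrative only: the limits syntax is assumed to mirror RateLimitPolicy, and the predicate expressions and claim name depend on how your AuthPolicy exposes identity information:

    limits:
      "free-tier":
        rates:
        - limit: 10000
          window: 24h
        when:
        - predicate: "auth.identity.tier == 'free'"
      "premium-tier":
        rates:
        - limit: 100000
          window: 24h
        when:
        - predicate: "auth.identity.tier == 'premium'"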

1.9. Configure token-based rate limiting with TokenRateLimitPolicy

Red Hat Connectivity Link provides the TokenRateLimitPolicy custom resource to enforce rate limits based on token consumption rather than the number of requests. This policy extends the Envoy Rate Limit Service (RLS) protocol with automatic token usage extraction. It is particularly useful for protecting Large Language Model (LLM) APIs, where the cost and resource usage correlate more closely with the number of tokens processed.

Unlike the standard RateLimitPolicy which counts requests, TokenRateLimitPolicy counts tokens by extracting usage metrics in the body of the AI inference API call, allowing for finer-grained control over API usage based on actual workload.

1.9.1. How token rate limiting works

The TokenRateLimitPolicy tracks cumulative token usage per client. Before forwarding a request, it checks if the client has already exceeded their limit from previous usage. After the upstream responds, it extracts the actual token cost and updates the client’s counter.

The flow is as follows:

  1. On an incoming request, the gateway evaluates the matching rules and predicates from the TokenRateLimitPolicy resources.
  2. If the request matches, the gateway prepares the necessary rate limit descriptors and monitors the response.
  3. After receiving the response, the gateway extracts the usage.total_tokens field from the JSON response body.
  4. The gateway then sends a RateLimitRequest to Limitador, including the actual token count as a hits_addend.
  5. Limitador tracks the cumulative token usage and responds to the gateway with OK or OVER_LIMIT.

1.9.2. Key features and use cases

Token-based rate limiting provides the following features and use cases:

  • Enforces limits based on token usage by extracting the usage.total_tokens field from an OpenAI-style inference JSON response body.
  • Suits consumption-based APIs such as LLMs, where cost is tied to token counts.
  • Allows you to define different limits based on criteria such as user identity, API endpoints, or HTTP methods.
  • Works with AuthPolicy to apply specific limits to authenticated users or groups.
  • Inherits functionality from RateLimitPolicy, including defining multiple limits with different durations and using Redis for shared counters in multicluster environments.

1.9.3. Integrating with AuthPolicy

You can combine TokenRateLimitPolicy with AuthPolicy to apply token limits based on authenticated user identity. When an AuthPolicy successfully authenticates a request, it injects identity information, which the TokenRateLimitPolicy can then use to select the appropriate limit.

For example, you can define different token limits for users belonging to 'free-tier' versus 'premium-tier' groups, identified using claims in a JWT validated by AuthPolicy.
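
The group-based selection can be approximated in Python as follows. This is an illustrative sketch of how a CEL predicate such as auth.identity.groups.split(",").exists(g, g == "free") selects a limit; the tier names and token limits are example values.

```python
def matches_group(identity: dict, group: str) -> bool:
    """Approximate the CEL predicate
    auth.identity.groups.split(",").exists(g, g == group):
    the groups claim is a comma-separated string injected by AuthPolicy."""
    groups = identity.get("groups", "")
    return any(g == group for g in groups.split(","))

def select_limit(identity: dict) -> int:
    # Hypothetical daily token limits per tier.
    if matches_group(identity, "premium-tier"):
        return 100_000
    if matches_group(identity, "free-tier"):
        return 10_000
    return 0  # no matching limit applies to this identity

print(select_limit({"userid": "alice", "groups": "free-tier,beta"}))  # 10000
```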

1.9.4. Configure token-based rate limiting for LLM APIs

This guide shows how to configure a TokenRateLimitPolicy to protect a hypothetical LLM API deployed on OpenShift Container Platform, integrated with AuthPolicy for user-specific limits.

Prerequisites

  • Connectivity Link is installed on your OpenShift Container Platform cluster.
  • A Gateway and an HTTPRoute are configured to expose your service.
  • An AuthPolicy is configured for authentication (for example, using API keys or OIDC).
  • Redis is configured for Limitador if running in a multi-cluster setup or requiring persistent counters.
  • Your upstream service is configured to return an OpenAI-compatible JSON response containing a usage.total_tokens field in the response body.

Procedure

  1. Create a TokenRateLimitPolicy resource. This example defines two limits: one for free users with a limit of 10,000 tokens per day, and one for pro users with a limit of 100,000 tokens per day.

    apiVersion: kuadrant.io/v1alpha1
    kind: TokenRateLimitPolicy
    metadata:
      name: llm-protection
    spec:
      targetRef:
        group: gateway.networking.k8s.io
        kind: Gateway
        name: ai-gateway
      limits:
        free-users:
          rates:
            - limit: 10000 # 10k tokens per day for free tier
              window: 24h
          when:
            - predicate: request.path == "/v1/chat/completions" # Inference traffic only
            - predicate: |
                auth.identity.groups.split(",").exists(g, g == "free")
          counters:
            - expression: auth.identity.userid
        pro-users:
          rates:
            - limit: 100000 # 100k tokens per day for pro users
              window: 24h
          when:
            - predicate: request.path == "/v1/chat/completions" # Inference traffic only
            - predicate: |
                auth.identity.groups.split(",").exists(g, g == "pro")
          counters:
            - expression: auth.identity.userid
  2. Apply the policy:

    $ oc apply -f your-tokenratelimitpolicy.yaml -n my-api-namespace
  3. Check the status of the policy to ensure it has been accepted and enforced on the target Gateway. Look for conditions with type: Accepted and type: Enforced with status: "True".

    $ oc get tokenratelimitpolicy llm-protection -n my-api-namespace -o jsonpath='{.status.conditions}'
  4. Send requests to your API endpoint, including the required authentication details.

    $ curl -H "Authorization: <auth-details>" \
         -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}' \
         <your-api-endpoint>

Verification

  • Ensure that your upstream service responds with an OpenAI-compatible JSON body containing the usage.total_tokens field.
  • Requests made when the client is within their token limits should receive a 200 OK or other success response, and their token counter is updated.
  • Requests made when the client has already exceeded their token limits should receive a 429 Too Many Requests response.

1.10. Override your gateway policies for auth and rate limiting

As an application developer, you can override your existing gateway-level policies to configure your application-level auth and rate limiting requirements.

You can allow authenticated access to the Toystore API by defining a new AuthPolicy that targets the HTTPRoute resource created in the previous section.

Important

Any new HTTPRoutes are affected by the existing gateway-level policy. Because you want users to now access this API, you must override that gateway policy. For simplicity, you can use API keys to authenticate the requests, but other options such as OpenID Connect are also available.
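
Conceptually, the API key flow works like the following sketch. This is illustrative only; in practice, Authorino matches the APIKEY prefix in the Authorization header against labeled Secrets and exposes the user ID annotation as auth.identity metadata.

```python
# Example keys mirroring the bob-key and alice-key Secrets defined
# in the procedure below; in the real setup these live in Kubernetes
# Secrets annotated with secret.kuadrant.io/user-id.
API_KEYS = {
    "IAMBOB": "bob",
    "IAMALICE": "alice",
}

def authenticate(authorization_header: str):
    """Match an 'APIKEY <key>' Authorization header against known keys
    and return the associated identity, or None to reject with 401."""
    prefix, _, key = authorization_header.partition(" ")
    if prefix != "APIKEY" or key not in API_KEYS:
        return None
    return {"userid": API_KEYS[key]}

print(authenticate("APIKEY IAMBOB"))  # {'userid': 'bob'}
```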

Prerequisites

  • Connectivity Link is installed.
  • You configured Connectivity Link policies.
  • You installed the OpenShift CLI (oc).
  • You are logged into OpenShift Container Platform as a cluster administrator.

Procedure

  1. Ensure that your Connectivity Link system namespace is set correctly by running the following command:

    $ export KUADRANT_SYSTEM_NS=$(oc get kuadrant -A -o jsonpath="{.items[0].metadata.namespace}")
  2. Define API keys for bob and alice users as follows:

    $ oc apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: bob-key
      namespace: ${KUADRANT_SYSTEM_NS}
      labels:
        authorino.kuadrant.io/managed-by: authorino
        app: toystore
      annotations:
        secret.kuadrant.io/user-id: bob
    stringData:
      api_key: IAMBOB
    type: Opaque
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: alice-key
      namespace: ${KUADRANT_SYSTEM_NS}
      labels:
        authorino.kuadrant.io/managed-by: authorino
        app: toystore
      annotations:
        secret.kuadrant.io/user-id: alice
    stringData:
      api_key: IAMALICE
    type: Opaque
    EOF
  3. Create a new AuthPolicy in a different namespace that overrides the deny-all policy created earlier and accepts the API keys as follows:

    $ oc apply -f - <<EOF
    apiVersion: kuadrant.io/v1
    kind: AuthPolicy
    metadata:
      name: toystore-auth
      namespace: ${KUADRANT_DEVELOPER_NS}
    spec:
      targetRef:
        group: gateway.networking.k8s.io
        kind: HTTPRoute
        name: toystore
      defaults:
       when:
         - predicate: "request.path != '/health'"
       rules:
        authentication:
          "api-key-users":
            apiKey:
              selector:
                matchLabels:
                  app: toystore
            credentials:
              authorizationHeader:
                prefix: APIKEY
        response:
          success:
            filters:
              "identity":
                json:
                  properties:
                    "userid":
                      selector: auth.identity.metadata.annotations.secret\.kuadrant\.io/user-id
    EOF

1.11. Overriding the low-limit RateLimitPolicy for specific users

The configured Gateway limits provide a good set of limits for the general case. However, as the developer of the Toystore API, you might want to only allow a certain number of requests for specific users, and a general limit for all other users.
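
The intended behavior, a general limit for most users and a tighter per-user limit for bob, can be sketched as a fixed-window counter. This is illustrative only; Limitador implements the actual counting, and the limits of 5 and 2 requests per 10 seconds match the policy values used in this section.

```python
WINDOW = 10  # seconds, matching `window: 10s` in the policy

class FixedWindowLimiter:
    """Illustrative per-user fixed-window counter: the general limit
    applies to everyone except bob, who gets the tighter bob limit."""
    def __init__(self, general: int = 5, bob: int = 2):
        self.limits = {"general": general, "bob": bob}
        self.counters = {}  # (user, window_start) -> hits in that window

    def allow(self, user: str, now: float) -> bool:
        limit = self.limits["bob"] if user == "bob" else self.limits["general"]
        window_start = int(now // WINDOW) * WINDOW
        key = (user, window_start)
        hits = self.counters.get(key, 0)
        if hits >= limit:
            return False  # the gateway would answer HTTP 429
        self.counters[key] = hits + 1
        return True

limiter = FixedWindowLimiter()
print([limiter.allow("bob", 0.0) for _ in range(3)])  # [True, True, False]
```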

Important

Any new HTTPRoutes are affected by the existing Gateway-level policy. Because you want to apply different limits for specific users of this API, you must override that Gateway-level policy with a policy that targets the HTTPRoute.

Prerequisites

  • You installed Connectivity Link on one or more clusters.
  • If you plan to use rate-limiting in a multicluster environment, you have a shared Redis-based datastore.
  • You installed the OpenShift CLI (oc).
  • You have write access to the OpenShift Container Platform namespaces you need to work with.
  • You have access to external or on-premise DNS.
  • You created a gateway.
  • You configured your gateway policies and HTTP routes.

Procedure

  1. Create a new RateLimitPolicy in a different namespace to override the default low-limit policy created previously and set rate limits for specific users as follows:

    $ oc apply -f - <<EOF
    apiVersion: kuadrant.io/v1
    kind: RateLimitPolicy
    metadata:
      name: toystore-rlp
      namespace: ${KUADRANT_DEVELOPER_NS}
    spec:
      targetRef:
        group: gateway.networking.k8s.io
        kind: HTTPRoute
        name: toystore
      limits:
        "general-user":
          rates:
          - limit: 5
            window: 10s
          counters:
          - expression: auth.identity.userid
          when:
          - predicate: "auth.identity.userid != 'bob'"
        "bob-limit":
          rates:
          - limit: 2
            window: 10s
          when:
          - predicate: "auth.identity.userid == 'bob'"
    EOF

    It might take a few minutes for the RateLimitPolicy to be applied, depending on your cluster.

  2. Check that the RateLimitPolicy has a status of Accepted and Enforced as follows:

    $ oc get ratelimitpolicy -n ${KUADRANT_DEVELOPER_NS} toystore-rlp -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'
  3. Check that the status of the HTTPRoute is now affected by the RateLimitPolicy in the same namespace:

    $ oc get httproute toystore -n ${KUADRANT_DEVELOPER_NS} -o=jsonpath='{.status.parents[0].conditions[?(@.type=="kuadrant.io/RateLimitPolicyAffected")].message}'

Verification

  1. Send requests as user alice as follows:

    $ while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null -H 'Authorization: APIKEY IAMALICE' "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

    You should see HTTP status 200 every second for 5 seconds, followed by HTTP status 429 every second for 5 seconds.

  2. Send requests as user bob as follows:

    $ while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null -H 'Authorization: APIKEY IAMBOB' "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

    You should see HTTP status 200 every second for 2 seconds, followed by HTTP status 429 every second for 8 seconds.

1.12. Additional resources

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution-Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the OpenStack Foundation, used under license.
All other trademarks are the property of their respective owners.