Configuring and deploying gateway policies
Secure, protect, and connect APIs on OpenShift
Chapter 1. Configuring and deploying gateway policies
You can use Connectivity Link to connect and secure an API exposed by a Gateway object.
1.1. Secure, protect, and connect APIs
You can use Connectivity Link on OpenShift Container Platform to connect an API that you expose by applying a Gateway object. Ingress is handled by Gateway API. You must also add a DNS provider secret, TLS and other policies to secure your connections, and use an HTTPRoute object to define the flow of traffic.
Connectivity Link draws on the user-role concepts of Gateway API, for example:
- Platform engineers: Generally in charge of OpenShift Container Platform infrastructure, platform engineers create and secure Gateway objects with associated policies that application developers use to deploy APIs.
- Application developers: Application developers create the applications used on OpenShift Container Platform, and can override the gateway-level global authorization and rate-limiting policies to configure application-level requirements for specific users.
1.1.1. Set up your environment
You can set up your environment variables and deploy an application on your OpenShift Container Platform cluster. In this example, a demonstration application is used.
The Toystore application is an example only and is not intended for production use.
Prerequisites
- You installed Connectivity Link on at least one OpenShift Container Platform cluster.
- You installed the OpenShift CLI (oc).
- You have write access to the OpenShift Container Platform namespaces you need to work with.
- You have access to external or on-premise DNS.
- You know the name and namespace of the gateway you want to connect your application to.
Procedure
Set the following environment variables by running the following command:
$ export KUADRANT_GATEWAY_NS=api-gateway \
  KUADRANT_GATEWAY_NAME=ingress-gateway \
  KUADRANT_DEVELOPER_NS=toystore \
  KUADRANT_AWS_ACCESS_KEY_ID=xxxx \
  KUADRANT_AWS_SECRET_ACCESS_KEY=xxxx \
  KUADRANT_ZONE_ROOT_DOMAIN=example.com \
  KUADRANT_CLUSTER_ISSUER_NAME=self-signed
- KUADRANT_GATEWAY_NS: Namespace for your gateway in OpenShift Container Platform.
- KUADRANT_GATEWAY_NAME: Name of your gateway in OpenShift Container Platform.
- KUADRANT_DEVELOPER_NS: Namespace for the example Toystore app in OpenShift Container Platform. You can replace this value with the name of the application you want to use.
- KUADRANT_AWS_ACCESS_KEY_ID: DNS provider access key ID. In this example, AWS is used. You can replace this value with your DNS provider information.
- KUADRANT_AWS_SECRET_ACCESS_KEY: DNS provider secret access key with permissions to manage your DNS zone. In this example, AWS is used. You can replace this value with your DNS provider information.
- KUADRANT_ZONE_ROOT_DOMAIN: The root domain associated with your DNS zone. In this example, the DNS provider is AWS Route 53. You can replace this value with your DNS provider information.
- KUADRANT_CLUSTER_ISSUER_NAME: Name of the certificate authority or issuer for TLS certificates.
Create the namespace for the application by running the following command:
$ oc create ns ${KUADRANT_DEVELOPER_NS}
Deploy your application to the namespace you specified by running the following command:
$ oc apply -f https://raw.githubusercontent.com/Kuadrant/Kuadrant-operator/main/examples/toystore/toystore.yaml -n ${KUADRANT_DEVELOPER_NS}
To use your own application, replace the URL with the path to your application manifest.
1.1.2. Setting up a DNS provider secret
As a platform engineer, you can create access to the DNS zones that Connectivity Link can use by configuring your external DNS provider credentials. After setting up your Secret custom resource (CR), restrict access to only the DNS zones that you want Connectivity Link to manage by setting a DNSPolicy CR.
You must apply the following Secret CR to each cluster.
Prerequisites
- You installed Connectivity Link on one or more clusters.
- You installed the OpenShift CLI (oc).
- You have write access to the OpenShift Container Platform namespaces you need to work with.
- You have access to external or on-premise DNS.
Procedure
Create the namespace that you want your Gateway CR deployed in by running the following command:
$ oc create ns ${KUADRANT_GATEWAY_NS}
Create the secret credentials in the same namespace as the gateway by running the following command:
$ oc -n ${KUADRANT_GATEWAY_NS} create secret generic <aws-credentials> \
  --type=kuadrant.io/aws \
  --from-literal=AWS_ACCESS_KEY_ID=$KUADRANT_AWS_ACCESS_KEY_ID \
  --from-literal=AWS_SECRET_ACCESS_KEY=$KUADRANT_AWS_SECRET_ACCESS_KEY
Replace <aws-credentials> with the name of the secret you want to use.
Next step
- Configure your TLS issuer and policy.
1.1.3. Create your Gateway object
As a platform engineer or cluster administrator, you must deploy a Gateway object in your OpenShift Container Platform cluster to begin setting up the infrastructure used by application developers. The Gateway is the instantiation of your entry point. It tells the controller to provision a load balancer with specific ports and security credentials.
In a multicluster environment, for Connectivity Link to balance traffic by using DNS across clusters, you must define your Gateway object with a shared hostname. You can define this by using an HTTPS listener with a wildcard hostname based on the root domain. You must apply these resources to each cluster that you want to use them.
Prerequisites
- You installed Connectivity Link on one or more clusters.
-
You installed the OpenShift CLI (
oc). - You have write access to the OpenShift Container Platform namespaces you need to work with.
- You have access to external or on-premise DNS.
Procedure
Create a Gateway custom resource (CR), such as {KUADRANT_GATEWAY_NAME}.yaml, that has the following information:
Example Gateway CR
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ${KUADRANT_GATEWAY_NAME}
  namespace: ${KUADRANT_GATEWAY_NS}
  labels:
    kuadrant.io/gateway: "true"
spec:
  gatewayClassName: openshift-default
  listeners:
  - allowedRoutes:
      namespaces:
        from: All
    hostname: "api.${KUADRANT_ZONE_ROOT_DOMAIN}"
    name: api
    port: 443
    protocol: HTTPS
    tls:
      certificateRefs:
      - group: ""
        kind: Secret
        name: api-${KUADRANT_GATEWAY_NAME}-tls
      mode: Terminate
Important
In a multicluster environment, for Connectivity Link to balance traffic by using DNS across clusters, you must specify a gateway with a shared hostname. You can define this by using an HTTPS listener with a wildcard hostname based on the root domain.
Apply the Gateway CR by running the following command:
$ oc apply -f {KUADRANT_GATEWAY_NAME}.yaml
Verification
Check the status of your Gateway object by running the following command:
$ oc get gateway ${KUADRANT_GATEWAY_NAME} -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Programmed")].message}'
Example output
Resource accepted
Resource programmed, assigned to service(s) ${KUADRANT_GATEWAY_NAME}.${KUADRANT_GATEWAY_NS}.svc.cluster.local:443
Check the status of your HTTPS listener by running the following command:
$ oc get gateway ${KUADRANT_GATEWAY_NAME} -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.listeners[0].conditions[?(@.type=="Programmed")].message}'
The HTTPS listener exists, but is not programmed or ready to accept traffic because you do not have valid certificates available.
Next step
- Create a TLSPolicy CR to program your HTTPS listener to accept traffic.
1.2. About configuring your Gateway policies and HTTP route
As a platform engineer or cluster administrator, you must expose endpoints and program your HTTPS listener after you deploy your Gateway object. Next, take the following steps:
- Create and apply a TLSPolicy custom resource (CR) that uses your CertificateIssuer object to set up your HTTPS listener certificates.
- Create and apply an HTTPRoute CR for your Gateway object to communicate with your backend application API.
- Create and apply an AuthPolicy CR to set up a default HTTP 403 response for any unprotected endpoints.
- Create and apply a RateLimitPolicy CR to set up a default, artificially low global limit to further protect any endpoints exposed by the Gateway object.
- Create and apply a DNSPolicy CR with a load balancing strategy for your Gateway object.
In multicluster environments, you must perform all of the steps in each cluster individually, unless specifically excluded.
1.2.1. Add a TLS certificate issuer
To secure communication to your Gateway object, you must define a certificate authority (CA) as an issuer for TLS certificates. Configure a TLSPolicy CR to automatically provision TLS certificates based on Gateway object listener hosts by using integration with cert-manager Operator for Red Hat OpenShift and ACME providers.
You can use any certificate issuer supported by cert-manager. In multicluster environments, you must add your TLS issuer in each OpenShift Container Platform cluster.
If you set up your Gateway CR to use HTTP instead of HTTPS, you do not need to set up TLS or a TLSPolicy CR.
Prerequisites
- You installed Connectivity Link on one or more clusters.
- You installed the OpenShift CLI (oc).
- You have write access to the OpenShift Container Platform namespaces you need to work with.
- You have access to external or on-premise DNS.
- You created and applied a Gateway object.
- You created and applied an HTTPRoute object.
Procedure
Before adding a TLS certificate issuer, create the secret credentials in the cert-manager namespace by running the following command:
$ oc -n cert-manager create secret generic <dns-provider-credentials> \
  --type=kuadrant.io/aws \
  --from-literal=AWS_ACCESS_KEY_ID=$KUADRANT_AWS_ACCESS_KEY_ID \
  --from-literal=AWS_SECRET_ACCESS_KEY=$KUADRANT_AWS_SECRET_ACCESS_KEY
Replace <dns-provider-credentials> with the name of the secret you want to use. This example uses Amazon Web Services (AWS).
Create a TLS certificate issuer resource, such as {KUADRANT_CLUSTER_ISSUER_NAME}.yaml, that has the following information:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ${KUADRANT_CLUSTER_ISSUER_NAME}
spec:
  selfSigned: {}
Apply the ClusterIssuer CR by running the following command:
$ oc apply -f {KUADRANT_CLUSTER_ISSUER_NAME}.yaml
Verification
Verify that the ClusterIssuer object is ready by running the following command:
$ oc wait clusterissuer/${KUADRANT_CLUSTER_ISSUER_NAME} --for=condition=ready=true
1.2.2. Setting a TLS policy
Create a TLSPolicy custom resource (CR) for your gateway to regulate the ciphers a client can use when connecting to the server. This ensures that Connectivity Link components use cryptographic libraries that do not allow known insecure protocols, ciphers, or algorithms.
If you set up your Gateway CR to use HTTP instead of HTTPS, you do not need to set up TLS or a TLSPolicy CR.
Prerequisites
- You installed Connectivity Link on one or more clusters.
- You installed the OpenShift CLI (oc).
- You have write access to the OpenShift Container Platform namespaces you need to work with.
- You have access to external or on-premise DNS.
- You created and applied a Gateway object.
Procedure
Create a TLSPolicy custom resource (CR), such as <toystore_tls.yaml>, that has the following information:
Example TLSPolicy CR
apiVersion: kuadrant.io/v1
kind: TLSPolicy
metadata:
  name: ${KUADRANT_GATEWAY_NAME}-tls
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  targetRef:
    name: ${KUADRANT_GATEWAY_NAME}
    group: gateway.networking.k8s.io
    kind: Gateway
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: ${KUADRANT_CLUSTER_ISSUER_NAME}
Apply your TLSPolicy CR by running the following command:
$ oc apply -f <toystore_tls.yaml>
Replace <toystore_tls.yaml> with the filename of your TLSPolicy CR.
Verification
Verify that your TLSPolicy object has an Accepted and Enforced status by running the following command:
$ oc get tlspolicy ${KUADRANT_GATEWAY_NAME}-tls -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'
This might take a few minutes.
Example output
TLSPolicy has been accepted
TLSPolicy has been successfully enforced
1.2.3. Setting the DNS policy
Control multicluster ingress by using DNS to bring traffic to your gateways with DNSPolicy custom resources (CRs). Setting a DNS policy automates the link between your Gateway IP address and a human-readable hostname.
Create and apply a default DNSPolicy custom resource (CR) to ensure consistency and cleanup. You can standardize naming by creating a pattern for developers who need URLs, manage time-to-live values, and set up automatic cleanup of unused DNS records.
Prerequisites
- You installed Connectivity Link on at least one OpenShift Container Platform cluster.
- You installed the OpenShift CLI (oc).
- You have write access to the OpenShift Container Platform namespaces you need to work with.
- You have access to external or on-premise DNS.
- You created a DNS-provider Secret object.
- You created a Gateway object.
Procedure
Create a DNSPolicy custom resource (CR), for example, {KUADRANT_GATEWAY_NAME}-dnspolicy.yaml, that includes the following information:
apiVersion: kuadrant.io/v1
kind: DNSPolicy
metadata:
  name: <${KUADRANT_GATEWAY_NAME}-dnspolicy>
  namespace: <${KUADRANT_GATEWAY_NS}>
spec:
  healthCheck:
    failureThreshold: 3
    interval: 1m
    path: /health
  loadBalancing:
    defaultGeo: true
    geo: GEO-NA
    weight: 120
  targetRef:
    name: <${KUADRANT_GATEWAY_NAME}>
    group: gateway.networking.k8s.io
    kind: Gateway
  providerRefs:
  - name: <dns_provider_credentials>
- Replace <${KUADRANT_GATEWAY_NAME}-dnspolicy> with the name based on the environment variable you defined.
- Replace <${KUADRANT_GATEWAY_NS}> with the environment variable you defined.
- spec.loadBalancing.geo: Defines a geographically relevant load balancer. In this example, GEO-NA is used. Change this to match your requirements.
- spec.providerRefs: Replace <dns_provider_credentials> with a reference to the OpenShift Container Platform secret containing the credentials for your DNS provider.
Apply the DNSPolicy CR by running the following command:
$ oc apply -f <{KUADRANT_GATEWAY_NAME}-dnspolicy.yaml> -n <gateway-namespace>
- Replace <{KUADRANT_GATEWAY_NAME}-dnspolicy.yaml> with the filename you used.
- Replace <gateway-namespace> with the name of the OpenShift Container Platform namespace that contains the gateway.
Verification
Verify that your DNSPolicy object has a status of Accepted and Enforced by running the following command:
$ oc get dnspolicy <${KUADRANT_GATEWAY_NAME}-dnspolicy> -n <${KUADRANT_GATEWAY_NS}> -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'
- Replace <${KUADRANT_GATEWAY_NAME}-dnspolicy> with the name of your DNSPolicy object.
- Replace <${KUADRANT_GATEWAY_NS}> with the environment variable you used.
- This process might take a few minutes.
Check the status of the DNS health checks that are enabled on your DNS policy by running the following command:
$ oc get dnspolicy <${KUADRANT_GATEWAY_NAME}-dnspolicy> -n <${KUADRANT_GATEWAY_NS}> -o=jsonpath='{.status.conditions[?(@.type=="SubResourcesHealthy")].message}'
- Replace <${KUADRANT_GATEWAY_NAME}-dnspolicy> with the name of your DNSPolicy object.
- Replace <${KUADRANT_GATEWAY_NS}> with the environment variable you used.
- These health checks flag a published endpoint as healthy or unhealthy based on the defined configuration. When unhealthy, an endpoint is not published if it has not already been published to the DNS provider. An endpoint is only unpublished if it is part of an A record that has multiple values. You can see the endpoint status in all cases in the DNSPolicy object status.
1.2.4. Setting the default AuthPolicy
As a platform engineer, you can use AuthPolicy objects to define who is allowed to connect. Configure a Connectivity Link AuthPolicy custom resource (CR) to use external auth providers. You can ensure that different clusters exposing the same API authenticate with the same permissions.
Apply an AuthPolicy custom resource (CR) with a deny-all policy to create a zero-trust environment. Using a zero-trust AuthPolicy object means that no traffic flows unless a specific allow rule is set. Every connection request must be authenticated. You can prevent the accidental exposing of services by using a deny-all policy.
In a zero-trust environment, every application with an HTTPRoute exposed by an application developer must also have an attached route-level AuthPolicy CR. You can attach multiple AuthPolicy objects to your Gateway and HTTPRoute CRs.
Prerequisites
- You installed Connectivity Link on one or more clusters.
- You installed the OpenShift CLI (oc).
- You have write access to the OpenShift Container Platform namespaces you need to work with.
- You have access to external or on-premise DNS.
- You created a Gateway object.
- You created an HTTPRoute object.
Procedure
Create a default AuthPolicy CR, such as <gateway_name-auth.yaml>, with a deny-all setting for your Gateway object:
Example AuthPolicy CR
apiVersion: kuadrant.io/v1
kind: AuthPolicy
metadata:
  name: ${KUADRANT_GATEWAY_NAME}-auth
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ${KUADRANT_GATEWAY_NAME}
  defaults:
    when:
    - predicate: "request.path != '/health'"
    rules:
      authorization:
        deny-all:
          opa:
            rego: "allow = false"
      response:
        unauthorized:
          headers:
            "content-type":
              value: application/json
          body:
            value: |
              {
                "error": "Forbidden",
                "message": "Access denied by default by the gateway operator. If you are the administrator of the service, create a specific auth policy for the route."
              }
Apply your AuthPolicy CR by running the following command:
$ oc apply -f <gateway_name-auth.yaml>
Replace <gateway_name-auth.yaml> with the filename of your AuthPolicy CR.
Verification
Check that your AuthPolicy has Accepted and Enforced status by running the following command:
$ oc get authpolicy ${KUADRANT_GATEWAY_NAME}-auth -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'
Example output
AuthPolicy has been accepted
AuthPolicy has been successfully enforced
Test your AuthPolicy by running the following curl command:
$ curl -H 'Authorization: APIKEY ndyBzreUzF4zqDQsqSPMHkRhriEOtcRx' http://talker-api.127.0.0.1.nip.io:8000/hello
Example output
HTTP/1.1 200 OK
1.2.5. Setting a default rate-limit policy
As a platform engineer, set up rate limiting to define how much of the Gateway object’s connection resources any one service can use. Rate limits also protect backend services from excessive requests. You can attach multiple RateLimitPolicy objects to your Gateway and HTTPRoute CRs.
Prerequisites
- You installed Connectivity Link on one or more clusters.
- You have a shared Redis-based datastore.
- You installed the OpenShift CLI (oc).
- You have write access to the OpenShift Container Platform namespaces you need to work with.
- You have access to external or on-premise DNS.
- You created a Gateway object.
- You created an HTTPRoute object.
Procedure
Create a default RateLimitPolicy custom resource (CR), for example, <gateway_name-rlp.yaml>, that has the following information:
Example RateLimitPolicy CR
apiVersion: kuadrant.io/v1
kind: RateLimitPolicy
metadata:
  name: ${KUADRANT_GATEWAY_NAME}-rlp
  namespace: ${KUADRANT_GATEWAY_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ${KUADRANT_GATEWAY_NAME}
  defaults:
    limits:
      "low-limit":
        rates:
        - limit: 1
          window: 10s
- A low-limit value is used in this example for ease of testing the CR. Configure the spec.defaults.limits values that make sense for your use case.
Apply your RateLimitPolicy CR by running the following command:
$ oc apply -f <gateway_name-rlp.yaml>
Replace <gateway_name-rlp.yaml> with the filename of your RateLimitPolicy YAML.
Verification
Check that your RateLimitPolicy has Accepted and Enforced status by running the following command:
$ oc get ratelimitpolicy ${KUADRANT_GATEWAY_NAME}-rlp -n ${KUADRANT_GATEWAY_NS} -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'
Example output
RateLimitPolicy has been accepted
RateLimitPolicy has been successfully enforced
Test your rate limiting by running the following curl command:
$ while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done
HTTP 403 responses are expected while the gateway-level deny-all AuthPolicy is in place.
1.3. About token-based rate limiting with TokenRateLimitPolicy
As an application developer, you can use a TokenRateLimitPolicy custom resource (CR) to enforce rate limits based on token consumption rather than the number of requests.
This policy extends the Envoy Rate Limit Service (RLS) protocol with automatic token usage extraction. It is particularly useful for protecting Large Language Model (LLM) APIs, where the cost and resource usage correlate more closely with the number of tokens processed.
TokenRateLimitPolicy counts tokens by extracting usage metrics from the body of the artificial intelligence (AI) inference API response, allowing for finer-grained control over API usage based on actual workload.
1.3.1. How token rate limiting works
The TokenRateLimitPolicy object tracks cumulative token usage per client. Before forwarding a request, it checks whether the client has already exceeded their limit. After the upstream responds, the policy extracts the actual token cost and updates the client’s counter.
For example:
- On an incoming request, the gateway evaluates the matching rules and predicates from the TokenRateLimitPolicy resources.
- If the request matches, the gateway prepares the necessary rate limit descriptors and monitors the response.
- After receiving the response, the gateway extracts the usage.total_tokens field from the JSON response body.
- The gateway then sends a RateLimitRequest to Limitador, including the actual token count as a hits_addend.
- Limitador tracks the cumulative token usage and responds to the gateway with OK or OVER_LIMIT.
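The extraction step assumes an OpenAI-compatible completion response. A minimal sketch of the response body shape that the gateway parses might look like the following; all field values here are illustrative, not from this guide:

```json
{
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello!"},
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 3,
    "total_tokens": 12
  }
}
```

In a response shaped like this, the gateway reads usage.total_tokens (12) and reports it to Limitador as the hits_addend, so the client's counter increases by 12 for this single request.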
1.3.2. Key features and use cases
- Enforces limits based on token usage by extracting the usage.total_tokens field from an OpenAI-style inference JSON response body.
- Suitable for consumption-based APIs such as LLMs where the cost is tied to token counts.
- Allows defining different limits based on criteria such as user identity, API endpoints, or HTTP methods.
- Works with AuthPolicy to apply specific limits to authenticated users or groups.
- Inherits functionality from RateLimitPolicy, including defining multiple limits with different durations and using Redis for shared counters in multicluster environments.
1.3.3. Integrating with AuthPolicy
You can combine TokenRateLimitPolicy with AuthPolicy to apply token limits based on authenticated user identity. When an AuthPolicy successfully authenticates a request, it injects identity information that is used by the TokenRateLimitPolicy to select the appropriate limit.
For example, you can define different token limits for users belonging to different tiers of groups, identified by using claims in a JWT that is validated by an AuthPolicy.
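As a sketch of this integration (the policy name and issuer URL are illustrative assumptions, not values from this guide), an AuthPolicy might validate a JWT and inject its claims into auth.identity so that TokenRateLimitPolicy predicates such as auth.identity.groups can evaluate them:

```yaml
apiVersion: kuadrant.io/v1
kind: AuthPolicy
metadata:
  name: ai-gateway-auth                      # illustrative name
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ai-gateway
  rules:
    authentication:
      "jwt-users":
        jwt:
          # Illustrative issuer; use your identity provider's URL.
          issuerUrl: https://sso.example.com/realms/llm
    response:
      success:
        filters:
          "identity":
            json:
              properties:
                # Expose JWT claims under auth.identity for
                # downstream policy predicates and counters.
                "userid":
                  selector: auth.identity.sub
                "groups":
                  selector: auth.identity.groups
```

Adjust the selectors to match the claims that your identity provider actually issues.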
1.3.4. Configuring token-based rate limiting for LLM APIs
You can protect Large Language Model (LLM) APIs deployed on OpenShift Container Platform by configuring a TokenRateLimitPolicy custom resource (CR) that you integrate with an AuthPolicy object for user-specific limits.
Prerequisites
- You installed Connectivity Link on the OpenShift Container Platform cluster you are working with.
- Gateway and HTTPRoute objects are both configured to expose your service.
- You created and applied an AuthPolicy custom resource (CR) that overrides the default deny-all setting.
- If you are running in a multicluster setup, or require persistent counters, Redis is configured for the Limitador Operator component.
- You configured your upstream service to return an OpenAI-compatible JSON response containing a usage.total_tokens field in the response body.
Procedure
Create a TokenRateLimitPolicy YAML, for example, tokenratelimitpolicy.yaml, that includes the following information:
Example TokenRateLimitPolicy YAML
apiVersion: kuadrant.io/v1alpha1
kind: TokenRateLimitPolicy
metadata:
  name: llm-protection
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: ai-gateway
  limits:
    free-users:
      rates:
      - limit: 10000
        window: 24h
      when:
      - predicate: request.path == "/v1/chat/completions"
      - predicate: |
          auth.identity.groups.split(",").exists(g, g == "free")
      counters:
      - expression: auth.identity.userid
    pro-users:
      rates:
      - limit: 100000
        window: 24h
      when:
      - predicate: request.path == "/v1/chat/completions"
      - predicate: |
          auth.identity.groups.split(",").exists(g, g == "pro")
      counters:
      - expression: auth.identity.userid
- Choose a filename for the policy that makes sense in your environment.
- spec.limits.free-users.rates.limit: 10,000 tokens per day for the free tier.
- spec.limits.free-users.when.predicate: Limits matching to inference traffic only.
- spec.limits.pro-users.rates.limit: 100,000 tokens per day for the pro tier.
- spec.limits.pro-users.when.predicate: Limits matching to inference traffic only.
Apply the TokenRateLimitPolicy by running the following command:
$ oc apply -f <tokenratelimitpolicy.yaml> -n <gateway_namespace>
Replace <tokenratelimitpolicy.yaml> with the filename you used. Replace <gateway_namespace> with your gateway namespace.
Check the status of the policy to ensure it was accepted and enforced on the target Gateway. Look for conditions with type: Accepted and type: Enforced with status: "True":
$ oc get tokenratelimitpolicy llm-protection -n <gateway_namespace> -o jsonpath='{.status.conditions}'
Replace <gateway_namespace> with your gateway namespace.
Send requests to your API endpoint, including the required authentication details, by running the following command:
$ curl -H "Authorization: <auth_credentials>" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}' \
  <api_endpoint>
Replace <auth_credentials> and <api_endpoint> with your values.
Verification
- Ensure that your upstream service responds with an OpenAI-compatible JSON body containing the usage.total_tokens field.
- Requests made while the client is within their token limit receive a 200 OK response, or another success status, and the client's token counter is updated.
- Requests made after the client has exceeded their token limit receive a 429 Too Many Requests response.
1.4. Override your gateway policies for auth and rate limiting
As an application developer, you can allow users access to your API by overriding existing deny-all gateway-level policies. You must attach application-level AuthPolicy objects and rate-limiting CRs to your HTTPRoute objects.
The following example grants two users authenticated access to the Toystore API by using API keys to authenticate the requests. Other options, such as OpenID Connect, might be more appropriate for production use. Use the least-access approach that is best for your use case.
Prerequisites
- Connectivity Link is installed.
- You configured Connectivity Link policies.
- You installed the OpenShift CLI (oc).
- You are logged into OpenShift Container Platform as a cluster administrator.
Procedure
Set the KUADRANT_SYSTEM_NS environment variable based on where you created the Kuadrant custom resource (CR) by running the following command:
$ export KUADRANT_SYSTEM_NS=$(oc get kuadrant -A -o jsonpath="{.items[0].metadata.namespace}")
Create the Secret custom resources (CRs), or API keys, for the bob and alice users that contain the following information:
Example user Secret CRs
apiVersion: v1
kind: Secret
metadata:
  name: bob-key
  namespace: ${KUADRANT_SYSTEM_NS}
  labels:
    authorino.kuadrant.io/managed-by: authorino
    app: toystore
  annotations:
    secret.kuadrant.io/user-id: bob
stringData:
  api_key: IAMBOB
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: alice-key
  namespace: ${KUADRANT_SYSTEM_NS}
  labels:
    authorino.kuadrant.io/managed-by: authorino
    app: toystore
  annotations:
    secret.kuadrant.io/user-id: alice
stringData:
  api_key: IAMALICE
type: Opaque
Apply the API keys by running the following commands:
$ oc apply -f <bob-key.yaml>
Replace <bob-key.yaml> with the filename you used.
$ oc apply -f <alice-key.yaml>
Replace <alice-key.yaml> with the filename you used.
Create a new AuthPolicy in a different namespace that overrides the deny-all policy and accepts the API keys. For example, the following toystore-auth.yaml:
Example user AuthPolicy
apiVersion: kuadrant.io/v1
kind: AuthPolicy
metadata:
  name: toystore-auth
  namespace: ${KUADRANT_DEVELOPER_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore
  defaults:
    when:
    - predicate: "request.path != '/health'"
    rules:
      authentication:
        "api-key-users":
          apiKey:
            selector:
              matchLabels:
                app: toystore
          credentials:
            authorizationHeader:
              prefix: APIKEY
      response:
        success:
          filters:
            "identity":
              json:
                properties:
                  "userid":
                    selector: auth.identity.metadata.annotations.secret\.kuadrant\.io/user-id
Apply the AuthPolicy CR by running the following command:
$ oc apply -f <toystore-auth.yaml>
Replace <toystore-auth.yaml> with the filename you used.
1.4.1. About Service and Deployment objects
Before you can create a route to host your application at a public URL, you must create a Service custom resource (CR) as a routing rule. As a best practice, you must also create a Deployment object that manages the application pod.
The following is an example of both Deployment and Service CRs for the Toystore example application that you can use as reference in creating your own.
Example application Deployment and Service CRs
apiVersion: apps/v1
kind: Deployment
metadata:
  name: toystore
  labels:
    app: toystore
spec:
  selector:
    matchLabels:
      app: toystore
  template:
    metadata:
      labels:
        app: toystore
    spec:
      containers:
      - name: toystore
        image: quay.io/kuadrant/authorino-examples:talker-api
        env:
        - name: LOG_LEVEL
          value: "debug"
        - name: PORT
          value: "3000"
        ports:
        - containerPort: 3000
          name: http
  replicas: 1
---
apiVersion: v1
kind: Service
metadata:
  name: toystore
spec:
  selector:
    app: toystore
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
1.4.2. Creating an HTTP route for an application
As an application developer, you can create a route to host your application at a public URL. In Gateway API, use the HTTPRoute custom resource (CR) to specify the routing behavior of HTTP requests from your Gateway object to your application. An HTTPRoute CR is especially useful for multiplexing HTTP or terminated HTTPS connections.
Prerequisites
- You installed Connectivity Link on one or more clusters.
- You installed the OpenShift CLI (oc).
- You have write access to the OpenShift Container Platform namespaces you need to work with.
- You have access to external or on-premise DNS.
- You created and applied a Gateway object.
- You have a web application that exposes a port and a TCP endpoint listening for traffic on the port.
- You created a Service object for your application.
- You have a local Certificate Authority (CA) bundle.
Procedure
Create an HTTPRoute custom resource (CR), such as <toystore>-route.yaml, that has the following information:
Example HTTPRoute CR
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: <toystore>
  namespace: <${KUADRANT_DEVELOPER_NS}>
  labels:
    deployment: <toystore>
    service: <toystore>
spec:
  parentRefs:
  - name: <${KUADRANT_GATEWAY_NAME}>
    namespace: <${KUADRANT_GATEWAY_NS}>
  hostnames:
  - api.${KUADRANT_ZONE_ROOT_DOMAIN}
  rules:
  - matches:
    - method: GET
      path:
        type: PathPrefix
        value: /cars
    - method: GET
      path:
        type: PathPrefix
        value: /health
    backendRefs:
    - name: <toystore>
      port: 80
- Replace <toystore> with the name of your application.
- metadata.namespace: The namespace in which you are deploying your application. Replace <${KUADRANT_DEVELOPER_NS}> with the environment variable you used during installation.
- spec.parentRefs.name and spec.parentRefs.namespace: The values must match the Gateway object.
- spec.hostnames: The hostname must match the one specified in the Gateway object.
- spec.rules.backendRefs.name: The name of the Service for your application.
Apply your HTTPRoute CR by running the following command:

$ oc apply -f <toystore>-route.yaml

Replace <toystore> with the name of your application.

Example output

httproute.gateway.networking.k8s.io/toystore-route created

The output indicates that the route to the application exists.
Verification

Verify that the HTTPRoute is created by running the following command:

$ oc get httproute <toystore> -n ${KUADRANT_DEVELOPER_NS} -o=jsonpath='{.status.parents[].conditions[?(@.type=="Accepted")].message}{"\n"}{.status.parents[].conditions[?(@.type=="ResolvedRefs")].message}'

Replace <toystore> with the name of your application.

Example output
Route was valid
If you have DNSPolicy and TLSPolicy objects applied, you can validate that your backend is reachable by running the following command:

$ curl -k https://api.${KUADRANT_ZONE_ROOT_DOMAIN}:443/cars

Note that the example TLSPolicy CR uses a self-signed ClusterIssuer object.
1.4.3. Overriding the low-limit RateLimitPolicy for specific users
When you want to allow only a certain number of requests from specific users to an API that you are developing, and a general limit for all other users, you can override the default low-limit RateLimitPolicy custom resource (CR).

An existing gateway-level policy affects new HTTPRoute objects. Because you want specific users to access this API at their own rates, you must override that gateway policy. For simplicity, this example uses API keys to authenticate the requests, but other options, such as OpenID Connect, are also available.
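For illustration, an API key for user bob could be stored as a labeled Secret similar to the following sketch. The label and annotation names shown here are assumptions and must match the secret selectors in the AuthPolicy applied to your gateway:

```yaml
# Hypothetical API key Secret for user bob.
# The labels and the user-id annotation are assumptions; align them with the
# secret selectors configured in your gateway-level AuthPolicy.
apiVersion: v1
kind: Secret
metadata:
  name: bob-key
  namespace: kuadrant-system
  labels:
    authorino.kuadrant.io/managed-by: authorino
    app: toystore
  annotations:
    secret.kuadrant.io/user-id: bob
stringData:
  api_key: IAMBOB
type: Opaque
```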
Prerequisites
- You installed Connectivity Link on one or more clusters.
- If you plan to use rate limiting in a multicluster environment, you have a shared Redis-based datastore.
- You installed the OpenShift CLI (oc).
- You have write access to the OpenShift Container Platform namespaces you need to work with.
- You have access to external or on-premise DNS.
- You created a Gateway object.
- You configured your gateway policies and HTTPRoute objects.
Procedure
Create a new RateLimitPolicy custom resource (CR) in a different namespace to override the default low-limit policy and set rate limits for specific users by using the following example:

Example RateLimitPolicy CR for specific users

apiVersion: kuadrant.io/v1
kind: RateLimitPolicy
metadata:
  name: toystore-rlp
  namespace: ${KUADRANT_DEVELOPER_NS}
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore
  limits:
    "general-user":
      rates:
      - limit: 5
        window: 10s
      counters:
      - expression: auth.identity.userid
      when:
      - predicate: "auth.identity.userid != 'bob'"
    "bob-limit":
      rates:
      - limit: 2
        window: 10s
      when:
      - predicate: "auth.identity.userid == 'bob'"

Apply the CR by running the following command:
$ oc apply -f <toystore-rlp>

Replace <toystore-rlp> with the name of your YAML file.

Wait a few minutes for the RateLimitPolicy CR to be applied.
Check that the RateLimitPolicy has a status of Accepted and Enforced by running the following command:

$ oc get ratelimitpolicy -n ${KUADRANT_DEVELOPER_NS} toystore-rlp -o=jsonpath='{.status.conditions[?(@.type=="Accepted")].message}{"\n"}{.status.conditions[?(@.type=="Enforced")].message}'

Check that the status of the HTTPRoute is now affected by the RateLimitPolicy CR in the same namespace by running the following command:

$ oc get httproute toystore -n ${KUADRANT_DEVELOPER_NS} -o=jsonpath='{.status.parents[0].conditions[?(@.type=="kuadrant.io/RateLimitPolicyAffected")].message}'
Verification
Send requests as user alice by running the following command:
$ while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null -H 'Authorization: APIKEY IAMALICE' "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

The expected outcome is an HTTP status 200 every second for 5 seconds, followed by HTTP status 429 every second for 5 seconds.

Send requests as user bob by running the following command:

$ while :; do curl -k --write-out '%{http_code}\n' --silent --output /dev/null -H 'Authorization: APIKEY IAMBOB' "https://api.$KUADRANT_ZONE_ROOT_DOMAIN/cars" | grep -E --color "\b(429)\b|$"; sleep 1; done

The expected outcome is an HTTP status 200 every second for 2 seconds, followed by HTTP status 429 every second for 8 seconds.
1.5. Additional resources
Chapter 2. Using on-premise DNS with CoreDNS
You can secure, protect, and connect an API exposed by a gateway that uses Gateway API by using Connectivity Link.
2.1. About using on-premise DNS with CoreDNS
You can self-manage your on-premise DNS by integrating CoreDNS with your DNS infrastructure through access control and zone delegation. Connectivity Link combines the DNS Operator with CoreDNS to simplify your management and security for on-premise DNS servers. You can use CoreDNS in both single-cluster and multicluster scenarios.
CoreDNS is best used in environments that change often, where using a DNS-as-code approach makes sense. The following situations are example use cases for integrating with CoreDNS:
- You need to avoid dependency on external cloud DNS services.
- You have regulatory or compliance requirements mandating self-hosted infrastructure.
- You need to keep full control over DNS records.
- You want to delegate specific DNS zones from existing DNS servers to Kubernetes-managed CoreDNS.
- You require consistent DNS management across hybrid or multicloud environments.
- You need to reduce DNS operational costs by eliminating per-query charges.
- You do not want to directly manage DNS records on the on-premise DNS server.
- You need to keep authoritative control on edge DNS servers.
For example:
- Configure your authoritative on-premise DNS server to delegate a specific subdomain, such as deployment.example.local, to CoreDNS instances managed by Connectivity Link.
- Any DNSPolicy CR can then interact with the CoreDNS provider within the OpenShift Container Platform cluster. You can specify the DNS provider that handles the records for the targeted gateways in the delegate field of the DNS policy.
- The CoreDNS instance becomes authoritative for the delegated subdomain and manages the necessary DNS records for gateways within that subdomain.
2.2. CoreDNS integration architecture
CoreDNS is a DNS server that consists of default plugins that perform several tasks, for example:
- Automatically detecting when you add new services to your cluster and adding them to directories.
- Caching recent addresses to avoid the latency of repeated lookups.
- Running health checks and skipping over services that are down.
- Providing dynamic redirects by rewriting queries as they come in.
You can add plugins for observability and other services that you require by updating CoreDNS with the DNS Operator.
With the DNS Operator, DNS is the first layer of traffic management. You can deploy the DNS Operator to multiple clusters and coordinate them all on a given zone. This means that you can use a shared domain name across clusters to balance traffic based on your requirements.
2.2.1. Technical workflow
To give you integration with CoreDNS, Connectivity Link extends the DNS Operator with the kuadrant CoreDNS plugin that sources records from the kuadrant.io/v1alpha1/DNSRecord custom resource (CR) and applies location-based and weighted response capabilities.
You can create DNS records that point to the CoreDNS provider secret in one of the following three ways:
- Create the record manually.
- Use a non-delegating DNS policy at a gateway with routes attached. The Kuadrant Operator creates DNSRecord CRs that reference the secret.
- Use a delegating DNS policy at a gateway. The delegating policy results in the creation of a delegating DNSRecord CR without a secret reference. All delegating DNSRecord CRs are combined into a single authoritative DNSRecord CR. The authoritative DNSRecord uses a default provider secret.
The DNS Operator reconciles authoritative records that have the CoreDNS secret referenced and applies labels only to those CRs. CoreDNS watches those records and matches the labels with zones configured in the Corefile. If there is a match, the authoritative DNSRecord CR is used to serve a DNS response.
There are no changes to the dnsPolicy API and no required changes to the policy controllers. This integration is isolated to the DNS Operator and the CoreDNS plugin.
The CoreDNS integration supports both single-cluster and multicluster deployments.
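For reference, a minimal authoritative DNSRecord CR that references a CoreDNS provider secret might look like the following sketch. The name, hostnames, and target IP address are placeholder assumptions:

```yaml
# Sketch of a DNSRecord CR served by the kuadrant CoreDNS plugin.
apiVersion: kuadrant.io/v1alpha1
kind: DNSRecord
metadata:
  name: api-example            # placeholder name
  namespace: kuadrant-system
spec:
  rootHost: api.k.example.com  # placeholder host within the delegated zone
  providerRef:
    name: coredns-credentials  # the kuadrant.io/coredns provider secret
  endpoints:
  - dnsName: api.k.example.com
    recordType: A
    recordTTL: 60
    targets:
    - 172.18.0.16
```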
- Single cluster
  Organizations that want to self-host their DNS infrastructure without the complexity of multicluster coordination can use single-cluster CoreDNS integration. Using delegation is not required.
  A single cluster runs both the DNS Operator and CoreDNS with the plugin. CoreDNS only serves DNSRecord CRs that point to a CoreDNS provider secret. The CoreDNS plugin watches for DNS records labeled with the appropriate zone name and serves them directly. Any authoritative DNSRecord CR has endpoints from the single cluster.
- Multi-cluster delegation
Multiple clusters can participate in serving a single DNS zone through Kubernetes role-based delegation that enables geographic distribution of DNS services and high availability. This implementation enables workloads across multiple clusters to contribute DNS endpoints to a unified zone, with primary clusters maintaining the authoritative view. The role of a cluster is determined by the DNS Operator.
Multi-cluster delegation uses kubeconfig-based interconnection secrets that grant read access to DNSRecord resources across clusters. This approach reuses Kubernetes role-based access control (RBAC).

- Primary clusters: Run both the DNS Operator and CoreDNS and serve the DNS records that are local. The DNS Operator running on primary clusters reconciles delegating DNSRecord CRs by reading and merging them. Primary clusters then serve these authoritative DNSRecord CRs. Each CoreDNS instance serves the relevant authoritative DNSRecord for the configured zone. Each primary cluster can independently serve the complete record set.
- Secondary clusters: Only run the DNS Operator. These clusters create delegating DNSRecord CRs but do not interact with DNS providers directly. If the secret and subdomain are properly configured, these DNS records are automatically reconciled in the primary cluster.

- Zone labeling
  CoreDNS integration uses a label-based filtering mechanism. The DNS Operator applies a zone-specific label to DNSRecord CRs when the CRs are reconciled. The CoreDNS plugin only watches for DNSRecord CRs with labels that match configured zones. This method reduces resource use and provides clear zone boundaries.
- GEO and weighted routing
GEO and weighted routing use the same algorithmic approach as cloud providers. By using CoreDNS, you can have parity with cloud DNS provider capabilities and maintain full control over your DNS infrastructure.
- GEO routing: The CoreDNS geoip plugin uses geographical-database integration to return region-specific endpoints.
- Weighted routing: Applies probabilistic selection based on endpoint weights.
- Combined routing: First applies GEO filtering, then weighted selection within the matched region.
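These routing modes can be sketched in a few lines. The following Python model of GEO filtering followed by weighted selection is an illustration only; the endpoint data and region codes are invented, and the geoip plugin's real logic differs in detail:

```python
import random

def resolve(endpoints, client_region, rng=random.random):
    """Combined routing sketch: GEO filter first, then weighted selection."""
    # Keep only endpoints in the client's region; fall back to all endpoints.
    pool = [(target, weight) for target, weight, region in endpoints
            if region == client_region]
    if not pool:
        pool = [(target, weight) for target, weight, _ in endpoints]
    # Probabilistic selection: each endpoint wins in proportion to its weight.
    point = rng() * sum(weight for _, weight in pool)
    for target, weight in pool:
        point -= weight
        if point < 0:
            return target
    return pool[-1][0]

endpoints = [("10.0.0.1", 3, "EU"), ("10.0.0.2", 1, "EU"), ("10.0.1.1", 1, "US")]
print(resolve(endpoints, "EU", rng=lambda: 0.5))  # lands in the weight-3 EU endpoint
print(resolve(endpoints, "US", rng=lambda: 0.5))  # only one US endpoint matches
```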
2.3. CoreDNS DNS records security considerations
As an infrastructure engineer or business lead, you can implement several security best practices when using CoreDNS with Connectivity Link.
Zone configuration DNSRecord custom resources (CRs) have full control over a zone's name server (NS) records. Anyone who can create or change a DNSRecord CR that targets the root of the main domain name with NS records can decide where all zone traffic goes. Consider this as you plan your access controls.
For example, use the following access-control best practices:
- Separate namespaces: Keep zone configuration DNSRecord CRs in a dedicated, restricted namespace.
- Use least-privilege policies:
  - Strict RBAC: Only grant DNSRecord creation permissions to trusted infrastructure engineers and cluster administrators.
  - Namespace isolation: Grant application developers DNSRecord permissions only in their own namespaces.
- Audit logging: Enable Kubernetes audit logging to track all DNSRecord changes. CoreDNS audit logging is enabled by default for network troubleshooting and traffic pattern observability.
- Version control: Use a DNS-as-code approach. Store zone configuration DNSRecord CRs in Git and use standardized review processes.
You can use the following RBAC configuration example to get you started with defining access:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dns-zone-config-admin
  namespace: kuadrant-coredns
rules:
- apiGroups: ["kuadrant.io"]
  resources: ["dnsrecords"]
  verbs: ["create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dns-zone-config-admin-binding
  namespace: kuadrant-coredns
subjects:
- kind: User
  name: dns-admin@example.com # Only trusted administrators
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dns-zone-config-admin
  apiGroup: rbac.authorization.k8s.io
2.4. Using CoreDNS with a single cluster
You can use CoreDNS as a DNS provider for Connectivity Link in a single-cluster, on-premise environment. This integration allows Connectivity Link to manage DNS entries within your internal network infrastructure.
In a single-cluster setup, ensure that the endpoint IP address value you use is reachable from the kuadrant-system namespace. The default IP address, 10.96.0.10, is the internal cluster-wide DNS address.
Prerequisites
- Connectivity Link is installed on the OpenShift Container Platform cluster.
- The OpenShift CLI (oc) is installed.
- You have administrator privileges on the OpenShift Container Platform cluster.
- You are logged in to the cluster you want to configure.
- Your OpenShift Container Platform cluster supports the LoadBalancer service type and allows UDP and TCP traffic on port 53, for example by using MetalLB.
- You have access to configure your authoritative on-premise DNS server.
- Podman is installed.
Procedure
Set up your cluster. Set the following environment variables for your cluster context:
$ export CTX_PRIMARY=$(oc config current-context) \ export KUBECONFIG=~/.kube/config \ export PRIMARY_CLUSTER_NAME=local-cluster \ export ONPREM_DOMAIN=<onprem-domain> \ export KUADRANT_SUBDOMAIN=""
For the ONPREM_DOMAIN variable value, use your actual root domain. For the KUADRANT_SUBDOMAIN variable value, valid values are empty or kuadrant.

Extract the CoreDNS manifests from the dns-operator bundle by running the following commands:

$ podman create --name bundle registry.redhat.io/rhcl-1/dns-operator-bundle:rhcl-1.3.0
$ podman cp bundle:/coredns/manifests.yaml ./coredns-manifests.yaml
$ podman rm bundle
Apply the manifests to the cluster by running the following command:
$ oc apply -f ./coredns-manifests.yaml
Create a ConfigMap to define the authoritative zone for CoreDNS. This minimal configuration enables the kuadrant plugin and GeoIP features.

$ cat <<EOF | oc --context $CTX_PRIMARY apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-kuadrant-config
  namespace: kuadrant-coredns
data:
  Corefile: |
    ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}:53 {
        debug
        errors
        health {
            lameduck 5s
        }
        ready
        log
        geoip <GeoIP-database-name>.mmdb {
            edns-subnet
        }
        metadata
        kuadrant
    }
EOF

Note: For production or exact GeoIP routing, mount your licensed MaxMind GeoIP database into the CoreDNS pod and update the filename in the geoip parameter of data.Corefile.

Update the CoreDNS deployment to use the new configuration by running the following command:

$ oc --context $CTX_PRIMARY -n kuadrant-system patch deployment kuadrant-coredns --patch '{"spec":{"template":{"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"coredns-kuadrant-config","items":[{"key":"Corefile","path":"Corefile"}]}}]}}}}'

Set a watch-and-wait command for the deployment rollout to complete:
$ oc --context $CTX_PRIMARY -n kuadrant-system rollout status deployment/kuadrant-coredns
Example output
kuadrant-coredns successfully rolled out
Create the Kubernetes Secret that Connectivity Link uses to interact with CoreDNS. This secret specifies the zones this provider instance is authoritative for.

$ oc create secret generic coredns-credentials \
  --namespace=kuadrant-system \
  --type=kuadrant.io/coredns \
  --from-literal=ZONES="${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}" \
  --context ${CTX_PRIMARY}
Verification
Check the status of the DNSRecord CR by running the following commands:

$ oc get dnsrecord <name> -n <namespace> -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'

$ NS1=$(oc get svc kuadrant-coredns -n kuadrant-coredns -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ ROOT_HOST=$(oc get dnsrecord <name> -n <namespace> -o jsonpath='{.spec.rootHost}')
$ dig @${NS1} ${ROOT_HOST}

Expect the Ready condition to be True.
Troubleshooting
If the cause of a problem is unclear, view the logs for all CoreDNS pods by running the following command:
$ oc logs -n kuadrant-coredns deployment/kuadrant-coredns
If the DNSRecord is not appearing in the zone, verify that the record has the zone label by running the following command:

$ oc get dnsrecords.kuadrant.io -n dnstest -o jsonpath='{.items[*].metadata.labels}' | grep kuadrant.io/coredns-zone-name

The output should include the zone name, for example kuadrant.io/coredns-zone-name: k.example.com.

If the output does not show the zone name, check that the DNS Operator is running by using the following command:
$ oc get pods -n dns-operator-system
You can also check the DNS Operator logs by running the following command:
$ oc logs -n dns-operator-system deployment/dns-operator-controller-manager
Common issues include missing RBAC permissions and a missing GeoIP database:
- RBAC permissions missing: Check your ClusterRole and ClusterRoleBinding configurations.
- GeoIP database file not found: Ensure that your database is accessible.
Next steps
- Create DNSPolicy custom resources in your OpenShift Container Platform clusters, referencing the coredns-credentials secret as the provider. Connectivity Link manages DNS records within the delegated ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} zone through CoreDNS.
2.5. Using CoreDNS with primary and secondary clusters
You can use CoreDNS as a DNS provider for Connectivity Link in an existing multi-cluster, on-premise environment. This integration allows Connectivity Link to manage DNS entries within your internal network infrastructure.
Prerequisites
- Connectivity Link is installed on two separate OpenShift Container Platform clusters (primary and secondary).
- The OpenShift CLI (oc) is installed and configured for access to both clusters.
- You have administrator privileges on both OpenShift Container Platform clusters.
- Your OpenShift Container Platform clusters support the LoadBalancer service type and allow UDP and TCP traffic on port 53, for example by using MetalLB.
- You have access to configure your authoritative on-premise DNS server to delegate a subdomain.
- Podman is installed.
- jq is installed.
Procedure
Set up the primary cluster. Set the following environment variables for your primary cluster context:
$ export CTX_PRIMARY=<primary_cluster_context_name> \ export KUBECONFIG=~/.kube/config \ export PRIMARY_CLUSTER_NAME=<primary_cluster_name> \ export ONPREM_DOMAIN=<onprem-domain> \ export KUADRANT_SUBDOMAIN=<kuadrant>
- CTX_PRIMARY: Replace <primary_cluster_context_name> with the context name of the cluster that you are specifying as primary.
- KUBECONFIG: Adjust the path to your kubeconfig file as needed.
- PRIMARY_CLUSTER_NAME: Replace <primary_cluster_name> with the name of the cluster that you are specifying as primary.
- ONPREM_DOMAIN: Replace <onprem-domain> with your actual root domain.
- KUADRANT_SUBDOMAIN: List the subdomain to delegate.
Extract the CoreDNS manifests from the dns-operator bundle by running the following commands:

$ podman create --name bundle registry.redhat.io/rhcl-1/dns-operator-bundle:rhcl-1.3.0
$ podman cp bundle:/coredns/manifests.yaml ./coredns-manifests.yaml
$ podman rm bundle
Apply the manifests to the cluster by running the following command:
$ oc apply -f ./coredns-manifests.yaml
Wait for the CoreDNS service to get an external IP address. You need the IP address to configure delegation on your authoritative on-premise DNS server. Retrieve and store the IP address by running the following command:
$ export COREDNS_IP_PRIMARY=$(oc --context $CTX_PRIMARY -n kuadrant-system get service kuadrant-coredns -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo "CoreDNS Primary IP: ${COREDNS_IP_PRIMARY}"

Create a ConfigMap to define the authoritative zone for CoreDNS on the primary cluster. This minimal configuration enables the kuadrant plugin and GeoIP features.

$ cat <<EOF | oc --context $CTX_PRIMARY apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-kuadrant-config
  namespace: kuadrant-coredns
data:
  Corefile: |
    ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}:53 {
        debug
        errors
        health {
            lameduck 5s
        }
        ready
        log
        geoip <GeoIP-database-name>.mmdb {
            edns-subnet
        }
        metadata
        kuadrant
    }
EOF

Note: For production or exact GeoIP routing, mount your licensed MaxMind GeoIP database into the CoreDNS pod and update the filename in the geoip parameter of data.Corefile.

Update the CoreDNS deployment to use the new configuration by running the following command:
$ oc --context $CTX_PRIMARY -n kuadrant-system patch deployment kuadrant-coredns --patch '{"spec":{"template":{"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"coredns-kuadrant-config","items":[{"key":"Corefile","path":"Corefile"}]}}]}}}}'

Set a watch-and-wait command for the deployment rollout to complete by running the following command:
$ oc --context $CTX_PRIMARY -n kuadrant-system rollout status deployment/kuadrant-coredns
Example output
kuadrant-coredns successfully rolled out
Create the Kubernetes Secret that Connectivity Link uses to interact with CoreDNS. This secret specifies the zones this provider instance is authoritative for.

$ oc create secret generic coredns-credentials \
  --namespace=kuadrant-system \
  --type=kuadrant.io/coredns \
  --from-literal=ZONES="${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}" \
  --context ${CTX_PRIMARY}

On your authoritative on-premise DNS server, configure delegation for the ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} subdomain to the external IP addresses of the CoreDNS services running on your primary and secondary clusters, $COREDNS_IP_PRIMARY and $COREDNS_IP_SECONDARY. The specific steps depend on your DNS server software, for example, BIND or Windows DNS Server. You typically need to add name server (NS) records pointing the subdomain to the CoreDNS IP addresses.

Example delegation

; Delegate kuadrant.example.local to CoreDNS instances
$ORIGIN ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}.
@    IN    SOA    ns1.${ONPREM_DOMAIN}. hostmaster.${ONPREM_DOMAIN}. (
                  2023102601 ; serial
                  7200       ; refresh (2 hours)
                  3600       ; retry (1 hour)
                  1209600    ; expire (2 weeks)
                  3600       ; minimum (1 hour)
                  )
     IN    NS     coredns-primary.${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN}.
coredns-primary    IN    A    ${COREDNS_IP_PRIMARY}

Restart CoreDNS by running the following command:
$ oc -n kuadrant-coredns rollout restart deployment kuadrant-coredns
Note: After configuring delegation, you can test that DNS resolution for the delegated subdomain works correctly by querying your authoritative DNS server for a record within the kuadrant subdomain. One of the CoreDNS instances is expected to answer the query authoritatively.
Verification
Launch a temporary pod for testing by running the following command:
$ oc debug node/<node-name>

Replace <node-name> with the node you are testing on.

Add transfer to your Corefile by running the following command:

$ oc patch cm kuadrant-coredns -n kuadrant-coredns --type merge \
  -p "$(oc get cm kuadrant-coredns -n kuadrant-coredns -o jsonpath='{.data.Corefile}' | \
  sed 's/kuadrant/transfer {\n    to *\n  }\n  kuadrant/' | \
  jq -Rs '{data: {Corefile: .}}')"
$ dig @${EDGE_NS} -k config/bind9/ddns.key -t AXFR example.comExample output
example.com. 30 IN SOA example.com. root.example.com. 17 30 30 30 30 example.com. 30 IN NS ns.example.com. k.example.com. 300 IN NS ns1.k.example.com. ns1.k.example.com. 300 IN A 172.18.0.16 ns.example.com. 30 IN A 127.0.0.1 example.com. 30 IN SOA example.com. root.example.com. 17 30 30 30 30
In this example,
k.exampleis the delegated zone andns1.k.exampleis the primary zone.Optional. Remove the
transferfrom your Corefile by running the following command:$ oc patch cm kuadrant-coredns -n kuadrant-coredns --type merge \ -p "$(oc get cm kuadrant-coredns -n kuadrant-coredns -o jsonpath='{.data.Corefile}' | \ sed '/transfer {/,/}/d' | \ jq -Rs '{data: {Corefile: .}}')"Verify the start of authority (SOA) record for the delegated zone by running the following command:
$ dig @${EDGE_NS} soa k.example.comExample output
;; ANSWER SECTION:
k.example.com.  60  IN  SOA ns1.k.example.com. hostmaster.k.example.com. 12345 7200 1800 86400 60
The SOA record is expected to show the primary name server (NS) as confirmation that CoreDNS is responding authoritatively. In this example, the primary NS is ns1.k.example.com.
Next steps
- Create DNSPolicy resources in your OpenShift Container Platform clusters, referencing the coredns-credentials secret as the provider. Connectivity Link manages DNS records within the delegated ${KUADRANT_SUBDOMAIN}.${ONPREM_DOMAIN} zone through CoreDNS.
2.6. CoreDNS Corefile configuration reference
A Corefile is organized into server blocks that define how DNS queries are handled based on the port and zone. Plugin execution order is determined at build time, not by Corefile order, so you can list plugins in any order. When making configurations by using the DNS Operator, you can check the ConfigMap for the resulting server block.
Connectivity Link includes a minimal Corefile that you can update for your uses:
Minimal Corefile
Corefile: |
. {
health
ready
}

For a Corefile with configurations, see the following example:
Example configured Corefile
k.example.com {
debug
errors
log
health {
lameduck 5s
}
ready
geoip GeoLite2-City-demo.mmdb {
edns-subnet
}
metadata
transfer {
to *
}
kuadrant
prometheus 0.0.0.0:9153
}

- Zone coordination
Each zone in the Corefile must match a zone listed in your CoreDNS provider secret's ZONES field.
- Required plugins
The geoip and metadata plugins are included by default with the Connectivity Link implementation of the CoreDNS Corefile.
After you update your Corefile, you must restart the pods for the CoreDNS deployment. You can use the following command:

$ oc rollout restart deployment/kuadrant-coredns -n kuadrant-system
You can check the status of the rollout by running the following command:
$ oc rollout status deployment/kuadrant-coredns -n kuadrant-system --watch
2.6.1. Default enabled plugins in CoreDNS
The following plugins are enabled by default in the Connectivity Link CoreDNS deployment. For any other plugins that you want to add, you must ensure CoreDNS compatibility and enable them yourself.
| Plugin | Function |
|---|---|
| acl | Enforces access control policies on source IP addresses and prevents unauthorized access to DNS servers. |
| cache | Enables a front-end cache. |
| cancel | Cancels a request’s context after 5001 milliseconds. |
| debug | Disables the automatic recovery when a crash happens so that a stack trace is generated. |
| errors | Enables error logging. |
| file | Enables serving zone data from an RFC 1035-style master file. |
| forward | Proxies DNS messages to upstream resolvers. |
| geoip | Looks up the geographical location of the client IP address in a MaxMind GeoIP2 database. |
| header | Modifies the header for queries and responses. |
| health | Enables a health check endpoint. |
| hosts | Enables serving zone data from an /etc/hosts-style file. |
| kuadrant | Enables serving zone data from kuadrant DNSRecord custom resources. |
| local | Responds with a basic reply to local names, such as localhost and the reverse localhost zones. |
| log | Enables query logging to standard output. Logs are structured for aggregation by cluster logging solutions. |
| loop | Detects simple forwarding loops and halts the server. |
| metadata | Enables a metadata collector. |
| minimal | Minimizes size of the DNS response message whenever possible. |
| nsid | Adds an identifier of this server to each reply. |
| prometheus | Enables Prometheus metrics. The default listens on localhost:9153. |
| ready | Enables a readiness check HTTP endpoint. |
| reload | Allows automatic reload of a changed Corefile. |
| rewrite | Performs rewrites on incoming queries and outgoing responses. |
| root | Specifies the root directory where zone files are found. |
| secondary | Enables serving a zone retrieved from a primary server. |
| timeouts | Configures the server read, write, and idle timeouts for the TCP, TLS, DoH, and DoQ (idle only) servers. |
| tls | Configures the server certificates for the TLS, gRPC, and DoH servers. |
| transfer | Performs outgoing zone transfers for other plugins. |
| view | Defines the conditions that must be met for a DNS request to be routed to the server block. |
| whoami | Returns your resolver’s local IP address, port and transport. |
When using CoreDNS, if you do not need to keep all logs, you can configure the log plugin to report only errors and use the prometheus plugin to gather primary metrics instead. Prometheus metrics give you trends, for example, how many queries failed, without storing every single piece of traffic.
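For example, a server block that drops per-query logging in favor of metrics might look like the following sketch; the zone name is a placeholder:

```
k.example.com:53 {
    errors                   # report errors only; no per-query log plugin
    prometheus 0.0.0.0:9153  # expose query and error counters as metrics
    metadata
    kuadrant
}
```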
2.7. Troubleshooting CoreDNS with the kuadrant plugin
You can troubleshoot your CoreDNS deployment by restarting CoreDNS and by checking the logs. Use the following commands as needed to investigate your specific errors:
Restart CoreDNS by using the following command:
$ oc -n kuadrant-coredns rollout restart deployment kuadrant-coredns
You can view CoreDNS logs by running the following command:
$ oc logs -f deployments/kuadrant-coredns -n kuadrant-coredns
You can get recent logs by running the following command:
$ oc logs --tail=100 deployments/kuadrant-coredns -n kuadrant-coredns
2.8. CoreDNS removal or migration
You can remove your CoreDNS integration by deleting the CoreDNS deployment and your DNS policies. To migrate to a different provider, delete existing DNSPolicy CRs and re-create them with the new provider secret reference. No data is permanently locked into CoreDNS.