About OpenShift Serverless
Introduction to OpenShift Serverless
Abstract
Chapter 1. Release notes
Release notes contain information about new features, deprecated features, breaking changes, and known issues. The following release notes apply to the most recent OpenShift Serverless releases on OpenShift Container Platform.
1.1. About API versions
API versions indicate the development status of features and custom resources in OpenShift Serverless. Using an incorrect API version when creating resources on a cluster can cause deployment issues.
The OpenShift Serverless Operator upgrades older resources that use deprecated API versions to the latest version. For example, if you create resources that use older versions of the ApiServerSource API, such as v1beta1, the OpenShift Serverless Operator updates those resources to v1 when that version becomes available and v1beta1 becomes deprecated.
After deprecation, future releases might remove older API versions. Deprecated APIs continue to work and do not cause resources to fail. However, using an API version that no longer exists causes resource failures. Update your manifests to the latest API version to avoid issues.
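For illustration, a minimal manifest that pins the current API version might look like the following sketch; the resource names, namespace, service account, and sink are placeholders:

```yaml
# Sketch only: names, namespace, and sink are illustrative placeholders.
apiVersion: sources.knative.dev/v1   # use the latest version rather than a deprecated one such as v1beta1
kind: ApiServerSource
metadata:
  name: example-apiserversource
  namespace: example-namespace
spec:
  serviceAccountName: events-sa      # placeholder service account with permission to watch resources
  mode: Resource
  resources:
    - apiVersion: v1
      kind: Event
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display            # placeholder sink service
```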
1.2. Generally Available and Technology Preview features
Features that are Generally Available (GA) are fully supported and are suitable for production use. Technology Preview (TP) features are experimental features and are not intended for production use. See the Technology Preview scope of support on the Red Hat Customer Portal for more information about TP features.
The following table provides information about which OpenShift Serverless features are GA and which are TP:
Table 1.1. Generally Available and Technology Preview features tracker
| Feature | 1.36 | 1.37 |
|---|---|---|
| Authorization policies for Knative Eventing | TP | TP |
| Service Mesh 3.x integration | - | TP |
| IntegrationSource and IntegrationSink | TP | TP |
| Automatic EventType discovery and registration | TP | TP |
| EventTransform | TP | TP |
| Eventing Transport encryption | GA | GA |
| Serving Transport encryption | TP | TP |
| ARM64 support | GA | GA |
| Custom Metrics Autoscaler Operator (KEDA) | TP | TP |
| kn event plugin | GA | GA |
| Pipelines-as-code | TP | TP |
| Advanced trigger filters | GA | GA |
| Go function using S2I builder | GA | GA |
| Installing and using Serverless on single-node OpenShift | GA | GA |
| Using Service Mesh to isolate network traffic with Serverless | TP | TP |
| Overriding | GA | GA |
| | GA | GA |
| Quarkus functions | GA | GA |
| Node.js functions | GA | GA |
| TypeScript functions | GA | GA |
| Python functions | TP | GA |
| Service Mesh mTLS | GA | GA |
| | GA | GA |
| HTTPS redirection | GA | GA |
| Kafka broker | GA | GA |
| Kafka sink | GA | GA |
| Init containers support for Knative services | GA | GA |
| PVC support for Knative services | GA | GA |
| | GA | GA |
1.3. Deprecated and removed features
Earlier releases introduced some features as Generally Available (GA) or Technology Preview (TP). OpenShift Serverless has now deprecated or removed some of these features. OpenShift Serverless still includes and supports deprecated functionality, but a future release will remove it. Do not use deprecated features for new deployments.
For the most recent list of major functionality deprecated and removed within OpenShift Serverless, see the following table:
Table 1.2. Deprecated and removed features tracker
| Feature | 1.36 | 1.37 |
|---|---|---|
| Knative client | Deprecated | Deprecated |
| EventTypes | Deprecated | Deprecated |
| | Removed | Removed |
| Red Hat OpenShift Service Mesh with Serverless when Kourier is enabled | Deprecated | Deprecated |
| Namespace-scoped Kafka brokers | Deprecated | Deprecated |
| | Deprecated | Deprecated |
| Serving and Eventing | Removed | Removed |
| | Removed | Removed |
| | Removed | Removed |
1.4. Red Hat OpenShift Serverless 1.37.2
OpenShift Serverless Logic 1.37.2 is now available. This release addresses identified Common Vulnerabilities and Exposures (CVEs) to enhance security and reliability. The following notes describe fixed issues that affect OpenShift Serverless Logic on OpenShift Container Platform.
1.4.1. Fixed issues
- Support for JSON schema files larger than 65 KB in SonataFlow workflows
  Before this update, OpenShift Serverless Logic loaded JSON schema input validation files as strings during the build process. The system assumed that schema files would not exceed 65 KB. As a consequence, workflows failed to process schema files larger than 65 KB. With this release, the workflow build process supports larger JSON schema files. As a result, workflows can use JSON schema files that exceed the earlier 65 KB limitation.
1.5. Red Hat OpenShift Serverless 1.37.1
OpenShift Serverless 1.37.1 is now available. This release of OpenShift Serverless addresses identified Common Vulnerabilities and Exposures (CVEs) to enhance security and reliability. Fixed issues and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes:
1.5.1. New features
- OpenTelemetry support for workflow observability in OpenShift Serverless Logic
- OpenShift Serverless Logic now includes OpenTelemetry support to provide observability for workflow executions. This enhancement enables users to collect and export telemetry data for improved monitoring and tracing of workflows.
1.5.2. Fixed issues
- 3scale Kourier gateway crash due to file descriptor limits on OpenShift Container Platform 4.21
  Before this update, OpenShift Container Platform 4.21 and later used a reduced default soft limit for the number of open files when running the 3scale Kourier gateway with OpenShift Serverless 1.37.0 and earlier. As a consequence, the 3scale Kourier gateway could crash with the socket(2) failed, got error: Too many open files error. With this release, the 3scale Kourier gateway deployment sets the soft limit for the maximum number of open files to the value of the hard limit. As a result, the 3scale Kourier gateway no longer crashes due to file descriptor limits on OpenShift Container Platform 4.21.
- Removal of unused webhook server initialization in OpenShift Serverless Logic Operator
  Before this update, the OpenShift Serverless Logic Operator initialized webhook server code that the Operator did not use. As a consequence, the Operator included unnecessary initialization logic in its startup sequence. With this release, the OpenShift Serverless Logic Operator removes the unused webhook server initialization code. As a result, the Operator startup process no longer includes unused webhook server components.
- Incorrect Hibernate schema initialization option in Data Index PostgreSQL deployments
  Before this update, the OpenShift Serverless Logic Operator configured Data Index PostgreSQL deployments with the QUARKUS_HIBERNATE_ORM_DATABASE_GENERATION=update environment variable, resulting in an unintended database schema generation strategy. With this release, the Operator no longer sets this environment variable for Operator-managed Data Index deployments. As a result, Data Index now uses the correct schema initialization configuration.
- Variables and metadata queries restored in OpenShift Serverless Logic GraphQL
  Before this update, OpenShift Serverless Logic 1.37 did not support variables and metadata queries in GraphQL because JSON query capability was missing. As a consequence, GraphQL queries that relied on variables and metadata failed. With this release, OpenShift Serverless Logic introduces JSON query capability for GraphQL. As a result, variables and metadata queries now function as expected.
1.5.3. Known issues
- Python runtime limited to 1024 open files on OpenShift Container Platform 4.21
  On OpenShift Container Platform 4.21, the default ulimit -n soft limit for the number of open files is set to 1024, reduced from 1048576 in earlier OpenShift Container Platform releases up to version 4.20. The hard limit remains 524288. As a result, Serverless functions that use the Python runtime cannot open more than 1024 file descriptors, such as open files or TCP sockets.
1.6. Red Hat OpenShift Serverless 1.37
OpenShift Serverless 1.37 is now available. New features, updates, fixed issues, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes:
1.6.1. New features
1.6.1.1. OpenShift Serverless Eventing
- OpenShift Serverless now uses Knative Eventing 1.17.
- OpenShift Serverless now uses Knative for Apache Kafka 1.17.
- Knative Eventing now supports the ability to define authorization policies that restrict which entities can send events to Eventing custom resources. This enables greater control and security within event-driven architectures. This functionality is now available as a Technology Preview feature.
1.6.1.2. OpenShift Serverless Serving
- OpenShift Serverless now uses Knative Serving 1.17.
- OpenShift Serverless now uses Kourier 1.17.
- OpenShift Serverless now uses Knative (kn) CLI 1.17.
- Integration with Red Hat OpenShift Service Mesh 3.x is now available as a Technology Preview feature.
1.6.1.3. OpenShift Serverless Functions
- The kn func CLI plugin now uses func 1.17.
- The Python runtime for OpenShift Serverless Functions is now Generally Available (GA).
- The Func MCP server is now available as a Developer Preview feature.
1.6.1.4. OpenShift Serverless Logic
OpenShift Serverless Logic introduces a new Data Index mutation named ExecuteAfter, which enables you to create and execute a new workflow instance that can reuse the output of a previously completed workflow as its input.
The ExecuteAfter mutation accepts the following arguments:
- processId: Specifies the process ID of the workflow definition to execute.
- processVersion: Specifies the process version of the workflow definition to execute.
- completedInstanceId (optional): Specifies the ID of a previously completed workflow whose output serves as input for the new workflow instance.
- input (optional): Specifies additional input data, which the system merges with the output of the completedInstanceId, if you provide it.
- excludeProperties (optional): Specifies the list of properties that the system does not copy from the completedInstanceId output into the new workflow instance input.
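As an illustration of the call shape, the following is a hedged sketch of an ExecuteAfter mutation against the Data Index GraphQL endpoint. The process ID, instance ID, input fields, and excluded property names are hypothetical, and the exact return selection depends on the Data Index schema:

```graphql
# Sketch only: IDs, input fields, and property names are illustrative.
mutation {
  ExecuteAfter(
    processId: "onboarding"
    processVersion: "1.0"
    completedInstanceId: "5f2c1f4e-0000-0000-0000-000000000000"
    input: { requestedBy: "jdoe" }
    excludeProperties: ["internalAuditTrail"]
  )
}
```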
- OpenShift Serverless Logic container images now use the RHEL UBI 9 parent image, aligning the Serverless Logic components with the RHEL 9 runtime environment.
- OpenShift Serverless Logic Operator now uses a simplified ClusterServiceVersion (CSV) naming schema that removes the rhel9 suffix. For example, the CSV name is now logic-operator.v1.37.0.
- OpenShift Serverless Logic now provides a landing web application for the Jobs Service and Data Index services, offering users a centralized entry point to access and explore these services.
- OpenShift Serverless Logic now supports dynamic URLs and security configurations for OpenAPI function calls in workflows, enabling workflows to adapt securely to different environments and endpoints.
- OpenShift Serverless Logic now includes a new guide that explains how to configure Maven mirrors in builder and devmode images.
- OpenShift Serverless Logic now supports token exchange by introducing JSON Web Token (JWT) token parsing for SonataFlow workflows. This feature adds a new Quarkus add-on, sonataflow-addons-quarkus-jwt-parser, which enables workflows to parse JWT tokens and extract user claims to generate personalized responses.
1.6.2. Fixed issues
1.6.2.1. OpenShift Serverless Eventing
- Before this update, the KafkaSource dispatcher stopped committing offsets when the offsets of produced events were not consecutive integers, for example, when events were produced within a Kafka transaction. This behavior caused the dispatcher to stall and prevented subsequent events from being processed.
  With this update, the KafkaSource dispatcher has been fixed to handle such empty offsets correctly. Additionally, the default Kafka consumer configuration for KafkaSource has been updated to isolation.level=read_committed. When Kafka transactions are used to produce events into a Kafka topic, the KafkaSource now processes only the events from committed transactions.
1.6.2.2. OpenShift Serverless Logic
- Before this update, the Sleep state removed tokens from the workflow context when used within a sub-flow. This issue is now fixed, ensuring that tokens remain available throughout the workflow execution.
- Before this update, converting a project to a Quarkus project using the Kn workflow plugin generated incorrect Maven repositories. This issue is now fixed, and the conversion process generates the correct Maven repositories.
- Before this update, the OpenShift Serverless Logic Builder image downloaded plexus-utils version 1.1. This issue is now fixed, and the Builder image no longer downloads this dependency.
1.6.3. Known issues
1.6.3.1. OpenShift Serverless Eventing
- The EventTransform custom resource definition (CRD) is currently not compatible with Red Hat OpenShift Service Mesh. The EventTransform resource does not provide a way to configure Istio-specific labels or annotations required for integration with Red Hat OpenShift Service Mesh. As a result, the EventTransform component cannot function properly in environments where Red Hat OpenShift Service Mesh is enabled.
1.6.3.2. OpenShift Serverless Serving
- In some cases, cluster-scoped resources such as webhook configurations are not removed during the uninstallation, reinstallation, or upgrade of the KnativeServing or Serverless Operator components. When this occurs, the reconciliation of KnativeServing fails, and the installation process becomes stuck with an error similar to the following example:
  failed to apply non rbac manifest: Internal error occurred: failed calling webhook "webhook.serving.knative.dev": failed to call webhook: Post "https://webhook.knative-serving.svc:443/?timeout=10s": no endpoints available for service "webhook"
- When the serving.knative.openshift.io/disableRoute=true annotation is applied to a Knative Service, the service displays an invalid URL in the .status.url field. The URL shown does not resolve to the Knative Service and can be misleading. Additionally, both the OpenShift Console UI and the Knative client (kn) CLI display this invalid address in multiple locations. The corresponding Knative Route is also created, and its .status.url field contains the same invalid URL.
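For reference, the annotation in question is applied to the Knative Service metadata; the following sketch uses a placeholder service name and an illustrative image reference:

```yaml
# Sketch only: service name and image are illustrative placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
  annotations:
    serving.knative.openshift.io/disableRoute: "true"   # the .status.url of this service may show an unresolvable address
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/app:latest
```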
1.6.3.3. OpenShift Serverless Functions
- Some operations of the OpenShift Serverless Function MCP server, such as build and deploy, fail when triggered from the Cursor IDE using its built-in agent. When invoking these operations, the Cursor agent sends a malformed request for any optional parameters. Although the parameter values appear correctly formatted, for example, "quay.io/myuser", the OpenShift Serverless Function MCP API returns the following error message:
  Error calling tool: Parameter 'optionalStr' must be of type null,string, got string
1.6.3.4. Knative client (kn) CLI
- As of the OpenShift Serverless 1.37 release, the kn client is built with RHEL 9 dependencies and cannot run on RHEL 8. Attempting to run the binary on RHEL 8 displays an error similar to the following:
  kn: /lib64/libc.so.6: version `GLIBC_2.33' not found (required by kn)
- As of the OpenShift Serverless 1.37 release, the kn client binary downloaded from the Command Line Tools page in the OpenShift Container Platform web console is not signed with the Red Hat certificate for macOS and Windows platforms. This issue affects the binaries available directly through the OpenShift Container Platform console. To obtain properly signed binaries, download them from the official OpenShift Serverless downloads mirror at mirror.openshift.com instead.
1.6.3.5. OpenShift Serverless Logic
- In disconnected cluster environments, the logic-swf-builder-rhel9 image attempts to download the plexus-utils-1.1.jar dependency during the build process. As external network access is restricted in disconnected setups, this behavior can result in build failures or timeouts.
- If you apply a SonataFlow custom resource (CR) to an OpenShift cluster and the first SonataFlowBuild fails for any reason, the Operator does not create the workflow deployment even after the build issue is resolved. As a result, the workflow remains undeployed until you manually reapply or rebuild it.
1.7. Red Hat OpenShift Serverless 1.36.1
OpenShift Serverless 1.36.1 is now available. This release of OpenShift Serverless addresses identified Common Vulnerabilities and Exposures (CVEs) to enhance security and reliability. Fixed issues and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes:
1.7.1. Fixed issues
- Before this update, the OpenShift Serverless Functions client failed to build remotely with Red Hat OpenShift Pipelines version 1.19, causing pipeline runs to remain in the Pending state on the fetch-sources task and report admission webhook errors. With this release, the issue is resolved, and remote builds complete successfully.
1.7.2. Known issues
- Deploying a Quarkus function with the kn func deploy --remote command on an OpenShift Container Platform s390x cluster triggers a known issue that causes the build task to hang. As a result, the build process does not complete.
1.8. Red Hat OpenShift Serverless 1.36
OpenShift Serverless 1.36 is now available. New features, updates, fixed issues, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes:
1.8.1. New features
1.8.1.1. OpenShift Serverless Eventing
- OpenShift Serverless now uses Knative Eventing 1.16.
- OpenShift Serverless now uses Knative for Apache Kafka 1.16.
- IntegrationSource and IntegrationSink are now available as a Technology Preview. These are Knative Eventing custom resources that support selected Kamelets from the Apache Camel project. Kamelets enable you to connect to third-party systems for improved connectivity, acting as either sources (event producers) or sinks (event consumers).
- Knative Eventing can now automatically discover and register EventTypes based on the structure of incoming events. This feature simplifies the configuration and management of EventTypes, reducing the need for manual definitions. This feature is available as a Technology Preview.
- OpenShift Serverless Eventing introduces EventTransform, a new API resource that you can use to declaratively transform JSON events without writing custom code. With EventTransform, you can modify attributes, extract or reshape data, and streamline event flows across systems. Common use cases include event enrichment, format conversion, and request-response transformation. EventTransform integrates seamlessly with Knative sources, triggers, and brokers, enhancing interoperability in event-driven architectures. This feature is now available as a Technology Preview.
  See the following key features of EventTransform:
  - Define transformations declaratively using Kubernetes-native resources
  - Use JSONata expressions for advanced and flexible event data manipulation
  - Easily insert transformations at any point within event-driven workflows
  - Support for transforming both sink-bound and reply events for better routing control
- The sinks.knative.dev API group has now been added to the ClusterRoles namespace in Knative Eventing. Developers now have permissions to get, list, and watch resources in this API group, improving accessibility and integration with sink resources.
- Transport encryption for Knative Eventing is now available as a Generally Available (GA) feature.
- Knative Eventing now supports the ability to define authorization policies that restrict which entities can send events to Eventing custom resources. This enables greater control and security within event-driven architectures. This functionality is available as a Developer Preview.
- Knative Eventing catalog is now integrated into the Red Hat Developer Hub through the Event Catalog plugin for Backstage. This integration enables users to discover and explore Knative Eventing resources directly within the Red Hat Developer Hub interface. This functionality is available as a Developer Preview.
- The KafkaSource API has now been promoted to version v1, signaling its stability and readiness for production use.
- OpenShift Serverless now supports deployment on ARM architecture as a Generally Available (GA) feature.
- The kn event plugin is now available as a GA feature. You can use this plugin to send events directly from the command line to various destinations, streamlining event-driven application development and testing workflows.
1.8.1.2. OpenShift Serverless Serving
- OpenShift Serverless now uses Knative Serving 1.16.
- OpenShift Serverless now uses Kourier 1.16.
- OpenShift Serverless now uses Knative (kn) CLI 1.16.
1.8.1.3. OpenShift Serverless Functions
- The kn func CLI plugin now uses func 1.16.
- OpenShift Serverless Functions support integration with Cert Manager, enabling automated certificate management for the function workloads. This functionality is available as a Developer Preview.
1.8.1.4. OpenShift Serverless Logic
- When starting a workflow via HTTP, you can now include additional properties alongside the workflowdata field in the request body. The runtime ignores these extra fields, but they are available in the Data Index as process variables, as shown in the following example:
  {"workflowdata": {"name": "John"}, "groupKey": "follower"}
- You can now filter workflow instances by the content of workflow variables by using GraphQL queries on ProcessInstances.variables. For example, the following query retrieves process instances where the language field in workflowdata equals Spanish:
  ProcessInstances (where: {variables: {workflowdata: {language: {equal: "Spanish"}}}}) { variables, state, lastUpdate, nodes { name } }
- OpenShift Serverless Logic Data Index now supports filtering queries by using workflow definition metadata.
- OpenShift Serverless Logic Operator now emits events to the Data Index to indicate when a workflow definition becomes available or unavailable.
1.8.2. Fixed issues
1.8.2.1. OpenShift Serverless Eventing
- Previously, the Knative Kafka dispatcher could stop consuming events if a Kafka consumer group rebalance occurred while a sink was processing events out of order. This behavior triggered the following errors:
  - SEVERE: Unhandled exception
  - java.lang.IndexOutOfBoundsException: bitIndex < 0
  - Repeated logs like Request joining group due to: group is already rebalancing
  This issue is now fixed. The dispatcher correctly handles out-of-order event consumption during rebalancing and continues processing events without interruption.
- Previously, a KafkaSource remained in a Ready state even when KafkaSource.spec.net.tls.key failed to load due to the use of unsupported TLS certificates in PKCS #1 format. This issue is now fixed. An appropriate error is now reported when attempting to create a KafkaBroker, KafkaChannel, KafkaSource, or KafkaSink using TLS certificates in an unsupported format.
1.8.3. Known issues
1.8.3.1. OpenShift Serverless Logic
- If the swf-dev-mode image is started with a broken or invalid workflow definition, the container might enter a stuck state.
- When deploying a workflow in the preview profile on OpenShift Container Platform, if the initial build fails and is later corrected, the Operator does not create the corresponding workflow deployment. As a result, the deployment remains missing and the SonataFlow status is not updated, even after the build is fixed.
- The OpenShift Serverless Logic builder image consistently downloads the plexus-utils-1.1 artifact during the build process, regardless of local caching or dependency resolution settings.
- When running images in disconnected or restricted network environments, the Maven wrapper might experience timeouts while attempting to download required components.
- The openshift-serverless-1/logic-swf-builder-rhel8:1.35.0 and openshift-serverless-1/logic-swf-builder-rhel8:1.36.0 images are currently downloading the persistence extensions from Maven during the build process.
1.9. Additional resources
Chapter 2. OpenShift Serverless overview
OpenShift Serverless provides Kubernetes-native building blocks for creating and deploying serverless, event-driven applications on OpenShift Container Platform. These applications scale up and down (to zero) on-demand and respond to events from several sources. OpenShift Serverless uses the open source Knative project to deliver portability and consistency across hybrid and multicloud environments.
The following sections describe the core components of OpenShift Serverless:
2.1. About Knative Serving
Knative Serving builds on Kubernetes to support deploying and serving applications and functions as serverless containers. Serving simplifies application deployment, dynamically scales based on incoming traffic, and supports custom rollout strategies with traffic splitting.
Knative Serving includes the following features:
- Simplified deployment of serverless containers
- Traffic-based auto-scaling, including scale-to-zero
- Routing and network programming
- Point-in-time application snapshots and their configurations
2.2. About Knative Eventing
Knative Eventing provides a platform that offers composable primitives to enable late-binding event sources and event consumers.
Knative Eventing supports the following architectural cloud-native concepts:
- Services are loosely coupled during development and deployed independently to production.
- A producer can generate events before a consumer starts listening, and a consumer can express interest in events or event types that no producer generates yet.
- You can connect services to create new applications without modifying the producer or consumer, and select a specific subset of events from a particular producer.
2.3. About OpenShift Serverless Functions
You can write OpenShift Serverless Functions and deploy them as Knative Services, using Knative Serving and Eventing.
OpenShift Serverless Functions includes the following features:
Support for the following build strategies:
- Source-to-Image (S2I)
- Buildpacks
- Multiple runtimes
-
Local developer experience through the Knative (
kn) CLI - Project templates
-
Support for receiving
CloudEventsand plain HTTP requests
2.4. About OpenShift Serverless Logic
With OpenShift Serverless Logic, you define declarative workflow models by using YAML or JSON files to orchestrate event-driven, serverless applications. You can visualize workflow execution to simplify debugging and optimization. Built-in error handling and fault tolerance help you manage errors and exceptions during workflow execution.
OpenShift Serverless Logic implements the Cloud Native Computing Foundation (CNCF) Serverless Workflow specification.
2.5. About Knative CLI
You can use the Knative (kn) CLI to create Knative resources from the command line or within shell scripts. Its extensive help pages and autocompletion reduce the need to memorize detailed Knative resource schemas.
The Knative (kn) CLI includes the following features:
Table 2.1. The Knative (kn) CLI features
| Category | Features |
|---|---|
| Knative Serving | Services |
| Knative Eventing | Sources |
| Extensibility | Plugin architecture based on the Kubernetes (kubectl) CLI |
| Integration | Integration of Knative into Tekton pipelines |
2.6. Additional resources
- What is serverless?
- Extending the Kubernetes API with custom resource definitions
- Managing resources from custom resource definitions
- Knative project
- Serverless Operator Life Cycles
- CNCF Serverless Workflow specification
Chapter 3. Knative Serving overview
Knative Serving helps developers create, deploy, and manage cloud-native applications. It provides Kubernetes custom resource definitions (CRDs) that define and control serverless workloads on an OpenShift Container Platform cluster. Developers use these CRDs to create custom resources (CRs) as building blocks for complex use cases such as rapidly deploying serverless containers or automatically scaling pods.
3.1. Knative Serving resources
Knative Serving defines a set of resources that manage the lifecycle, configuration, and traffic routing of serverless applications on a Kubernetes cluster.
- Service
  The service.serving.knative.dev custom resource definition (CRD) manages the lifecycle of your workload and ensures that the application runs and remains reachable through the network. It creates a route, a configuration, and a new revision for each change to a user-created service, or custom resource. Developers interact with Knative primarily by modifying services.
- Revision
  The revision.serving.knative.dev CRD represents a point-in-time snapshot of the code and configuration for each modification to the workload. Revisions are immutable objects, and you can retain them if needed.
- Route
  The route.serving.knative.dev CRD maps a network endpoint to one or more revisions. You can manage the traffic in several ways, including fractional traffic and named routes.
- Configuration
  The configuration.serving.knative.dev CRD maintains the required state for your deployment. It provides a clean separation between code and configuration. Modifying a configuration creates a new revision.
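The resources above are typically created indirectly: applying a single Service manifest causes Knative to generate the corresponding configuration, route, and revisions. A minimal sketch, assuming a placeholder name and image:

```yaml
# Sketch only: name and image are illustrative placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello          # Knative derives the Route, Configuration, and Revisions from this Service
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/hello:latest
          env:
            - name: TARGET
              value: "World"
```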
3.2. Additional resources
Chapter 4. Knative Eventing overview
Knative Eventing on OpenShift Container Platform enables developers to use an event-driven architecture with serverless applications. An event-driven architecture is based on the concept of decoupled relationships between event producers and event consumers.
Event producers create events, and event sinks, or consumers, receive events. Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks. These events conform to the CloudEvents specifications, which enables creating, parsing, sending, and receiving events in any programming language.
4.1. Knative Eventing use cases
Knative Eventing supports common event-driven use cases, including publishing and consuming events independently. It also introduces generic resource interfaces that define how components receive, process, and route events within the system.
Knative Eventing supports the following use cases:
- Publish an event without creating a consumer
- You can send events to a broker as an HTTP POST and use binding to decouple the destination configuration from your application that produces events.
- Consume an event without creating a publisher
- You can use a trigger to consume events from a broker based on event attributes. The application receives events as an HTTP POST.
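The consumer side can be sketched as a Trigger that subscribes a service to events from a broker, filtered by CloudEvent attributes; the broker name, event type, and subscriber below are illustrative assumptions:

```yaml
# Sketch only: broker, event type, and subscriber are illustrative.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: orders-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created   # deliver only events with this CloudEvent type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor             # placeholder consumer service
```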
To enable delivery to many types of sinks, Knative Eventing defines the following generic interfaces that many Kubernetes resources can implement:
- Addressable resources
  Able to receive and acknowledge an event delivered over HTTP to an address defined in the status.address.url field of the resource. The Kubernetes Service resource also satisfies the addressable interface.
- Callable resources
  Able to receive an event delivered over HTTP and transform it, returning 0 or 1 new events in the HTTP response payload. The system can process these returned events further in the same way it processes events from an external event source.
4.2. Using the Knative broker for Apache Kafka
The Knative broker implementation for Apache Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities.
Knative broker for Apache Kafka provides additional options, such as:
- Kafka source
- Kafka channel
- Kafka broker
- Kafka sink
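As one example of these capabilities, a Kafka source forwards records from Kafka topics to a sink. The following sketch assumes placeholder bootstrap servers, topic, consumer group, and sink names, and uses the v1 API version that the 1.36 release notes describe as promoted:

```yaml
# Sketch only: bootstrap servers, topic, consumer group, and sink are illustrative.
apiVersion: sources.knative.dev/v1
kind: KafkaSource
metadata:
  name: example-kafka-source
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka.svc:9092
  topics:
    - orders
  consumerGroup: example-consumer-group
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```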
4.3. Additional resources
Chapter 5. OpenShift Serverless functions overview
Developers use OpenShift Serverless Functions to create and deploy stateless, event-driven functions as Knative Services on OpenShift Container Platform. The Knative kn CLI includes the kn func plugin. You can use the kn func CLI to create, build, and deploy container images as Knative Services on the cluster.
OpenShift Serverless Functions provides templates for creating basic functions in Quarkus, Node.js, and TypeScript runtimes.
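A typical flow with the kn func plugin looks similar to the following sketch; the function name and runtime shown here are placeholders:

```shell
# Create a Node.js function project in a new directory
$ kn func create -l node my-function

# Build and deploy it as a Knative service on the cluster
$ cd my-function
$ kn func deploy
```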
5.1. Additional resources
Chapter 6. OpenShift Serverless Logic overview
OpenShift Serverless Logic enables developers to define declarative workflow models that orchestrate event-driven, serverless applications.
You can write the workflow models in YAML or JSON format, which are ideal for developing and deploying serverless applications in cloud or container environments.
To deploy the workflows in your OpenShift Container Platform, you can use the OpenShift Serverless Logic Operator. The following sections offer an overview of the various OpenShift Serverless Logic concepts.
6.1. Events
An event state includes one or more event definitions that specify the CloudEvent types the state listens to. You can use an event state to start a new workflow instance when it receives a designated CloudEvent, or pause an existing workflow instance until it receives one.
In an event state definition, the onEvents property groups CloudEvent types that trigger the same set of actions. The exclusive property determines how the system matches events. If exclusive is false, the system requires all CloudEvent types in the eventRefs array to match. Otherwise, any referenced CloudEvent type can trigger a match.
The following example shows an event definition that consists of two CloudEvent types, noisy and silent:
"events": [
{
"name": "noisyEvent",
"source": "",
"type": "noisy",
"dataOnly" : "false"
},
{
"name": "silentEvent",
"source": "",
"type": "silent"
}
]
You can define an event state with separate onEvents items for the noisy and silent CloudEvent types, and set the exclusive property to false to run different actions when both events occur.
{
"name": "waitForEvent",
"type": "event",
"onEvents": [
{
"eventRefs": [
"noisyEvent"
],
"actions": [
{
"functionRef": "letsGetLoud"
}
]
},
{
"eventRefs": [
"silentEvent"
],
"actions": [
{
"functionRef": "beQuiet"
}
]
}
],
"exclusive": false
}
6.2. Callbacks
The Callback state performs an action and waits for an event that the action produces before it resumes the workflow. It invokes an asynchronous external service, making it suitable for fire-and-wait-for-result operations.
From a workflow perspective, an asynchronous service returns control to the caller immediately without waiting for the action to complete. After the action completes, the system publishes a CloudEvent to resume the workflow. The following example defines a Callback state in JSON format:
{
"name": "CheckCredit",
"type": "callback",
"action": {
"functionRef": {
"refName": "callCreditCheckMicroservice",
"arguments": {
"customer": "${ .customer }"
}
}
},
"eventRef": "CreditCheckCompletedEvent",
"timeouts": {
"stateExecTimeout": "PT15M"
},
"transition": "EvaluateDecision"
}
The following example defines the same Callback state in YAML format:
name: CheckCredit
type: callback
action:
functionRef:
refName: callCreditCheckMicroservice
arguments:
customer: "${ .customer }"
eventRef: CreditCheckCompletedEvent
timeouts:
stateExecTimeout: PT15M
transition: EvaluateDecision
The action property defines a function call that triggers an external activity or service. After the action executes, the Callback state waits for a CloudEvent, which indicates that the called service has completed its work.
After the completion callback event is received, the Callback state completes its execution and transitions to the next defined workflow state or completes workflow execution if it is an end state.
6.3. JQ expressions
Each workflow instance uses a data model. The data model consists of a JSON object, regardless of whether the workflow file uses YAML or JSON. The initial content of the JSON object depends on how you start the workflow. If you start the workflow by using a CloudEvent, the workflow reads content from the data property. If you start the workflow through an HTTP POST request, the workflow reads content from the request body.
JSON Query (JQ) expressions interact with the data model. The system supports JsonPath and JQ expression languages, and JQ serves as the default. You can change the expression language to JsonPath by using the expressionLang property.
{
"name": "max",
"type": "expression",
"operation": "{max: .numbers | max_by(.x), min: .numbers | min_by(.y)}"
}
6.4. Error handling
With OpenShift Serverless Logic, you define explicit error handling in your workflow model instead of relying on generic mechanisms. Explicit error handling helps you manage errors that occur during interactions between the workflow and external systems. When an error occurs, it changes the normal workflow sequence. The workflow transitions to an alternative state that can handle the error instead of moving to the predefined state.
Each workflow state defines its own error handling for issues that occur during its execution. Error handling in one state does not handle errors that occur in another state during workflow execution.
If the workflow encounters an unknown error that the definition does not handle explicitly, the runtime reports the error and stops workflow execution.
6.4.1. Error definition
An error definition in a workflow includes the name and code parameters. The name provides a short, natural language description of the error, such as wrong parameter. The code helps the implementation identify the error.
The code parameter is mandatory. The engine uses different strategies to map the value to a runtime exception, including fully qualified class name (FQCN), error message, and status code.
During workflow execution, you must handle known errors in the top-level errors property. You can define this property as a string to reference a reusable JSON or YAML file, or as an array to define errors inline in the workflow.
The following example shows how to reference a reusable JSON error definition file:
{
"errors": "file://documents/reusable/errors.json"
}
The following example shows how to reference a reusable YAML error definition file:
errors: file://documents/reusable/errors.yaml
The following example defines workflow errors inline in a JSON file:
{
"errors": [
{
"name": "Service not found error",
"code": "404",
"description": "Server has not found anything matching the provided service endpoint information"
}
]
}
The following example defines workflow errors inline in a YAML file:
errors:
- name: Service not found error
code: '404'
description: Server has not found anything matching the provided service endpoint information
6.5. Schema definitions
OpenShift Serverless Logic supports two types of schema definitions: input schema definition and output schema definition.
6.5.1. Input schema definition
The dataInputSchema parameter validates workflow data input against a defined JSON schema. You should provide a dataInputSchema because the system validates the input before it executes any workflow states.
You can define a dataInputSchema as follows:
"dataInputSchema": {
"schema": "URL_to_json_schema",
"failOnValidationErrors": false
}
The schema property uses a URI to specify the path to the JSON schema that validates the workflow data input. You can use a classpath URI, a file path, or an HTTP URL. If you specify a classpath URI, place the JSON schema file in the project resources or another directory in the classpath.
The failOnValidationErrors parameter is optional and controls how the system handles invalid input data. If you do not specify this parameter or set it to true, the system throws an exception and stops execution. If you set it to false, the system continues execution and logs validation errors at the warning (WARN) level.
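For example, the JSON schema that the schema property points to might look similar to the following hypothetical definition, which requires a name string in the workflow input:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "name": {
      "type": "string",
      "description": "Name to greet"
    }
  },
  "required": ["name"]
}
```

With failOnValidationErrors set to true, starting the workflow without a name property would stop execution; with false, the workflow would run and log a validation warning.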
6.5.2. Output schema definition
Output schema definition is applied after workflow execution to verify that the output model has the expected format. It is also useful for Swagger generation purposes.
Similar to the input schema definition, you must specify the URL to the JSON schema by using outputSchema, as follows:
Example of outputSchema definition
"extensions" : [ {
"extensionid": "workflow-output-schema",
"outputSchema": {
"schema" : "URL_to_json_schema",
"failOnValidationErrors": false
}
} ]
The same rules described for dataInputSchema apply to schema and failOnValidationErrors. The only difference is that output schema validation is applied after workflow execution.
6.6. Custom functions
OpenShift Serverless Logic supports the custom function type, which enables the implementation to extend the function definition capability. In combination with the operation string, custom functions give you access to a set of predefined function types.
Custom function types might not be portable across other runtime implementations.
6.6.1. Sysout custom function
You can use the sysout function for logging, as shown in the following example:
{
"functions": [
{
"name": "logInfo",
"type": "custom",
"operation": "sysout:INFO"
}
]
}
The string after the : is optional and is used to indicate the log level. The possible values are TRACE, DEBUG, INFO, WARN, and ERROR. If the value is not present, INFO is the default.
In the state definition, you can call the same sysout function as shown in the following example:
{
"states": [
{
"name": "myState",
"type": "operation",
"actions": [
{
"name": "printAction",
"functionRef": {
"refName": "logInfo",
"arguments": {
"message": "\"Workflow model is \\(.)\""
}
}
}
]
}
]
}
In the earlier example, the message argument can be a jq expression or a jq string using interpolation.
6.6.2. Java custom function
OpenShift Serverless Logic supports java functions defined within the Apache Maven project that contains your workflow service.
The following example shows the declaration of a java function:
Example of a java function declaration
{
"functions": [
{
"name": "myFunction",
"type": "custom",
"operation": "service:java:com.acme.MyInterfaceOrClass::myMethod"
}
]
}
- functions.name
- myFunction is the function name.
- functions.type
- custom is the function type.
- functions.operation
- service:java:com.acme.MyInterfaceOrClass::myMethod is the custom operation definition. In the custom operation definition, service is the reserved operation keyword, followed by the java keyword. com.acme.MyInterfaceOrClass is the fully qualified class name (FQCN) of the interface or implementation class, followed by the method name myMethod.
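A minimal sketch of the referenced class might look as follows. The class and method names mirror the operation definition above, but the exact method signature that the runtime expects depends on your OpenShift Serverless Logic version, so treat this as an illustration only:

```java
// Hypothetical sketch of com.acme.MyInterfaceOrClass (package declaration omitted here).
// The workflow runtime invokes myMethod through the
// service:java:com.acme.MyInterfaceOrClass::myMethod operation.
public class MyInterfaceOrClass {
    public String myMethod(String name) {
        // Business logic invoked from the workflow
        return "Hello, " + name;
    }
}
```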
6.6.3. Knative custom function
OpenShift Serverless Logic implements a custom function through the knative-serving add-on to call Knative services. You define a static URI for a Knative service and use it to perform HTTP requests. The system queries the Knative service in the current cluster and translates it into a valid URL.
The following example uses a deployed Knative service:
$ kn service list
NAME                              URL                                                                       LATEST                                  AGE     CONDITIONS   READY   REASON
custom-function-knative-service   http://custom-function-knative-service.default.10.109.169.193.sslip.io   custom-function-knative-service-00001   3h16m   3 OK / 3     True
You can declare an OpenShift Serverless Logic custom function by using the Knative service name, as shown in the following example:
"functions": [
{
"name": "greet",
"type": "custom",
"operation": "knative:services.v1.serving.knative.dev/custom-function-knative-service?path=/plainJsonFunction",
}
]
- function.name
- greet is the function name.
- function.type
- custom is the function type.
- function.operation
- In operation, you set the coordinates of the Knative service.
This function sends a POST request. If you do not specify a path, OpenShift Serverless Logic uses the root path (/). You can also send GET requests by setting method=GET in the operation. In this case, the arguments are forwarded over a query string.
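For example, a GET-based function definition might look similar to the following sketch, which reuses the same assumed service name and path as the earlier example:

```json
{
  "name": "greetByQuery",
  "type": "custom",
  "operation": "knative:services.v1.serving.knative.dev/custom-function-knative-service?path=/plainJsonFunction&method=GET"
}
```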
6.6.4. REST custom function
OpenShift Serverless Logic offers the REST custom type as a shortcut. When you use a custom REST function, you specify the HTTP URI to call and the HTTP method (GET, POST, PATCH, or PUT) in the function definition by using the operation string. When you call the function, you pass request arguments as you do with an OpenAPI function.
The following example shows the declaration of a rest function:
{
"functions": [
{
"name": "multiplyAllByAndSum",
"type": "custom",
"operation": "rest:post:/numbers/{multiplier}/multiplyByAndSum"
}
]
}
- function.name
- multiplyAllByAndSum is the function name.
- function.type
- custom is the function type.
- function.operation
- rest:post:/numbers/{multiplier}/multiplyByAndSum is the custom operation definition. In the custom operation definition, rest is the reserved operation keyword that indicates this is a REST call, post is the HTTP method, and /numbers/{multiplier}/multiplyByAndSum is the relative endpoint.
When using relative endpoints, you must specify the host as a property. The format of the host property is kogito.sw.functions.<function_name>.host. In this example, kogito.sw.functions.multiplyAllByAndSum.host is the host property key. You can override the default port (80) if needed by specifying the kogito.sw.functions.multiplyAllByAndSum.port property.
This endpoint expects a JSON object with a numbers field containing an array of integers as the request body. It multiplies each item in the array by multiplier and returns the sum of the multiplied items.
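The computation that the endpoint performs can be sketched as follows. This is an illustration of the described behavior, not the actual service implementation:

```java
import java.util.List;

public class MultiplyByAndSum {
    // Multiplies each number by the multiplier and returns the sum,
    // mirroring what /numbers/{multiplier}/multiplyByAndSum is described to do.
    public static int multiplyByAndSum(int multiplier, List<Integer> numbers) {
        return numbers.stream().mapToInt(n -> n * multiplier).sum();
    }
}
```

For example, multiplying the numbers [1, 2, 3] by a multiplier of 3 yields 3 + 6 + 9 = 18.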
6.7. Timeouts
OpenShift Serverless Logic defines several timeout configurations that you can use to set maximum times for workflow execution in different scenarios. You can configure how long a workflow can wait for an event to arrive when it is in a given state, or the maximum execution time for the workflow.
Regardless of where you define it, configure a timeout as a duration that starts when the referenced workflow element becomes active. Timeouts use the ISO 8601 date and time standard to specify durations and follow the format PnDTnHnMn.nS, where days equal exactly 24 hours. For example, PT15M sets a duration of 15 minutes, and P2DT3H4M sets a duration of 2 days, 3 hours, and 4 minutes.
Month-based timeouts, such as P2M (a period of two months), are not valid because the month duration might vary. In that case, use P60D instead.
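You can check such duration strings with the standard java.time.Duration parser, which accepts the same PnDTnHnMn.nS format and likewise rejects month-based periods:

```java
import java.time.Duration;

public class DurationCheck {
    public static void main(String[] args) {
        // PT15M is 15 minutes
        System.out.println(Duration.parse("PT15M").toMinutes());
        // P2DT3H4M is 2 days, 3 hours, and 4 minutes:
        // 2*24*60 + 3*60 + 4 = 3064 minutes
        System.out.println(Duration.parse("P2DT3H4M").toMinutes());
    }
}
```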
6.7.1. Workflow timeout
To configure the maximum duration for a workflow before cancellation, define workflow timeouts. When the system cancels a workflow after the timeout expires, it marks the workflow as finished, and the workflow instance is no longer accessible through a GET request. As a result, the workflow behaves as if you set the interrupt property to true by default.
You can define workflow timeouts by using the top-level timeouts property. You can specify this property in two formats: string or object.
- You can use the string type to give a URI that points to a JSON or YAML file containing the workflow timeout definitions.
- You can use the object type to define the timeout settings inline within the workflow.
For example, to cancel the workflow after an hour of execution, use the following configuration:
{
"id": "workflow_timeouts",
"version": "1.0",
"name": "Workflow Timeouts",
"description": "Simple workflow to show the workflowExecTimeout working",
"start": "PrintStartMessage",
"timeouts": {
"workflowExecTimeout": "PT1H"
}
...
}
6.7.2. Event timeout
When you define a state in a workflow, use the timeouts property to set the maximum time allowed to complete that state. If the state exceeds this time, the system marks it as timed out and continues execution from that state. The execution flow depends on the state type. For example, the workflow might move to the next state.
Event-based states can use the sub-property eventTimeout to configure the maximum time to wait for an event to arrive. This is the only timeout property supported in the current implementation.
Event timeouts are supported in the Callback, Switch, and Event states.
6.7.3. Callback state timeout
You can use the Callback state when you need to run an action that calls an external service and wait for an asynchronous response in the form of an event.
After the workflow consumes the response event, it continues execution and typically moves to the next state defined in the transition property.
Because the Callback state halts execution until the event arrives, you can configure an eventTimeout. If the event does not arrive within the configured duration, the workflow continues execution and moves to the next state defined in the transition property.
The following example defines a Callback state with a timeout in JSON format:
{
"name": "CallbackState",
"type": "callback",
"action": {
"name": "callbackAction",
"functionRef": {
"refName": "callbackFunction",
"arguments": {
"input": "${\"callback-state-timeouts: \" + $WORKFLOW.instanceId + \" has executed the callbackFunction.\"}"
}
}
},
"eventRef": "callbackEvent",
"transition": "CheckEventArrival",
"onErrors": [
{
"errorRef": "callbackError",
"transition": "FinalizeWithError"
}
],
"timeouts": {
"eventTimeout": "PT30S"
}
}
6.7.4. Switch state timeout
You can use the Switch state when you need to take an action depending on certain conditions. These conditions can be based on the workflow data (dataConditions) or on events (eventConditions).
When you use the eventConditions, the workflow execution waits to make a decision until any of the configured events arrives and matches a condition. In this situation, you can configure an event timeout, which controls the maximum time to wait for an event to match the conditions.
If this time expires, the workflow moves to the state defined in the defaultCondition property.
The following example defines a Switch state with a timeout:
{
"name": "ChooseOnEvent",
"type": "switch",
"eventConditions": [
{
"eventRef": "visaApprovedEvent",
"transition": "ApprovedVisa"
},
{
"eventRef": "visaDeniedEvent",
"transition": "DeniedVisa"
}
],
"defaultCondition": {
"transition": "HandleNoVisaDecision"
},
"timeouts": {
"eventTimeout": "PT5S"
}
}
6.7.5. Event state timeout
You can use the Event state to wait for one or more events, run a set of actions, and then continue execution. If the Event state serves as the starting state, the workflow creates a new instance.
You can use the timeouts property in this state to set the maximum time the workflow waits for the configured events to arrive.
If the workflow exceeds this time and does not receive the events, it moves to the next state defined in the transition property. If the state defines an end instead of a transition, the workflow ends the instance without running any actions.
The following example defines an Event state with a timeout:
{
"name": "WaitForEvent",
"type": "event",
"onEvents": [
{
"eventRefs": [
"event1"
],
"eventDataFilter": {
"data": "${ \"The event1 was received.\" }",
"toStateData": "${ .exitMessage }"
},
"actions": [
{
"name": "printAfterEvent1",
"functionRef": {
"refName": "systemOut",
"arguments": {
"message": "${\"event-state-timeouts: \" + $WORKFLOW.instanceId + \" executing actions for event1.\"}"
}
}
}
]
},
{
"eventRefs": [
"event2"
],
"eventDataFilter": {
"data": "${ \"The event2 was received.\" }",
"toStateData": "${ .exitMessage }"
},
"actions": [
{
"name": "printAfterEvent2",
"functionRef": {
"refName": "systemOut",
"arguments": {
"message": "${\"event-state-timeouts: \" + $WORKFLOW.instanceId + \" executing actions for event2.\"}"
}
}
}
]
}
],
"timeouts": {
"eventTimeout": "PT30S"
},
"transition": "PrintExitMessage"
}
6.8. Parallelism
OpenShift Serverless Logic serializes the execution of parallel tasks. The term parallel does not imply simultaneous execution; it means that branches have no logical dependency on each other. An inactive branch can start or resume a task without waiting for an active branch to complete if the active branch suspends its execution, for example, while waiting for an event.
A parallel state splits the current workflow execution path into many branches, each with its own path. The workflow executes these paths independently and then joins them back into a single path based on the completionType parameter.
The following example shows a parallel workflow in JSON format:
{
"name":"ParallelExec",
"type":"parallel",
"completionType": "allOf",
"branches": [
{
"name": "Branch1",
"actions": [
{
"functionRef": {
"refName": "functionNameOne",
"arguments": {
"order": "${ .someParam }"
}
}
}
]
},
{
"name": "Branch2",
"actions": [
{
"functionRef": {
"refName": "functionNameTwo",
"arguments": {
"order": "${ .someParam }"
}
}
}
]
}
],
"end": true
}
The following example shows a parallel workflow in YAML format:
name: ParallelExec
type: parallel
completionType: allOf
branches:
- name: Branch1
actions:
- functionRef:
refName: functionNameOne
arguments:
order: "${ .someParam }"
- name: Branch2
actions:
- functionRef:
refName: functionNameTwo
arguments:
order: "${ .someParam }"
end: true
In the earlier examples, the allOf completion type specifies that all branches must complete execution before the state can transition or end. This is the default value if the completionType parameter is not set.
6.9. Additional resources
Chapter 7. OpenShift Serverless support
If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal at http://access.redhat.com. You can use the Red Hat Customer Portal to search or browse through the Red Hat Knowledgebase of technical support articles about Red Hat products. You can also submit a support case to Red Hat Global Support Services (GSS), or access other product documentation.
7.1. About the Red Hat Knowledgebase
The Red Hat Knowledgebase provides rich content aimed at helping you make the most of Red Hat’s products and technologies. The Red Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps.
7.2. Searching the Red Hat Knowledgebase
In case of an OpenShift Container Platform issue, you can perform an initial search to find if a solution already exists within the Red Hat Knowledgebase.
Prerequisites
- You have a Red Hat Customer Portal account.
Procedure
- Log in to the Red Hat Customer Portal.
In the main Red Hat Customer Portal search field, input keywords and strings relating to the problem, including:
- OpenShift Container Platform components (such as etcd)
- Related procedure (such as installation)
- Warnings, error messages, and other outputs related to explicit failures
- Click Search.
- Select the OpenShift Container Platform product filter.
- Select the Knowledgebase content type filter.
7.3. Submitting a support case
You can submit a support case to Red Hat when you need help with OpenShift Container Platform or OpenShift Serverless issues. The support case process guides you through providing problem details, diagnostic data, and cluster information to help Red Hat support engineers resolve your issue efficiently.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have a Red Hat Customer Portal account.
- You have a Red Hat standard or premium Subscription.
Procedure
- Log in to the Red Hat Customer Portal and select SUPPORT CASES → Open a case.
- Select the appropriate category for your issue (such as Defect / Bug), product (OpenShift Container Platform), and product version (if this is not already autofilled).
- Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, click Continue.
- Enter a concise but descriptive problem summary and further details about the symptoms being experienced, and your expectations.
- Review the updated list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. The list is refined as you provide more information during the case creation process. If the suggested articles do not address the issue, click Continue.
- Ensure that the account information presented is as expected, and if not, change accordingly.
Check that the autofilled OpenShift Container Platform Cluster ID is correct. If it is not, manually obtain your cluster ID.
To manually obtain your cluster ID using the OpenShift Container Platform web console:
- Navigate to Home → Dashboards → Overview.
- Find the value in the Cluster ID field of the Details section.
It is also possible to open a new support case through the OpenShift Container Platform web console and have your cluster ID autofilled.
- From the toolbar, navigate to (?) Help → Open Support Case.
- The Cluster ID value is autofilled.
To obtain your cluster ID using the OpenShift CLI (oc), run the following command:
$ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
Complete the following questions where prompted and then click Continue:
- Where are you experiencing the behavior? What environment?
- When does the behavior occur? Frequency? Repeatedly? At certain times?
- What information can you give around time-frames and the business impact?
- Upload relevant diagnostic data files and click Continue.
It is recommended to include data gathered using the oc adm must-gather command as a starting point, plus any issue-specific data that is not collected by that command.
- Input relevant case management details and click Continue.
- Preview the case details and click Submit.
7.4. Collecting diagnostic information for support
When you open a support case, share debugging information about your cluster with Red Hat Support. You can use the must-gather tool to collect diagnostic information about your OpenShift Container Platform cluster, including data related to OpenShift Serverless. For faster support, provide diagnostic information for both OpenShift Container Platform and OpenShift Serverless.
7.5. About collecting OpenShift Serverless data
You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with OpenShift Serverless. To collect OpenShift Serverless data with must-gather, you must specify the OpenShift Serverless image and the image tag for your installed version of OpenShift Serverless.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Collect data by using the oc adm must-gather command:
$ oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/serverless-must-gather-rhel8:<image_version_tag>
Example command
$ oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/serverless-must-gather-rhel8:1.35.0