Administration guide

Red Hat OpenShift Dev Spaces 3.27

Administering Red Hat OpenShift Dev Spaces 3.27

Abstract

Information for administrators operating Red Hat OpenShift Dev Spaces.

Preface

Install, configure, and manage Red Hat OpenShift Dev Spaces on OpenShift clusters.

Chapter 1. Security best practices

Apply these security best practices for Red Hat OpenShift Dev Spaces to protect user credentials, isolate workspaces, and reduce the cluster attack surface.

Red Hat OpenShift Dev Spaces runs on top of OpenShift, which provides the platform and the foundation for the products that function on top of it. The OpenShift documentation is the entry point for security hardening.

1.1. Project isolation in OpenShift

In OpenShift, project isolation is similar to namespace isolation in Kubernetes but is achieved through the concept of projects. A project in OpenShift is a top-level organizational unit that provides isolation and collaboration between different applications, teams, or workloads within a cluster.

By default, OpenShift Dev Spaces provisions a unique <username>-devspaces project for each user. Alternatively, the cluster administrator can disable project self-provisioning on the OpenShift level, and turn off automatic namespace provisioning in the CheCluster custom resource:

devEnvironments:
  defaultNamespace:
    autoProvision: false

With this setup, you achieve curated access to OpenShift Dev Spaces. Cluster administrators control provisioning for each user and can explicitly configure various settings including resource limits and quotas.
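As a sketch, a cluster administrator can disable project self-provisioning at the OpenShift level by removing the authenticated-users subjects from the self-provisioners cluster role binding (the binding name is the OpenShift default; verify it on your cluster before patching):

$ oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}'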

1.2. Role-based access control (RBAC)

By default, the OpenShift Dev Spaces operator creates the following ClusterRoles:

  • <namespace>-cheworkspaces-clusterrole
  • <namespace>-cheworkspaces-devworkspace-clusterrole

The <namespace> prefix corresponds to the project name where the Red Hat OpenShift Dev Spaces CheCluster CR is located. The first time a user accesses Red Hat OpenShift Dev Spaces, the corresponding RoleBinding is created in the <username>-devspaces project.

The following table lists the resources and actions that you can grant users permission to use in their namespace.

Table 1.1. Overview of resources and actions available in a user’s namespace

Resources                 Actions
pods                      "get", "list", "watch", "create", "delete", "update", "patch"
pods/exec                 "get", "create"
pods/log                  "get", "list", "watch"
pods/portforward          "get", "list", "create"
configmaps                "get", "list", "create", "update", "patch", "delete"
events                    "list", "watch"
secrets                   "get", "list", "create", "update", "patch", "delete"
services                  "get", "list", "create", "delete", "update", "patch"
routes                    "get", "list", "create", "delete"
persistentvolumeclaims    "get", "list", "watch", "create", "delete", "update", "patch"
apps/deployments          "get", "list", "watch", "create", "patch", "delete"
apps/replicasets          "get", "list", "patch", "delete"
namespaces                "get", "list"
projects                  "get"
devworkspace              "get", "create", "delete", "list", "update", "patch", "watch"
devworkspacetemplates     "get", "create", "delete", "list", "update", "patch", "watch"

Important

Each user is granted permissions only to their namespace and cannot access other users' resources. Cluster administrators can add extra permissions to users. They should not remove permissions granted by default.

For more details about configuring cluster roles for Red Hat OpenShift Dev Spaces users and role-based access control, see the Additional resources section.
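For example, a cluster administrator could grant an extra built-in role to a user within that user's namespace; the user name and namespace below are illustrative placeholders following the default <username>-devspaces naming:

$ oc adm policy add-role-to-user edit user-a -n user-a-devspaces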

1.3. Dev environment isolation

Isolation of the development environments is implemented using OpenShift projects. Every developer has a project in which the following objects are created and managed:

  • Cloud Development Environment (CDE) Pods, including the Integrated Development Environment (IDE) server.
  • Secrets containing developer credentials, such as a Git token, SSH keys, and a Kubernetes token.
  • ConfigMaps with developer-specific configuration, such as the Git name and email.
  • Volumes that persist data such as the source code, even when the CDE Pod is stopped.
Important

Access to the resources in a namespace must be limited to the developer owning it. Granting read access to another developer is equivalent to sharing the developer credentials and should be avoided.

1.4. Enhanced authorization

The current trend is to split an infrastructure into several "fit for purpose" clusters instead of running one gigantic monolithic OpenShift cluster. A "fit for purpose" cluster is specifically designed and configured to meet the requirements of a particular use case or workload. It is tailored to optimize performance and resource utilization based on the characteristics of the workloads it manages.

For Red Hat OpenShift Dev Spaces, this type of cluster is recommended. However, administrators might still want to provide granular access and restrict the availability of certain functionalities to particular users.

For this purpose, optional properties that you can use to configure granular access for different groups and users are available in the CheCluster Custom Resource:

  • allowUsers
  • allowGroups
  • denyUsers
  • denyGroups

The following example shows an access configuration:

networking:
  auth:
    advancedAuthorization:
      allowUsers:
        - user-a
        - user-b
      denyUsers:
        - user-c
      allowGroups:
        - openshift-group-a
        - openshift-group-b
      denyGroups:
        - openshift-group-c

Users in the denyUsers and denyGroups categories cannot use Red Hat OpenShift Dev Spaces and see a warning when they try to access the User Dashboard.
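The same configuration can also be applied to a running instance with a merge patch; the CheCluster name (devspaces) and namespace (openshift-devspaces) shown here are typical defaults and may differ on your cluster:

$ oc patch checluster/devspaces -n openshift-devspaces --type merge \
  -p '{"spec": {"networking": {"auth": {"advancedAuthorization": {"allowUsers": ["user-a"]}}}}}'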

1.5. Authentication

Only authenticated OpenShift users can access Red Hat OpenShift Dev Spaces. The Gateway Pod uses a role-based access control (RBAC) subsystem to determine whether a developer is authorized to access a Cloud Development Environment (CDE) or not.

The CDE Gateway container checks the developer’s Kubernetes roles. If their roles allow access to the CDE Pod, the connection to the development environment is allowed. By default, only the owner of the namespace has access to the CDE Pod.

1.6. Security context and security context constraint

Red Hat OpenShift Dev Spaces adds SETGID and SETUID capabilities to the specification of the CDE Pod container security context:

"spec": {
  "containers": [
    {
      "securityContext": {
        "allowPrivilegeEscalation": true,
        "capabilities": {
          "add": ["SETGID", "SETUID"],
          "drop": ["ALL", "KILL", "MKNOD"]
        },
        "readOnlyRootFilesystem": false,
        "runAsNonRoot": true,
        "runAsUser": 1001110000
      }
    }
  ]
}

This provides the ability for users to build container images from within a CDE.

By default, Red Hat OpenShift Dev Spaces assigns users a specific SecurityContextConstraint (SCC) that allows them to start a Pod with such capabilities. This SCC grants more capabilities to the users than the default restricted SCC, but fewer capabilities than the anyuid SCC. This default SCC is pre-created in the OpenShift Dev Spaces namespace and named container-build.

Setting the following property in the CheCluster Custom Resource prevents assigning extra capabilities and SCC to users:

spec:
  devEnvironments:
    disableContainerBuildCapabilities: true

1.7. Resource Quotas and Limit Ranges

Resource Quotas and Limit Ranges are Kubernetes features you can use to help prevent bad actors and resource abuse within a cluster. Specifically, they allow you to set resource consumption constraints for pods and containers. By combining Resource Quotas and Limit Ranges, you can enforce project-specific policies to prevent bad actors from consuming excessive resources.

These mechanisms contribute to better resource management, stability, and fairness within an OpenShift cluster. More details about resource quotas and limit ranges are available in the OpenShift documentation.
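A minimal sketch of project-scoped constraints for a user namespace follows; the names, namespace, and values are illustrative, not defaults shipped with OpenShift Dev Spaces:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: workspace-quota
  namespace: user-a-devspaces
spec:
  hard:
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "10"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: workspace-limits
  namespace: user-a-devspaces
spec:
  limits:
    - type: Container
      default:            # applied when a container sets no limit
        cpu: 500m
        memory: 1Gi
      defaultRequest:     # applied when a container sets no request
        cpu: 100m
        memory: 256Mi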

1.8. Network policies

Network policies provide an additional layer of security by controlling network traffic between pods in a Kubernetes cluster. By default, every pod can communicate with every other pod and service on the cluster.

Implementing network policies allows you to:

  • Control ingress and egress traffic to and from workspace pods
  • Limit the attack surface by denying unauthorized network access

When configuring network policies for Red Hat OpenShift Dev Spaces, ensure that pods in the OpenShift Dev Spaces namespace can still communicate with pods in user namespaces. This communication is required for proper functionality.

For detailed instructions on implementing network policies with Red Hat OpenShift Dev Spaces, see the procedure for configuring network policies.
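As an illustrative sketch, a policy in a user namespace could restrict ingress to traffic from pods in the same namespace and from the OpenShift Dev Spaces namespace; the namespace names and label are assumptions to adapt to your cluster:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-devspaces-and-same-namespace
  namespace: user-a-devspaces
spec:
  podSelector: {}          # applies to all pods in the user namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # allow same-namespace traffic
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-devspaces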

1.9. Disconnected environment

An air-gapped OpenShift disconnected cluster refers to an OpenShift cluster isolated from the internet or any external network. This isolation is often done for security reasons to protect sensitive or critical systems from potential cyber threats. In an air-gapped environment, the cluster cannot access external repositories or registries to download container images, updates, or dependencies.

Red Hat OpenShift Dev Spaces is supported and can be installed in a restricted environment.

1.10. Managing extensions

By default, Red Hat OpenShift Dev Spaces includes the embedded Open VSX registry which contains a limited set of extensions for the Microsoft Visual Studio Code - Open Source editor. Alternatively, cluster administrators can specify a different plugin registry in the Custom Resource, for example the open-vsx.org registry that contains thousands of extensions. They can also build a custom Open VSX registry.

Important

Installing extra extensions increases potential risks. To minimize these risks, ensure that you only install extensions from reliable sources and regularly update them.
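For example, pointing the Custom Resource at the public open-vsx.org registry could look like the following sketch, assuming the openVSXURL field of the plug-in registry configuration:

spec:
  components:
    pluginRegistry:
      openVSXURL: "https://open-vsx.org"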

1.11. Secrets

Keep sensitive data stored as Kubernetes secrets in the users' namespaces confidential, for example Personal Access Tokens (PAT) and SSH keys.
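For illustration, the Dev Workspace Operator can automatically mount a labeled secret into workspace pods; the labels and annotation below follow the controller.devfile.io conventions, while the secret name, namespace, and mount path are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: git-pat
  namespace: user-a-devspaces
  labels:
    controller.devfile.io/mount-to-devworkspace: "true"
    controller.devfile.io/watch-secret: "true"
  annotations:
    controller.devfile.io/mount-path: /home/user/.secrets
stringData:
  token: <personal-access-token>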

1.12. Git repositories

It is crucial to operate within Git repositories that you are familiar with and that you trust. Before incorporating new dependencies into the repository, verify that they are well-maintained and regularly release updates to address any identified security vulnerabilities in their code.

Chapter 2. Prepare the installation

Ensure your OpenShift cluster meets the requirements for OpenShift Dev Spaces and install the tools you need for installation.

Review the supported platforms, install the dsc management tool, understand the OpenShift Dev Spaces architecture, and estimate resource requirements for your deployment.

2.1. Supported platforms

OpenShift Dev Spaces is supported on specific OpenShift versions and CPU architectures.

OpenShift Dev Spaces runs on OpenShift 4.16–4.22 on the following CPU architectures:

  • AMD64 and Intel 64 (x86_64)
  • IBM Z (s390x)
  • IBM Power (ppc64le)
  • ARMv8 (arm64)

2.2. Install the dsc management tool

Install dsc, the Red Hat OpenShift Dev Spaces command-line management tool, on Linux, macOS, or Windows to start, stop, update, and delete the OpenShift Dev Spaces server.

Procedure

  1. Download the archive from https://developers.redhat.com/products/openshift-dev-spaces/download to a directory such as $HOME.
  2. Run tar xvzf on the archive to extract the dsc directory.
  3. Add the extracted dsc/bin subdirectory to $PATH.

Verification

  • Run dsc to view information about it.

    $ dsc

2.3. OpenShift Dev Spaces architecture overview

Figure 2.1. High-level OpenShift Dev Spaces architecture with the Dev Workspace operator

High-level architecture diagram showing OpenShift Dev Spaces components interacting with the Dev Workspace operator

The OpenShift Dev Spaces architecture consists of server components, user workspaces, and the Dev Workspace Operator, which together provide cloud-based development environments on OpenShift.

OpenShift Dev Spaces runs on three groups of components:

OpenShift Dev Spaces server components
Manage user projects and workspaces. The main component is the User dashboard, from which users control their workspaces.
Dev Workspace operator
Creates and controls the necessary OpenShift objects to run user workspaces, including Pods, Services, and PersistentVolumes.
User workspaces
Container-based development environments, including the Integrated Development Environment (IDE).

The role of these OpenShift features is central:

Dev Workspace Custom Resources
Valid OpenShift objects representing the user workspaces, manipulated by OpenShift Dev Spaces. They are the communication channel between the three groups of components.
OpenShift role-based access control (RBAC)
Controls access to all resources.

2.3.1. Server components

The OpenShift Dev Spaces server components manage multi-tenancy and workspace lifecycle. Understanding these components helps you troubleshoot issues and plan cluster capacity.

Figure 2.2. OpenShift Dev Spaces server components interacting with the Dev Workspace operator

Diagram showing OpenShift Dev Spaces server component deployments interacting with the Dev Workspace operator

2.3.2. OpenShift Dev Spaces operator

The OpenShift Dev Spaces operator ensures full lifecycle management of the OpenShift Dev Spaces server components.

CheCluster custom resource definition (CRD)
Defines the CheCluster OpenShift object.
OpenShift Dev Spaces controller
Creates and controls the necessary OpenShift objects to run an OpenShift Dev Spaces instance, such as pods, services, and persistent volumes.
CheCluster custom resource (CR)
On a cluster with the OpenShift Dev Spaces operator, it is possible to create a CheCluster custom resource (CR). The OpenShift Dev Spaces operator ensures the full lifecycle management of the OpenShift Dev Spaces server components on this OpenShift Dev Spaces instance. These components include the Dev Workspace Operator, gateway, user dashboard, OpenShift Dev Spaces server, and plug-in registry.

2.3.3. Dev Workspace operator

The Dev Workspace Operator (DWO) is a dependency of OpenShift Dev Spaces, and is an integral part of how OpenShift Dev Spaces functions. One of DWO’s main responsibilities is to reconcile Dev Workspace custom resources (CR).

The Dev Workspace CR is an OpenShift resource representation of an OpenShift Dev Spaces workspace. Whenever a user creates a workspace in OpenShift Dev Spaces, the OpenShift Dev Spaces Dashboard creates a Dev Workspace CR in the cluster in the background. For every OpenShift Dev Spaces workspace, there is an underlying Dev Workspace CR on the cluster.

Figure 2.3. Example of a Dev Workspace CR in a cluster

DevWorkspace CR example

When creating a workspace with OpenShift Dev Spaces with a devfile, the Dev Workspace CR contains the devfile details. Additionally, OpenShift Dev Spaces adds the editor definition into the Dev Workspace CR depending on which editor was chosen for the workspace. OpenShift Dev Spaces also adds attributes to the Dev Workspace that further configure the workspace depending on how you configured the CheCluster CR.

A DevWorkspaceTemplate is a custom resource that defines a reusable spec.template for Dev Workspaces.

When a workspace is started, DWO reads the corresponding Dev Workspace CR and creates the necessary resources such as deployments, secrets, configmaps, and routes. As a result, a workspace pod representing the development environment defined in the devfile is created.

2.3.3.1. Custom Resources overview

The following Custom Resource Definitions are provided by the Dev Workspace Operator:

  • Dev Workspace
  • DevWorkspaceTemplate
  • DevWorkspaceOperatorConfig
  • DevWorkspaceRouting

2.3.3.2. Dev Workspace

The Dev Workspace custom resource contains details about an OpenShift Dev Spaces workspace. Notably, it contains devfile details and a reference to the editor definition.

2.3.3.3. DevWorkspaceTemplate

In OpenShift Dev Spaces, the DevWorkspaceTemplate custom resource is typically used to define an editor (such as Visual Studio Code - Open Source) for OpenShift Dev Spaces workspaces. You can use this custom resource to define spec.template content that multiple Dev Workspaces reuse.

2.3.3.4. DevWorkspaceOperatorConfig

The DevWorkspaceOperatorConfig (DWOC) custom resource defines configuration options for the DWO. There are two different types of DWOC:

  • global configuration
  • non-global configuration

The global configuration is a DWOC custom resource named devworkspace-operator-config and is usually located in the DWO installation namespace. By default, the global configuration is not created upon installation. Configuration fields set in the global configuration apply to the DWO and all Dev Workspaces. However, fields set in the global configuration can be overridden by a non-global configuration.

Any DWOC custom resource other than devworkspace-operator-config is considered a non-global configuration. A non-global configuration does not apply to any Dev Workspace unless the Dev Workspace contains a reference to the DWOC. If the global configuration and a non-global configuration set the same fields, the non-global configuration fields take precedence.

Table 2.1. Global DWOC and OpenShift Dev Spaces-owned DWOC comparison

Resource name
  Global DWOC: devworkspace-operator-config
  OpenShift Dev Spaces-owned DWOC: devworkspace-config

Namespace
  Global DWOC: DWO installation namespace
  OpenShift Dev Spaces-owned DWOC: OpenShift Dev Spaces installation namespace

Default creation
  Global DWOC: Not created by default upon DWO installation
  OpenShift Dev Spaces-owned DWOC: Created by default on OpenShift Dev Spaces installation

Scope
  Global DWOC: Applies to the DWO itself and all Dev Workspaces managed by DWO
  OpenShift Dev Spaces-owned DWOC: Applies to Dev Workspaces created by OpenShift Dev Spaces

Precedence
  Global DWOC: Overridden by fields set in the OpenShift Dev Spaces-owned config
  OpenShift Dev Spaces-owned DWOC: Takes precedence over the global config if both define the same field

Primary use case
  Global DWOC: Defines default, broad settings that apply to the DWO in general
  OpenShift Dev Spaces-owned DWOC: Defines specific configuration for Dev Workspaces created by OpenShift Dev Spaces

For example, by default OpenShift Dev Spaces creates and manages a non-global DWOC in the OpenShift Dev Spaces namespace named devworkspace-config. This DWOC contains configuration specific to OpenShift Dev Spaces workspaces, and is maintained by OpenShift Dev Spaces depending on how you configure the CheCluster CR. When OpenShift Dev Spaces creates a workspace, OpenShift Dev Spaces adds a reference to the OpenShift Dev Spaces-owned DWOC with the controller.devfile.io/devworkspace-config attribute.

Figure 2.4. Example of Dev Workspace configuration attribute

DevWorkspace config attribute example
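A Dev Workspace referencing the OpenShift Dev Spaces-owned DWOC carries an attribute along the lines of the following sketch; the workspace name is a placeholder and the namespace shown is a typical default:

apiVersion: workspace.devfile.io/v1alpha2
kind: DevWorkspace
metadata:
  name: my-workspace
spec:
  template:
    attributes:
      controller.devfile.io/devworkspace-config:
        name: devworkspace-config
        namespace: openshift-devspaces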

2.3.3.5. DevWorkspaceRouting

The DevWorkspaceRouting custom resource defines details about the endpoints of a Dev Workspace. Every Dev Workspace has its corresponding DevWorkspaceRouting object that specifies the workspace’s container endpoints. Endpoints defined from the devfile, as well as endpoints defined by the editor definition appear in the DevWorkspaceRouting custom resource.

apiVersion: controller.devfile.io/v1alpha1
kind: DevWorkspaceRouting
metadata:
  annotations:
    controller.devfile.io/devworkspace-started: 'false'
  name: routing-workspaceb14aa33254674065
  labels:
    controller.devfile.io/devworkspace_id: workspaceb14aa33254674065
spec:
  devworkspaceId: workspaceb14aa33254674065
  endpoints:
    universal-developer-image:
      - attributes:
          cookiesAuthEnabled: true
          discoverable: false
          type: main
          urlRewriteSupported: true
        exposure: public
        name: che-code
        protocol: https
        secure: true
        targetPort: 3100
  podSelector:
    controller.devfile.io/devworkspace_id: workspaceb14aa33254674065
  routingClass: che
status:
  exposedEndpoints:
    ...

2.3.3.6. Dev Workspace Operator operands

The Dev Workspace Operator has two operands:

  • controller deployment
  • webhook deployment

$ oc get pods -l 'app.kubernetes.io/part-of=devworkspace-operator' -o custom-columns=NAME:.metadata.name -n openshift-operators
NAME
devworkspace-controller-manager-66c6f674f5-l7rhj
devworkspace-webhook-server-d4958d9cd-gh7vr
devworkspace-webhook-server-d4958d9cd-rfvj6

where:

devworkspace-controller-manager-*
The Dev Workspace controller pod, which is responsible for reconciling custom resources.
devworkspace-webhook-server-*
The Dev Workspace operator webhook server pods.

2.3.3.7. Configuring the devworkspace-controller-manager deployment

You can configure the devworkspace-controller-manager pod in the Dev Workspace Operator Subscription object:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: devworkspace-operator
  namespace: openshift-operators
spec:
  config:
    affinity:
      nodeAffinity: ...
      podAffinity: ...
    resources:
      limits:
        memory: ...
        cpu: ...
      requests:
        memory: ...
        cpu: ...

2.3.3.8. Configuring the devworkspace-webhook-server deployment

You can configure the devworkspace-webhook-server deployment in the global DWOC:

apiVersion: controller.devfile.io/v1alpha1
kind: DevWorkspaceOperatorConfig
metadata:
  name: devworkspace-operator-config
  namespace: <DWO install namespace>
config:
  webhooks:
    nodeSelector: <map[string]string>
    replicas: <int>
    tolerations: <[]corev1.Toleration>

2.3.4. OpenShift Dev Spaces gateway

The OpenShift Dev Spaces gateway routes requests, authenticates users, and applies access control policies for OpenShift Dev Spaces resources.

The OpenShift Dev Spaces gateway has the following roles:

  • Routing requests. It uses Traefik.
  • Authenticating users with OpenID Connect (OIDC). It uses OAuth2 Proxy.
  • Applying OpenShift Role Based Access Control (RBAC) policies to control access to any OpenShift Dev Spaces resource. It uses kube-rbac-proxy.

The OpenShift Dev Spaces operator manages the gateway as the che-gateway Deployment, which controls access to the user dashboard, the OpenShift Dev Spaces server, the plug-in registry, and user workspaces.

Figure 2.5. OpenShift Dev Spaces gateway interactions with other components

Gateway interactions

2.3.5. User dashboard

The user dashboard is the landing page of Red Hat OpenShift Dev Spaces, providing a central interface for users to create, access, and manage their workspaces.

It needs access to the OpenShift Dev Spaces server, the plug-in registry, and the OpenShift Application Programming Interface (API).

Figure 2.6. User dashboard interactions with other components

User dashboard interactions with other components

When the user requests the user dashboard to start a workspace, the user dashboard executes this sequence of actions:

  1. When the user creates a workspace from a remote devfile, sends the repository URL to the OpenShift Dev Spaces server and expects a devfile in return.
  2. Reads the devfile describing the workspace.
  3. Collects the additional metadata from the plug-in registry.
  4. Converts the information into a Dev Workspace Custom Resource.
  5. Creates the Dev Workspace Custom Resource in the user project using the OpenShift API.
  6. Watches the Dev Workspace Custom Resource status.
  7. Redirects the user to the running workspace IDE.

2.3.6. OpenShift Dev Spaces server

The OpenShift Dev Spaces server is a Java web service that manages user namespaces, provisions secrets and config maps, and integrates with Git service providers.

The OpenShift Dev Spaces server main functions are:

  • Creating user namespaces.
  • Provisioning user namespaces with required secrets and config maps.
  • Integrating with Git service providers to fetch and validate devfiles and handle authentication.

The OpenShift Dev Spaces server is a Java web service exposing a Hypertext Transfer Protocol (HTTP) REST API and needs access to:

  • Git service providers
  • OpenShift API

Figure 2.7. OpenShift Dev Spaces server interactions with other components

OpenShift Dev Spaces server interactions

2.3.7. Plug-in registry

Each OpenShift Dev Spaces workspace starts with a specific editor and set of associated extensions. The OpenShift Dev Spaces plugin registry provides the list of available editors and editor extensions. A Devfile v2 describes each editor or extension.

The user dashboard reads the content of the registry.

Figure 2.8. Plugin registries interactions with other components

Plugin registries interactions with other components

2.4. User workspaces

Figure 2.9. User workspaces interactions with other components

User workspaces interactions with other components

User workspaces provide browser-based IDEs running in OpenShift containers, giving developers on-demand access to editors, language servers, debugging tools, and application runtimes without local setup.

A User workspace is a web application. It consists of microservices running in containers that provide all the services of a modern IDE in your browser:

  • Editor
  • Language auto-completion
  • Language server
  • Debugging tools
  • Plug-ins
  • Application runtimes

A workspace is one OpenShift Deployment containing the workspace containers and enabled plugins, plus related OpenShift components:

  • Containers
  • ConfigMaps
  • Services
  • Endpoints
  • Ingresses or Routes
  • Secrets
  • Persistent Volumes (PV)

An OpenShift Dev Spaces workspace contains the source code of the projects, persisted in an OpenShift Persistent Volume (PV). Microservices have read/write access to this shared directory.

Use the devfile v2 format to specify the tools and runtime applications of an OpenShift Dev Spaces workspace.

The following diagram shows one running OpenShift Dev Spaces workspace and its components.

Figure 2.10. OpenShift Dev Spaces workspace components

Workspace components

In the diagram, there is one running workspace.

2.5. Calculate OpenShift Dev Spaces resource requirements

Calculate the CPU and memory resource consumption for the OpenShift Dev Spaces Operator, Dev Workspace Controller, and user workspaces to right-size your cluster for the expected number of concurrent users.

Note

The following link to an example devfile is a pointer to material from the upstream community. This material represents the latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat’s QE department, and they have not yet been proven by a wide user group. Use this information cautiously. It is best suited for educational and developmental purposes rather than production purposes.

Prerequisites

  • You have a planned or existing OpenShift Dev Spaces deployment on OpenShift Container Platform 4.16 or later.
  • You have the devfiles that define the development environments for your users.
  • You have an estimate of the number of concurrent workspaces that your users will run.

Procedure

  1. Identify the workspace resource requirements from the devfile components section. The following example uses the Quarkus API example devfile.

    • The tools component of the devfile defines the following requests and limits:

          memoryLimit: 6G
          memoryRequest: 512M
          cpuRequest: 1000m
          cpuLimit: 4000m
    • During workspace startup, an internal che-gateway container is implicitly provisioned with the following requests and limits:

          memoryLimit: 256M
          memoryRequest: 64M
          cpuRequest: 50m
          cpuLimit: 500m
    • Additional memory and CPU are added implicitly for the Visual Studio Code - Open Source ("Code - OSS") editor:

          memoryLimit: 1024M
          memoryRequest: 256M
          cpuRequest: 30m
          cpuLimit: 500m
    • Additional memory and CPU are added implicitly for a JetBrains IDE, for example IntelliJ IDEA Ultimate:

          memoryLimit: 6144M
          memoryRequest: 2048M
          cpuRequest: 1500m
          cpuLimit: 2000m
  2. Calculate the sums of the resources required for each workspace. If you intend to use multiple devfiles, repeat this calculation for every expected devfile.

    Table 2.2. Workspace requirements for the example devfile in the previous step

    Purpose                       Pod        Container name  Memory limit  Memory request  CPU limit  CPU request
    Developer tools               workspace  tools           6 GiB         512 MiB         4000 m     1000 m
    OpenShift Dev Spaces gateway  workspace  che-gateway     256 MiB       64 MiB          500 m      50 m
    Visual Studio Code            workspace  tools           1024 MiB      256 MiB         500 m      30 m
    Total                                                    7.3 GiB       832 MiB         5000 m     1080 m

  3. Multiply the resources calculated per workspace by the number of workspaces that you expect all of your users to run simultaneously.
  4. Calculate the sums of the requirements for the OpenShift Dev Spaces Operator, Operands, and Dev Workspace Controller.

    Table 2.3. Default requirements for the OpenShift Dev Spaces Operator, Operands, and Dev Workspace Controller

    Purpose                           Pod name                         Container name           Memory limit  Memory request  CPU limit  CPU request
    OpenShift Dev Spaces operator     devspaces-operator               devspaces-operator       256 MiB       64 MiB          500 m      100 m
    OpenShift Dev Spaces Server       devspaces                        devspaces-server         1 GiB         512 MiB         1000 m     100 m
    OpenShift Dev Spaces Dashboard    devspaces-dashboard              devspaces-dashboard      256 MiB       32 MiB          500 m      100 m
    OpenShift Dev Spaces Gateway      devspaces-gateway                traefik                  4 GiB         128 MiB         1000 m     100 m
    OpenShift Dev Spaces Gateway      devspaces-gateway                configbump               256 MiB       64 MiB          500 m      50 m
    OpenShift Dev Spaces Gateway      devspaces-gateway                oauth-proxy              512 MiB       64 MiB          500 m      100 m
    OpenShift Dev Spaces Gateway      devspaces-gateway                kube-rbac-proxy          512 MiB       64 MiB          500 m      100 m
    Plugin registry                   plugin-registry                  plugin-registry          256 MiB       32 MiB          500 m      100 m
    Dev Workspace Controller Manager  devworkspace-controller-manager  devworkspace-controller  5 GiB         100 MiB         3000 m     250 m
    Dev Workspace Controller Manager  devworkspace-controller-manager  kube-rbac-proxy          N/A           N/A             N/A        N/A
    Dev Workspace Operator Catalog    devworkspace-operator-catalog    registry-server          N/A           50 MiB          N/A        10 m
    Dev Workspace Webhook Server      devworkspace-webhook-server      webhook-server           300 MiB       20 MiB          200 m      100 m
    Dev Workspace Webhook Server      devworkspace-webhook-server      kube-rbac-proxy          N/A           N/A             N/A        N/A
    Total                                                                                       12.3 GiB      1.1 GiB         8.2 cores  1.1 cores

  5. Add the workspace resources from step 3 and the operator resources from step 4 to determine total cluster resource requirements.

Verification

  • Verify that the total resource requirements account for all OpenShift Dev Spaces Operator components, Dev Workspace Controller components, and the expected number of concurrent workspaces.
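The per-workspace sums from Table 2.2 can be sketched as simple shell arithmetic, treating the 6 G tools limit as 6144 MiB; the concurrency figure is an assumption to replace with your own estimate:

```shell
# Per-workspace sums for the example devfile (tools + che-gateway + editor)
mem_limit_mib=$((6144 + 256 + 1024))   # = 7424 MiB, about 7.3 GiB
mem_request_mib=$((512 + 64 + 256))    # = 832 MiB
cpu_limit_m=$((4000 + 500 + 500))      # = 5000 m
cpu_request_m=$((1000 + 50 + 30))      # = 1080 m

# Scale by the expected number of concurrent workspaces (assumed value)
workspaces=10
echo "cluster workspace memory limit: $((mem_limit_mib * workspaces)) MiB"
echo "cluster workspace CPU limit:    $((cpu_limit_m * workspaces)) m"
```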

Chapter 3. OpenShift Dev Spaces scalability

Scaling Cloud Development Environments (CDEs) to thousands of concurrent workspaces on Kubernetes presents significant infrastructure and performance challenges.

Such a scale imposes high infrastructure demands and introduces potential bottlenecks that can impact performance and stability. Addressing these challenges requires meticulous planning, strategic architectural choices, monitoring, and continuous optimization.

CDE workloads are particularly complex to scale. The underlying IDE solutions, such as Visual Studio Code - Open Source ("Code - OSS") or JetBrains Gateway, are designed as single-user applications, not as multitenant services.

3.1. Resource quantity and object maximums

While there is no strict limit on the number of resources in a Kubernetes cluster, there are important considerations to keep in mind for large clusters.

OpenShift Container Platform, a certified distribution of Kubernetes, provides a set of tested maximums for various resources. These maximums can serve as an initial guideline for planning your environment:

Table 3.1. OpenShift Container Platform tested cluster maximums

Resource type | Tested maximum

Number of nodes | 2,000
Number of pods | 150,000
Number of pods per node | 2,500
Number of namespaces | 10,000
Number of services | 10,000
Number of secrets | 80,000
Number of config maps | 90,000

For more details on OpenShift Container Platform tested object maximums, see the OpenShift Container Platform scalability and performance documentation.

For example, it is generally not recommended to have more than 10,000 namespaces due to potential performance and management overhead. In Red Hat OpenShift Dev Spaces, each user is allocated a namespace. If you expect the user base to be large, consider spreading workloads across multiple "fit-for-purpose" clusters and potentially using solutions for multi-cluster orchestration.

3.2. Resource requirements

When deploying Red Hat OpenShift Dev Spaces on Kubernetes, accurately calculate the resource requirements for each CDE, including memory and CPU or GPU needs. This determines the right sizing of the cluster. In general, the CDE size is limited by and cannot be bigger than the worker node size.

The resource requirements for CDEs can vary significantly based on the specific workloads and configurations. A simple CDE might require only a few hundred megabytes of memory. A more complex one might need several gigabytes of memory and multiple CPU cores.

For details about calculating resource requirements, see the procedure for calculating OpenShift Dev Spaces resource requirements.

3.3. Using etcd

The primary datastore of Kubernetes cluster configuration and state is etcd. It holds information about nodes, pods, services, and custom resources.

As a distributed key-value store, etcd does not scale well past a certain threshold. As the size of etcd grows, so does the load on the cluster, risking its stability.

Important

The default etcd size is 2 GB, and the recommended maximum is 8 GB. Exceeding the maximum limit can make the Kubernetes cluster unstable and unresponsive. Even though the data stored in a ConfigMap cannot exceed 1 MiB by design, a few thousand relatively large ConfigMap objects can overload etcd storage.
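A quick back-of-the-envelope calculation illustrates the point: a few thousand ConfigMaps near the 1 MiB per-object limit already approach the recommended etcd maximum.

```shell
# 6000 ConfigMaps x 1 MiB each, expressed in GiB (integer division)
echo "$((6000 / 1024)) GiB"
```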

3.4. Object size as a factor

The size of the objects stored in etcd is also a critical factor. Each object consumes space, and as the number of objects increases, the overall size of etcd grows. The larger the object, the more space it takes. For example, etcd can be overloaded with only a few thousand large Kubernetes objects.

In the context of Red Hat OpenShift Dev Spaces, by default the Operator creates and manages the 'ca-certs-merged' ConfigMap, which contains the Certificate Authorities (CAs) bundle, in every user namespace. With a large number of Transport Layer Security (TLS) certificates in the cluster, this results in additional etcd usage.

To disable mounting the CA bundle by using the ConfigMap under the /etc/pki/ca-trust/extracted/pem path, configure the CheCluster Custom Resource by setting the disableWorkspaceCaBundleMount property to true. With this configuration, only custom certificates are mounted under the path /public-certs:

spec:
  devEnvironments:
    trustedCerts:
      disableWorkspaceCaBundleMount: true

3.5. Dev Workspace objects

For large Kubernetes deployments, particularly those involving a high number of custom resources such as DevWorkspace objects, which represent CDEs, etcd can become a significant performance bottleneck.

Important

Based on load testing with 6,000 DevWorkspace objects, etcd storage consumption was approximately 2.5 GB.

Starting from Dev Workspace Operator version 0.34.0, you can configure a pruner that automatically cleans up DevWorkspace objects that were not in use for a certain period of time. To set the pruner up, configure the DevWorkspaceOperatorConfig object as follows:

apiVersion: controller.devfile.io/v1alpha1
kind: DevWorkspaceOperatorConfig
metadata:
  name: devworkspace-operator-config
  namespace: crw
config:
  workspace:
    cleanupCronJob:
      enabled: true
      dryRun: false
      retainTime: 2592000
      schedule: "0 0 1 * *"
retainTime
By default, if a workspace was not started for more than 30 days, it is marked for deletion.
schedule
By default, the pruner runs once per month.
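The retainTime value in the example above is expressed in seconds; 2592000 corresponds to the 30-day default:

```shell
# 30 days x 24 hours x 60 minutes x 60 seconds
echo $((30 * 24 * 60 * 60))
```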

3.6. OLMConfig

When an Operator is installed by the Operator Lifecycle Manager (OLM), a stripped-down copy of its ClusterServiceVersion (CSV) is created in every namespace the Operator watches. These "Copied CSVs" communicate which controllers are reconciling resource events in a given namespace.

On large clusters with hundreds or thousands of namespaces, Copied CSVs consume an unsustainable amount of resources, including OLM memory, etcd storage, and network bandwidth. To eliminate the CSVs copied to every namespace, configure the OLMConfig object:

apiVersion: operators.coreos.com/v1
kind: OLMConfig
metadata:
  name: cluster
spec:
  features:
    disableCopiedCSVs: true

Additional information about the disableCopiedCSVs feature is available in its original enhancement proposal.

In clusters with many namespaces and cluster-wide Operators, Copied CSVs increase etcd storage usage and memory consumption. Disabling Copied CSVs significantly reduces the data stored in etcd and improves cluster performance and stability.

Disabling Copied CSVs also reduces the memory footprint of OLM, as it no longer maintains these additional resources.

For more details about disabling Copied CSVs, see the OLM documentation.

3.7. Cluster Autoscaling

Although cluster autoscaling is a powerful Kubernetes feature, you cannot always rely on it. Consider predictive scaling by analyzing load data to detect daily or weekly usage patterns.

If your workloads follow a pattern with dramatic peaks throughout the day, provision worker nodes accordingly. For example, if workspaces increase during business hours and decrease during off-hours, predictive scaling adjusts the number of worker nodes. This ensures enough resources are available during peak load while minimizing costs during off-peak hours.

You can also use open-source solutions such as Karpenter for configuration and lifecycle management of the worker nodes. Karpenter can dynamically provision and optimize worker nodes based on the specific requirements of the workloads. This helps improve resource utilization and reduce costs.

3.8. Multi-cluster

By design, Red Hat OpenShift Dev Spaces is not multi-cluster aware. You can only have one instance per cluster.

However, you can run Red Hat OpenShift Dev Spaces in a multi-cluster environment by deploying Red Hat OpenShift Dev Spaces in each cluster. Use a load balancer or Domain Name System (DNS)-based routing to direct traffic to the appropriate instance. This approach distributes the workload across clusters and provides redundancy in case of cluster failures.

3.9. Developer Sandbox example

You can test running OpenShift Dev Spaces in a multi-cluster environment by using the Developer Sandbox, a free trial environment by Red Hat.

From an infrastructure perspective, the Developer Sandbox consists of multiple Red Hat OpenShift Service on AWS (ROSA) clusters. On each cluster, the productized version of Red Hat OpenShift Dev Spaces is installed and configured using Argo CD. The workspaces.openshift.com URL is used as a single entry point to the Red Hat OpenShift Dev Spaces instances across clusters.

Figure 3.1. Developer Sandbox multi-cluster architecture

Scheme of a multi-cluster environment

You can find implementation details about the multicluster redirector in the crw-multicluster-redirector GitHub repository.

Important

The multi-cluster architecture of workspaces.openshift.com is part of the Developer Sandbox. It is a Developer Sandbox-specific solution that cannot be reused as-is in other environments. However, you can use it as a reference for implementing a similar solution well-tailored to your specific multicluster needs.

3.10. The multicluster redirector solution for OpenShift Container Platform

Red Hat offers an open-source, Quarkus-based service that acts as a single gateway for developers. This service automatically redirects users to the correct Red Hat OpenShift Dev Spaces instance on the appropriate cluster based on their OpenShift Container Platform group membership. The community-supported version is available in the devspaces-multicluster-redirector GitHub repository.

3.11. Architecture and requirements

A critical requirement for the multicluster redirector is that all users are provisioned to the host cluster where the redirector is deployed. Users authenticate through the OAuth flow of this cluster, even if they never run workloads there. The host cluster’s OpenShift Container Platform groups determine the routing logic. See the devspaces-multicluster-redirector documentation for deployment instructions.

3.12. Configuration

The routing configuration uses a ConfigMap that contains JSON to map OpenShift Container Platform groups to Red Hat OpenShift Dev Spaces URLs. The redirector uses this file to update routing tables in real-time without requiring restarts.
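As an illustration, such a ConfigMap might look like the following sketch. The ConfigMap name, namespace, key, and group-to-URL mapping are all hypothetical; consult the devspaces-multicluster-redirector documentation for the exact format:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redirector-routing-config   # hypothetical name
  namespace: devspaces-redirector   # hypothetical namespace
data:
  # Maps OpenShift Container Platform group names to Dev Spaces URLs
  groups.json: |
    {
      "team-east": "https://devspaces.cluster-east.example.com",
      "team-west": "https://devspaces.cluster-west.example.com"
    }
```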

3.13. Operational flow

The routing process follows these steps:

  1. Authenticate by using OAuth through a proxy sidecar.
  2. Pass identity and group information through HTTP headers.
  3. Verify group memberships by using OpenShift Container Platform API queries.
  4. Determine the appropriate Red Hat OpenShift Dev Spaces URL by using a mapping lookup.
  5. Redirect the user to the designated cluster instance.

If users belong to multiple OpenShift Container Platform groups, they can choose their desired Red Hat OpenShift Dev Spaces instance from a selection dashboard.
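The group-to-URL lookup in step 4 can be sketched in shell. The groups.json content below is a hypothetical mapping in the same shape as the routing ConfigMap described earlier, not the redirector's actual file:

```shell
# Hypothetical group-to-URL mapping
cat > groups.json <<'EOF'
{
  "team-east": "https://devspaces.cluster-east.example.com",
  "team-west": "https://devspaces.cluster-west.example.com"
}
EOF

# Step 4: resolve the target Dev Spaces URL for the user's group
python3 -c 'import json; print(json.load(open("groups.json"))["team-east"])'
```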

Additional resources

Chapter 4. Install Red Hat OpenShift Dev Spaces

Install Red Hat OpenShift Dev Spaces on an OpenShift cluster by using the command-line interface (CLI) or the web console.

Note

You can deploy only one instance of OpenShift Dev Spaces per cluster.

4.1. Install Dev Spaces on OpenShift using CLI

Install OpenShift Dev Spaces on OpenShift by using the dsc CLI management tool to deploy a new instance.

Prerequisites

Procedure

  1. Optional: If you previously deployed OpenShift Dev Spaces on this OpenShift cluster, ensure that the previous OpenShift Dev Spaces instance is removed:

    $ dsc server:delete
  2. Create the OpenShift Dev Spaces instance:

    $ dsc server:deploy --platform openshift

Verification

  1. Verify the OpenShift Dev Spaces instance status:

    $ dsc server:status
  2. Navigate to the OpenShift Dev Spaces cluster instance:

    $ dsc dashboard:open

4.2. Install Dev Spaces on OpenShift using the web console

Install OpenShift Dev Spaces on OpenShift through the web console by deploying the Operator from OperatorHub and creating a CheCluster instance.

Prerequisites

Procedure

  1. In the Administrator view of the OpenShift web console, go to Operators → OperatorHub and search for Red Hat OpenShift Dev Spaces.
  2. Install the Red Hat OpenShift Dev Spaces Operator.

    Important

    The Red Hat OpenShift Dev Spaces Operator depends on the Dev Workspace Operator. If you install the Red Hat OpenShift Dev Spaces Operator manually to a non-default namespace, ensure that the Dev Workspace Operator is also installed in the same namespace. The Operator Lifecycle Manager installs the Dev Workspace Operator as a dependency within the Red Hat OpenShift Dev Spaces Operator namespace. If the Dev Workspace Operator is already installed in a different namespace, two conflicting installations can result.

    Important

    If you want to onboard the Web Terminal Operator on the cluster, use the same installation namespace as the Red Hat OpenShift Dev Spaces Operator. Both Operators depend on the Dev Workspace Operator, so all three must be installed in the same namespace.

  3. Create the openshift-devspaces project in OpenShift as follows:

    oc create namespace openshift-devspaces
  4. Go to Operators → Installed Operators → Red Hat OpenShift Dev Spaces instance Specification → Create CheCluster → YAML view.
  5. In the YAML view, replace namespace: openshift-operators with namespace: openshift-devspaces.
  6. Select Create.

Verification

  1. In Red Hat OpenShift Dev Spaces instance Specification, go to devspaces to open the Details tab.
  2. Under Message, check that the value is None, which means no errors.
  3. Under Red Hat OpenShift Dev Spaces URL, wait until the URL of the OpenShift Dev Spaces instance appears, and then open the URL to check the OpenShift Dev Spaces dashboard.
  4. In the Resources tab, view the resources for the OpenShift Dev Spaces deployment and their status.

4.3. Install OpenShift Dev Spaces in a restricted environment on OpenShift

Install OpenShift Dev Spaces on an air-gapped OpenShift cluster by mirroring required images and operator catalogs to a registry within the restricted network.

On a restricted network, deploying OpenShift Dev Spaces and running workspaces requires the following public resources:

  • Operator catalog
  • Container images
  • Sample projects

To make these resources available, replace them with copies hosted in a registry that the OpenShift cluster can access.

Prerequisites

Procedure

  1. Download and execute the mirroring script to install a custom Operator catalog and mirror the related images.

    $ bash prepare-restricted-environment.sh \
      --devworkspace_operator_index registry.redhat.io/redhat/redhat-operator-index:v4.22 \
      --devworkspace_operator_version "v0.40.0" \
      --prod_operator_index "registry.redhat.io/redhat/redhat-operator-index:v4.22" \
      --prod_operator_package_name "devspaces" \
      --prod_operator_bundle_name "devspacesoperator" \
      --prod_operator_version "v3.27.0" \
      --my_registry "<my_registry>"
    --my_registry
    The private Docker registry where the images will be mirrored

Procedure

  1. Install OpenShift Dev Spaces with the configuration set in the che-operator-cr-patch.yaml during the previous step:

    $ dsc server:deploy \
      --platform=openshift \
      --olm-channel stable \
      --catalog-source-name=devspaces-disconnected-install \
      --catalog-source-namespace=openshift-marketplace \
      --skip-devworkspace-operator \
      --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml
  2. Allow incoming traffic from the OpenShift Dev Spaces namespace to all Pods in the user projects. See: Section 12.1, “Configure network policies”.

Verification

  • Verify that the OpenShift Dev Spaces instance is running:

    $ dsc server:status

4.4. Set up an Ansible sample

Configure an Ansible sample for use in restricted OpenShift Dev Spaces environments.

Prerequisites

  • You have Microsoft Visual Studio Code - Open Source IDE as the configured editor.
  • You have a 64-bit x86 system.

Procedure

  1. Mirror the following images:

    ghcr.io/ansible/ansible-devspaces@sha256:ce1ecc3b3c350eab2a9a417ce14a33f4b222a6aafd663b5cf997ccc8c601fe2c
    registry.access.redhat.com/ubi8/python-39@sha256:301fec66443f80c3cc507ccaf72319052db5a1dc56deb55c8f169011d4bbaacb
  2. Configure the cluster proxy to allow access to the following domains:

    .ansible.com
    .ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com
    Note

    Support for the following IDE and CPU architectures is planned for a future release:

    • CPU architectures

      • IBM Power (ppc64le)
      • IBM Z (s390x)

4.5. Find the fully qualified domain name (FQDN)

Retrieve the fully qualified domain name (FQDN) of your organization’s instance of OpenShift Dev Spaces on the command line to access the OpenShift Dev Spaces dashboard URL.

Tip

You can find the FQDN for your organization’s OpenShift Dev Spaces instance in the Administrator view of the OpenShift web console as follows. Go to Operators → Installed Operators → Red Hat OpenShift Dev Spaces instance Specification → devspaces → Red Hat OpenShift Dev Spaces URL.

Prerequisites

Procedure

  1. Run the following command:

    oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.status.cheURL}'

Verification

  • Open the returned URL in a web browser and verify that the OpenShift Dev Spaces dashboard loads.

4.6. Permissions to install OpenShift Dev Spaces on OpenShift using CLI

A specific set of permissions is required to install OpenShift Dev Spaces on an OpenShift cluster using the dsc CLI tool.

The following YAML shows the minimal set of permissions required to install OpenShift Dev Spaces on an OpenShift cluster using dsc:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: devspaces-install-dsc
rules:
- apiGroups: ["org.eclipse.che"]
  resources: ["checlusters"]
  verbs: ["*"]
- apiGroups: ["project.openshift.io"]
  resources: ["projects"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "create"]
- apiGroups: [""]
  resources: ["pods", "configmaps"]
  verbs: ["get", "list"]
- apiGroups: ["route.openshift.io"]
  resources: ["routes"]
  verbs: ["get", "list"]
  # OLM resources permissions
- apiGroups: ["operators.coreos.com"]
  resources: ["catalogsources", "subscriptions"]
  verbs: ["create", "get", "list", "watch"]
- apiGroups: ["operators.coreos.com"]
  resources: ["operatorgroups", "clusterserviceversions"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["operators.coreos.com"]
  resources: ["installplans"]
  verbs: ["patch", "get", "list", "watch"]
- apiGroups: ["packages.operators.coreos.com"]
  resources: ["packagemanifests"]
  verbs: ["get", "list"]

4.7. Permissions to install OpenShift Dev Spaces on OpenShift using web console

A specific set of permissions is required to install OpenShift Dev Spaces on an OpenShift cluster using the web console.

The following YAML shows the minimal set of permissions required to install OpenShift Dev Spaces on an OpenShift cluster using the web console:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: devspaces-install-web-console
rules:
- apiGroups: ["org.eclipse.che"]
  resources: ["checlusters"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "create"]
- apiGroups: ["project.openshift.io"]
  resources: ["projects"]
  verbs: ["get", "list", "create"]
  # OLM resources permissions
- apiGroups: ["operators.coreos.com"]
  resources: ["subscriptions"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["operators.coreos.com"]
  resources: ["operatorgroups"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["operators.coreos.com"]
  resources: ["clusterserviceversions", "catalogsources", "installplans"]
  verbs: ["get", "list", "watch", "delete"]
- apiGroups: ["packages.operators.coreos.com"]
  resources: ["packagemanifests", "packagemanifests/icon"]
  verbs: ["get", "list", "watch"]
  # Workaround related to viewing operators in OperatorHub
- apiGroups: ["operator.openshift.io"]
  resources: ["cloudcredentials"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["config.openshift.io"]
  resources: ["infrastructures", "authentications"]
  verbs: ["get", "list", "watch"]

Chapter 5. Configure the CheCluster Custom Resource

Configure your OpenShift Dev Spaces instance by editing the CheCluster Custom Resource (CR).

The CheCluster CR is the central configuration object for OpenShift Dev Spaces. You can set fields during installation with dsc flags or modify them at any time afterward with oc.

5.1. The CheCluster Custom Resource

A default deployment of OpenShift Dev Spaces consists of a CheCluster Custom Resource parameterized by the Red Hat OpenShift Dev Spaces Operator. Understand its structure to customize OpenShift Dev Spaces components for your environment.

The CheCluster Custom Resource is a Kubernetes object. You can configure it by editing the CheCluster Custom Resource YAML file. This file contains sections to configure each component: devWorkspace, cheServer, pluginRegistry, devfileRegistry, dashboard and imagePuller.

The Red Hat OpenShift Dev Spaces Operator translates the CheCluster Custom Resource into a config map usable by each component of the OpenShift Dev Spaces installation.

The OpenShift platform applies the configuration to each component, and creates the necessary Pods. When OpenShift detects changes in the configuration of a component, it restarts the Pods accordingly.

Example 5.1. Configuring the main properties of the OpenShift Dev Spaces server component

  1. Apply the CheCluster Custom Resource YAML file with suitable modifications in the cheServer component section.
  2. The Operator generates the che ConfigMap.
  3. OpenShift detects changes in the ConfigMap and triggers a restart of the OpenShift Dev Spaces Pod.

5.2. Use dsc to configure the CheCluster Custom Resource during installation

To deploy OpenShift Dev Spaces with a suitable configuration, edit the CheCluster Custom Resource YAML file during the installation of OpenShift Dev Spaces. Otherwise, the OpenShift Dev Spaces deployment uses the default configuration parameterized by the Operator.

Prerequisites

Procedure

  1. Create a che-operator-cr-patch.yaml YAML file that contains the subset of the CheCluster Custom Resource to configure:

    spec:
      <component>:
          <property_to_configure>: <value>
  2. Deploy OpenShift Dev Spaces and apply the changes described in che-operator-cr-patch.yaml file:

    $ dsc server:deploy \
    --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml \
    --platform <chosen_platform>
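For example, to turn off automatic namespace provisioning, as described in the security best practices chapter, the che-operator-cr-patch.yaml file would contain:

```yaml
spec:
  devEnvironments:
    defaultNamespace:
      autoProvision: false
```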

Verification

  • Verify the value of the configured property:

    $ oc get configmap che -o jsonpath='{.data.<configured_property>}' \
    -n openshift-devspaces

5.3. Use the CLI to configure the CheCluster Custom Resource

Edit the CheCluster Custom Resource YAML file to customize the behavior of a running OpenShift Dev Spaces instance for your environment.

Prerequisites

Procedure

  1. Edit the CheCluster Custom Resource on the cluster:

    $ oc edit checluster/devspaces -n openshift-devspaces
  2. Save and close the file to apply the changes.

Verification

  1. Verify the value of the configured property:

    $ oc get configmap che -o jsonpath='{.data.<configured_property>}' \
    -n openshift-devspaces

5.4. CheCluster Custom Resource fields reference

Customize the CheCluster Custom Resource by configuring its specification fields to control OpenShift Dev Spaces server, dashboard, gateway, and workspace components.

Example 5.2. A minimal CheCluster Custom Resource example.

apiVersion: org.eclipse.che/v2
kind: CheCluster
metadata:
  name: devspaces
  namespace: openshift-devspaces
spec:
  components: {}
  devEnvironments: {}
  networking: {}

Table 5.1. Development environment configuration options.

Property | Description | Default

allowedSources

AllowedSources defines the allowed sources on which workspaces can be started.

 

containerBuildConfiguration

Container build configuration.

 

containerResourceCaps

ContainerResourceCaps defines the maximum resource requirements enforced for workspace containers. If a container specifies limits or requests that exceed these values, they will be capped at the maximum. Note: Caps only apply when resources are already specified on a container. For containers without resource specifications, use DefaultContainerResources instead. These resource caps do not apply to initContainers or the projectClone container.

 

containerRunConfiguration

Container run configuration.

 

defaultComponents

Default components applied to DevWorkspaces. These default components are meant to be used with a Devfile that does not contain any components.

 

defaultContainerResources

DefaultContainerResources defines the resource requirements (memory/cpu limit/request) used for container components that do not define limits or requests.

 

defaultEditor

The default editor to create workspaces with. It can be a plugin ID or a URI. The plugin ID must have the publisher/name/version format. The URI must start with http:// or https://.

 

defaultNamespace

User’s default namespace.

{ "autoProvision": true, "template": "<username>-che"}

defaultPlugins

Default plug-ins applied to DevWorkspaces.

 

deploymentStrategy

DeploymentStrategy defines the deployment strategy used to replace existing workspace pods with new ones. The available deployment strategies are Recreate and RollingUpdate. With the Recreate deployment strategy, the existing workspace pod is killed before the new one is created. With the RollingUpdate deployment strategy, a new workspace pod is created and the existing workspace pod is deleted only when the new workspace pod is in a ready state. If not specified, the default Recreate deployment strategy is used.

 

disableContainerBuildCapabilities

Disables the container build capabilities. When set to false (the default value), the devEnvironments.security.containerSecurityContext field is ignored, and the following container SecurityContext is applied:

containerSecurityContext:
  allowPrivilegeEscalation: true
  capabilities:
    add:
    - SETGID
    - SETUID

 

disableContainerRunCapabilities

Disables container run capabilities. Can be enabled on OpenShift version 4.20 or later. When set to false, the value from devEnvironments.security.containerSecurityContext is ignored, and instead the SecurityContext defined in devEnvironments.containerRunConfiguration.containerSecurityContext is applied.

true

editorsDownloadUrls

EditorsDownloadUrls provides a list of custom download URLs for JetBrains editors in a local-to-remote flow. It is particularly useful in disconnected or air-gapped environments, where editors cannot be downloaded from the public internet. Each entry contains an editor identifier in the publisher/name/version format and the corresponding download URL. Currently, this field is intended only for JetBrains editors and should not be used for other editor types.

 

gatewayContainer

GatewayContainer configuration.

 

ignoredUnrecoverableEvents

IgnoredUnrecoverableEvents defines a list of Kubernetes event names that should be ignored when deciding to fail a workspace that is starting. This option should be used if a transient cluster issue is triggering false-positives (for example, if the cluster occasionally encounters FailedScheduling events). Events listed here will not trigger workspace failures.

[ "FailedScheduling"]

imagePullPolicy

ImagePullPolicy defines the imagePullPolicy used for containers in a DevWorkspace.

 

maxNumberOfRunningWorkspacesPerCluster

The maximum number of concurrently running workspaces across the entire Kubernetes cluster. This applies to all users in the system. If the value is set to -1, it means there is no limit on the number of running workspaces.

 

maxNumberOfRunningWorkspacesPerUser

The maximum number of running workspaces per user. A value of -1 allows users to run an unlimited number of workspaces.

 

maxNumberOfWorkspacesPerUser

Total number of workspaces, both stopped and running, that a user can keep. A value of -1 allows users to keep an unlimited number of workspaces.

-1

networking

Configuration settings related to the workspaces networking.

 

nodeSelector

The node selector limits the nodes that can run the workspace pods.

 

persistUserHome

PersistUserHome defines configuration options for persisting the user home directory in workspaces.

 

podSchedulerName

Pod scheduler for the workspace pods. If not specified, the pod scheduler is set to the default scheduler on the cluster.

 

projectCloneContainer

Project clone container configuration.

 

runtimeClassName

RuntimeClassName specifies the spec.runtimeClassName for workspace pods.

 

secondsOfInactivityBeforeIdling

Idle timeout for workspaces in seconds. This timeout is the duration after which a workspace will be idled if there is no activity. To disable workspace idling due to inactivity, set this value to -1.

1800

secondsOfRunBeforeIdling

Run timeout for workspaces in seconds. This timeout is the maximum duration a workspace runs. To disable workspace run timeout, set this value to -1.

-1

security

Workspace security configuration.

 

serviceAccount

ServiceAccount used by the DevWorkspace operator when starting workspaces.

 

serviceAccountTokens

List of ServiceAccount tokens that will be mounted into workspace pods as projected volumes.

 

startTimeoutSeconds

StartTimeoutSeconds determines the maximum duration (in seconds) that a workspace can take to start before it is automatically failed. If not specified, the default value of 300 seconds (5 minutes) is used.

300

storage

Workspaces persistent storage.

{ "pvcStrategy": "per-user"}

tolerations

The pod tolerations of the workspace pods limit where the workspace pods can run.

 

trustedCerts

Trusted certificate settings.

 

user

User configuration.

 

workspacesPodAnnotations

WorkspacesPodAnnotations defines additional annotations for workspace pods.

 

Table 5.2. allowedSources options.

Property | Description | Default

urls

The list of approved URLs for starting Cloud Development Environments (CDEs). CDEs can only be initiated from these URLs. Wildcards (*) are supported in URLs, allowing flexible matching for specific URL patterns. For instance, https://example.com/* would allow CDEs to be initiated from any path within example.com.

 

Table 5.3. defaultNamespace options.

Property | Description | Default

autoProvision

Indicates whether a user namespace can be created automatically. If set to false, the user namespace must be pre-created by a cluster administrator.

true

template

If you do not create the user namespaces in advance, this field defines the Kubernetes namespace created when you start your first workspace. You can use <username> and <userid> placeholders, such as che-workspace-<username>.

"<username>-che"

Table 5.4. defaultPlugins options.

Property | Description | Default

editor

The editor ID to specify default plug-ins for. The plugin ID must have publisher/name/version format.

 

plugins

Default plug-in URIs for the specified editor.

 

Table 5.5. editorsDownloadUrls options.

Property | Description | Default

editor

The editor ID must have publisher/name/version format.

 

url

The download URL for the editor.

 

Table 5.6. gatewayContainer options.

Property | Description | Default

env

List of environment variables to set in the container.

 

image

Container image. Omit it or leave it empty to use the default container image provided by the Operator.

 

imagePullPolicy

Image pull policy. Default value is Always for nightly, next or latest images, and IfNotPresent in other cases.

 

name

Container name.

 

resources

Compute resources required by this container.

 

Table 5.7. networking options.

Property | Description | Default

externalTLSConfig

External TLS configuration.

 

Table 5.8. externalTLSConfig options.

Property | Description | Default

annotations

Annotations to be applied to ingress/route objects when external TLS is enabled.

 

enabled

Enabled determines whether external TLS configuration is used. If set to true, the operator will not set TLS config for ingress/route objects. Instead, it ensures that any custom TLS configuration will not be reverted on synchronization.

 

labels

Labels to be applied to ingress/route objects when external TLS is enabled.

 

Table 5.9. persistUserHome options.

Property | Description | Default

disableInitContainer

Determines whether the init container that initializes the persistent home directory should be disabled. When the /home/user directory is persisted, the init container is used to initialize the directory before the workspace starts. If set to true, the init container will not be created. Disabling the init container allows home persistence to be initialized by the entrypoint present in the workspace’s first container component. This field is not used if the devEnvironments.persistUserHome.enabled field is set to false. The init container is enabled by default.

 

enabled

Determines whether the user home directory in workspaces should persist between workspace shutdown and startup. Must be used with the 'per-user' or 'per-workspace' PVC strategy to take effect. Disabled by default.

 
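Combining the two options above, a minimal sketch of a CheCluster fragment that persists the user home directory might look like the following. The per-user PVC strategy shown here is one of the strategies required for the setting to take effect, and all values are illustrative:

```
spec:
  devEnvironments:
    storage:
      pvcStrategy: per-user
    persistUserHome:
      enabled: true
```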

Table 5.10. projectCloneContainer options.

Property | Description | Default

env

List of environment variables to set in the container.

 

image

Container image. Omit it or leave it empty to use the default container image provided by the Operator.

 

imagePullPolicy

Image pull policy. Default value is Always for nightly, next or latest images, and IfNotPresent in other cases.

 

name

Container name.

 

resources

Compute resources required by this container.

 

Table 5.11. security options.

Property | Description | Default

containerSecurityContext

Defines the SecurityContext applied to all workspace-related containers. When set, the specified values are merged with the default SecurityContext configuration. This setting takes effect only if both devEnvironments.disableContainerBuildCapabilities and devEnvironments.disableContainerRunCapabilities are set to true.

 

podSecurityContext

PodSecurityContext used by all workspace-related pods. If set, defined values are merged into the default PodSecurityContext configuration.

 

Table 5.12. storage options.

Property | Description | Default

perUserStrategyPvcConfig

PVC settings when using the per-user PVC strategy.

 

perWorkspaceStrategyPvcConfig

PVC settings when using the per-workspace PVC strategy.

 

pvcStrategy

Persistent volume claim strategy for the OpenShift Dev Spaces server. The supported strategies are: per-user (all workspace PVCs in one volume), per-workspace (each workspace is given its own individual PVC), and ephemeral (non-persistent storage where local changes are lost when the workspace is stopped).

"per-user"

Table 5.13. per-user PVC strategy options.

Property | Description | Default

claimSize

Persistent Volume Claim size. To update the claim size, the storage class that provisions it must support resizing.

 

storageAccessMode

StorageAccessMode defines the desired access modes for the volume. Use it to set the PersistentVolume access mode to RWO or RWX when using the per-user strategy, allowing a user to reuse the volume across multiple workspaces. Defaults to ReadWriteOnce if not specified.

 

storageClass

Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used.

 

Table 5.14. per-workspace PVC strategy options.

Property | Description | Default

claimSize

Persistent Volume Claim size. To update the claim size, the storage class that provisions it must support resizing.

 

storageAccessMode

StorageAccessMode defines the desired access modes for the volume. Use it to set the PersistentVolume access mode to RWO or RWX when using the per-user strategy, allowing a user to reuse the volume across multiple workspaces. Defaults to ReadWriteOnce if not specified.

 

storageClass

Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used.

 

Table 5.15. trustedCerts options.

Property | Description | Default

disableWorkspaceCaBundleMount

By default, the Operator creates and mounts the 'ca-certs-merged' ConfigMap containing the CA certificate bundle in users' workspaces at two locations: '/public-certs' and '/etc/pki/ca-trust/extracted/pem'. The '/etc/pki/ca-trust/extracted/pem' directory is where the system stores extracted CA certificates for trusted certificate authorities on Red Hat-based systems (for example, CentOS, Fedora). This option disables mounting the CA bundle to the '/etc/pki/ca-trust/extracted/pem' directory while still mounting it to '/public-certs'.

 

gitTrustedCertsConfigMapName

The ConfigMap contains certificates to propagate to the OpenShift Dev Spaces components and to provide a particular configuration for Git. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/deploying-che-with-support-for-git-repositories-with-self-signed-certificates/ The ConfigMap must have the app.kubernetes.io/part-of=che.eclipse.org label.

 

Table 5.16. user options.

Property | Description | Default

clusterRoles

Additional ClusterRoles assigned to the user. The role must have app.kubernetes.io/part-of=che.eclipse.org label.

 

Table 5.17. containerBuildConfiguration options.

Property | Description | Default

openShiftSecurityContextConstraint

OpenShift security context constraint to build containers.

"container-build"

Table 5.18. containerRunConfiguration options.

Property | Description | Default

containerSecurityContext

SecurityContext applied to all workspace containers when run capabilities are enabled. The default procMount: "Unmasked" is set because the pod runs in a user namespace, which safely isolates the container’s /proc from the host. This allows the container to modify its own sysctl settings for configuring networking for nested containers.

{ "allowPrivilegeEscalation": true, "capabilities": { "add": [ "SETGID", "SETUID" ] }, "procMount": "Unmasked"}

openShiftSecurityContextConstraint

Specifies the OpenShift SecurityContextConstraint used to run containers.

"container-run"

workspacesPodAnnotations

Extra annotations applied to all workspace pods, in addition to those defined in devEnvironments.workspacePodAnnotations. Enables /dev/fuse for access to the fuse driver and /dev/net/tun for safe network access.

{ "io.kubernetes.cri-o.Devices": "/dev/fuse,/dev/net/tun"}

Table 5.19. OpenShift Dev Spaces components configuration.

Property | Description | Default

cheServer

General configuration settings related to the OpenShift Dev Spaces server.

{ "debug": false, "logLevel": "INFO"}

dashboard

Configuration settings related to the dashboard used by the OpenShift Dev Spaces installation.

 

devWorkspace

DevWorkspace Operator configuration.

 

devfileRegistry

Configuration settings related to the devfile registry used by the OpenShift Dev Spaces installation.

 

imagePuller

Kubernetes Image Puller configuration.

 

metrics

OpenShift Dev Spaces server metrics configuration.

{ "enable": true}

pluginRegistry

Configuration settings related to the plug-in registry used by the OpenShift Dev Spaces installation.

 

Table 5.20. General configuration settings related to the OpenShift Dev Spaces server component.

Property | Description | Default

clusterRoles

Additional ClusterRoles assigned to the OpenShift Dev Spaces ServiceAccount. Each role must have the app.kubernetes.io/part-of=che.eclipse.org label. The default roles are: - <devspaces-namespace>-cheworkspaces-clusterrole - <devspaces-namespace>-cheworkspaces-namespaces-clusterrole - <devspaces-namespace>-cheworkspaces-devworkspace-clusterrole where <devspaces-namespace> is the namespace where the CheCluster CR is created. The OpenShift Dev Spaces Operator must already have all permissions in these ClusterRoles to grant them.

 

debug

Enables the debug mode for OpenShift Dev Spaces server.

false

deployment

Deployment override options.

 

extraProperties

A map of additional environment variables applied in the generated che ConfigMap to be used by the OpenShift Dev Spaces server in addition to the values already generated from other fields of the CheCluster custom resource (CR). If the extraProperties field contains a property normally generated in che ConfigMap from other CR fields, the value defined in the extraProperties is used instead.

 

logLevel

The log level for the OpenShift Dev Spaces server: INFO or DEBUG.

"INFO"

proxy

Proxy server settings for a Kubernetes cluster. No additional configuration is required for an OpenShift cluster. By specifying these settings for an OpenShift cluster, you override the OpenShift proxy configuration.

 
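A minimal sketch of the server settings above; CHE_EXAMPLE_PROPERTY is a hypothetical key shown only to illustrate the extraProperties map, not a real server property:

```
spec:
  components:
    cheServer:
      debug: true
      logLevel: DEBUG
      extraProperties:
        # hypothetical property name, for illustration only
        CHE_EXAMPLE_PROPERTY: "value"
```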

Table 5.21. proxy options.

Property | Description | Default

credentialsSecretName

The name of the secret that contains the username and password for the proxy server. The secret must have the app.kubernetes.io/part-of=che.eclipse.org label.

 

nonProxyHosts

A list of hosts that can be reached directly, bypassing the proxy. To specify a wildcard domain, use the form .<DOMAIN>. For example: - localhost - 127.0.0.1 - my.host.com - 123.42.12.32 Use only when a proxy configuration is required. The Operator respects the OpenShift cluster-wide proxy configuration; defining nonProxyHosts in a custom resource merges the non-proxy hosts from the cluster proxy configuration with the ones defined in the custom resource. See the following page: https://docs.openshift.com/container-platform/4.22/networking/enable-cluster-wide-proxy.html. In some proxy configurations, localhost may not translate to 127.0.0.1. Specify both localhost and 127.0.0.1 in this situation.

 

port

Proxy server port.

 

url

URL (protocol+hostname) of the proxy server. Use only when a proxy configuration is required. The Operator respects the OpenShift cluster-wide proxy configuration; defining url in a custom resource overrides the cluster proxy configuration. See the following page: https://docs.openshift.com/container-platform/4.22/networking/enable-cluster-wide-proxy.html.

 
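Putting the proxy options together for a Kubernetes cluster, a hedged sketch (the hostnames and the secret name are placeholders for your environment):

```
spec:
  components:
    cheServer:
      proxy:
        url: http://proxy.example.com
        port: "3128"
        nonProxyHosts:
          - localhost
          - 127.0.0.1
          - .internal.example.com
        credentialsSecretName: proxy-credentials
```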

Table 5.22. Configuration settings related to the Plug-in registry component used by the OpenShift Dev Spaces installation.

Property | Description | Default

deployment

Deployment override options.

 

disableInternalRegistry

Disables internal plug-in registry.

 

externalPluginRegistries

External plugin registries.

 

openVSXURL

Open VSX registry URL. If omitted, an embedded instance is used.

 
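For example, pointing workspaces at the public Open VSX registry instead of the embedded instance:

```
spec:
  components:
    pluginRegistry:
      openVSXURL: https://open-vsx.org
```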

Table 5.23. externalPluginRegistries options.

Property | Description | Default

url

Public URL of the plug-in registry.

 

Table 5.24. Configuration settings related to the Devfile registry component used by the OpenShift Dev Spaces installation.

Property | Description | Default

deployment

Deprecated. Deployment override options.

 

disableInternalRegistry

Disables internal devfile registry.

 

externalDevfileRegistries

External devfile registries serving sample ready-to-use devfiles.

 

Table 5.25. externalDevfileRegistries options.

Property | Description | Default

url

The public URL of the devfile registry that serves sample ready-to-use devfiles.

 

Table 5.26. Configuration settings related to the Dashboard component used by the OpenShift Dev Spaces installation.

Property | Description | Default

branding

Dashboard branding resources.

 

deployment

Deployment override options.

 

headerMessage

Dashboard header message.

 

logLevel

The log level for the Dashboard.

"ERROR"

Table 5.27. headerMessage options.

Property | Description | Default

show

Instructs the dashboard to show the message.

 

text

Warning message displayed on the user dashboard.

 
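A sketch of a dashboard warning banner using the options above (the message text is only an example):

```
spec:
  components:
    dashboard:
      headerMessage:
        show: true
        text: "Scheduled cluster maintenance: Saturday 02:00-04:00 UTC"
```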

Table 5.28. branding options.

Property | Description | Default

logo

Dashboard logo.

 

Table 5.29. Kubernetes Image Puller component configuration.

Property | Description | Default

enable

Install and configure the community-supported Kubernetes Image Puller Operator. When you set the value to true without providing any spec, a default Kubernetes Image Puller object managed by the Operator is created. When you set the value to false, the Kubernetes Image Puller object is deleted and the Operator is uninstalled, regardless of whether a spec is provided. If you leave the spec.images field empty, a set of recommended workspace-related images is automatically detected and pre-pulled after installation. Note that while this Operator and its behavior are community-supported, its payload may be commercially supported for pulling commercially supported images.

 

spec

A Kubernetes Image Puller spec to configure the image puller in the CheCluster.

 
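A minimal sketch that enables the Image Puller and relies on the default, auto-detected image set (no spec.images provided):

```
spec:
  components:
    imagePuller:
      enable: true
```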

Table 5.30. OpenShift Dev Spaces server metrics component configuration.

Property | Description | Default

enable

Enables metrics for the OpenShift Dev Spaces server endpoint.

true

Table 5.31. Configuration settings that allows users to work with remote Git repositories.

Property | Description | Default

azure

Enables users to work with repositories hosted on Azure DevOps Service (dev.azure.com).

 

bitbucket

Enables users to work with repositories hosted on Bitbucket (bitbucket.org or self-hosted).

 

github

Enables users to work with repositories hosted on GitHub (github.com or GitHub Enterprise).

 

gitlab

Enables users to work with repositories hosted on GitLab (gitlab.com or self-hosted).

 

Table 5.32. github options.

Property | Description | Default

disableSubdomainIsolation

Disables subdomain isolation. Deprecated in favor of the che.eclipse.org/scm-github-disable-subdomain-isolation annotation. See the following page for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/.

 

endpoint

GitHub server endpoint URL. Deprecated in favor of the che.eclipse.org/scm-server-endpoint annotation. See the following page for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/.

 

secretName

Kubernetes secret that contains the Base64-encoded GitHub OAuth Client id and GitHub OAuth Client secret. See the following page for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/.

 

Table 5.33. gitlab options.

Property | Description | Default

endpoint

GitLab server endpoint URL. Deprecated in favor of the che.eclipse.org/scm-server-endpoint annotation. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-gitlab/.

 

secretName

Kubernetes secret that contains the Base64-encoded GitLab Application id and GitLab Application Client secret. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-gitlab/.

 

Table 5.34. bitbucket options.

Property | Description | Default

endpoint

Bitbucket server endpoint URL. Deprecated in favor of the che.eclipse.org/scm-server-endpoint annotation. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-1-for-a-bitbucket-server/.

 

secretName

Kubernetes secret that contains Base64-encoded Bitbucket OAuth 1.0 or OAuth 2.0 data. See the following pages for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-1-for-a-bitbucket-server/ and https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-the-bitbucket-cloud/.

 

Table 5.35. azure options.

Property | Description | Default

secretName

Kubernetes secret that contains the Base64-encoded Azure DevOps Service Application ID and Client Secret. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-microsoft-azure-devops-services

 

Table 5.36. Networking, OpenShift Dev Spaces authentication and TLS configuration.

Property | Description | Default

annotations

Defines annotations which will be set for an Ingress (a route for the OpenShift platform). The defaults for Kubernetes platforms are: kubernetes.io/ingress.class: "nginx", nginx.ingress.kubernetes.io/proxy-read-timeout: "3600", nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600", nginx.ingress.kubernetes.io/ssl-redirect: "true"

 

auth

Authentication settings.

{ "gateway": { "configLabels": { "app": "che", "component": "che-gateway-config" } }}

domain

For an OpenShift cluster, the Operator uses the domain to generate a hostname for the route. The generated hostname follows this pattern: che-<devspaces-namespace>.<domain>. The <devspaces-namespace> is the namespace where the CheCluster CR is created. In conjunction with labels, it creates a route served by a non-default Ingress controller. For a Kubernetes cluster, it contains a global ingress domain. There are no default values: you must specify them.

 

hostname

The public hostname of the installed OpenShift Dev Spaces server.

 

ingressClassName

IngressClassName is the name of an IngressClass cluster resource. If a class name is defined in both the IngressClassName field and the kubernetes.io/ingress.class annotation, IngressClassName field takes precedence.

 

labels

Defines labels which will be set for an Ingress (a route for OpenShift platform).

 

tlsSecretName

The name of the secret used to set up Ingress TLS termination. If the field is an empty string, the default cluster certificate is used. The secret must have the app.kubernetes.io/part-of=che.eclipse.org label.

 
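A hedged example combining the hostname and TLS options above; both values are placeholders for your environment:

```
spec:
  networking:
    hostname: devspaces.apps.example.com
    tlsSecretName: custom-tls-secret
```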

Table 5.37. auth options.

Property | Description | Default

advancedAuthorization

Advanced authorization settings. Determines which users and groups are allowed to access OpenShift Dev Spaces. A user is allowed access if they are in the allowUsers list or a member of a group in the allowGroups list, and are neither in the denyUsers list nor a member of a group in the denyGroups list. If allowUsers and allowGroups are empty, all users are allowed access. If denyUsers and denyGroups are empty, no users are denied access.

 

gateway

Gateway settings.

{ "configLabels": { "app": "che", "component": "che-gateway-config" }}

identityProviderURL

Public URL of the Identity Provider server.

 

identityToken

Identity token to be passed to upstream. There are two types of tokens supported: id_token and access_token. Default value is id_token. This field is specific to OpenShift Dev Spaces installations made for Kubernetes only and ignored for OpenShift.

 

oAuthAccessTokenInactivityTimeoutSeconds

Inactivity timeout for tokens to set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. 0 means tokens for this client never time out.

 

oAuthAccessTokenMaxAgeSeconds

Access token max age for tokens to set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. 0 means no expiration.

 

oAuthClientName

Name of the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side.

 

oAuthScope

Access Token Scope. This field is specific to OpenShift Dev Spaces installations made for Kubernetes only and ignored for OpenShift.

 

oAuthSecret

Name of the secret set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. For Kubernetes, this can either be the plain text oAuthSecret value, or the name of a kubernetes secret which contains a key oAuthSecret and the value is the secret. NOTE: this secret must exist in the same namespace as the CheCluster resource and contain the label app.kubernetes.io/part-of=che.eclipse.org.

 
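The authorization rules above can be sketched as follows; all user and group names are placeholders:

```
spec:
  networking:
    auth:
      advancedAuthorization:
        allowUsers:
          - user1
        allowGroups:
          - developers
        denyUsers:
          - contractor1
        denyGroups:
          - interns
```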

Table 5.38. gateway options.

Property | Description | Default

configLabels

Gateway configuration labels.

{ "app": "che", "component": "che-gateway-config"}

deployment

Deployment override options. Since the gateway deployment consists of several containers, they must be distinguished in the configuration by their names: - gateway - configbump - oauth-proxy - kube-rbac-proxy

 

kubeRbacProxy

Configuration for kube-rbac-proxy within the OpenShift Dev Spaces gateway pod.

 

oAuthProxy

Configuration for oauth-proxy within the OpenShift Dev Spaces gateway pod.

 

traefik

Configuration for Traefik within the OpenShift Dev Spaces gateway pod.

 

Table 5.39. advancedAuthorization options.

Property | Description | Default

allowGroups

List of groups allowed to access OpenShift Dev Spaces (currently supported in OpenShift only).

 

allowUsers

List of users allowed to access OpenShift Dev Spaces.

 

denyGroups

List of groups denied access to OpenShift Dev Spaces (currently supported in OpenShift only).

 

denyUsers

List of users denied access to OpenShift Dev Spaces.

 

Table 5.40. Configuration of an alternative registry that stores OpenShift Dev Spaces images.

Property | Description | Default

hostname

An optional hostname or URL of an alternative container registry to pull images from. This value overrides the container registry hostname defined in all the default container images involved in an OpenShift Dev Spaces deployment. This is particularly useful for installing OpenShift Dev Spaces in a restricted environment.

 

organization

An optional repository name of an alternative registry to pull images from. This value overrides the container registry organization defined in all the default container images involved in an OpenShift Dev Spaces deployment. This is particularly useful for installing OpenShift Dev Spaces in a restricted environment.

 

Table 5.41. deployment options.

Property | Description | Default

containers

List of containers belonging to the pod.

 

nodeSelector

The node selector limits the nodes that can run the pod.

 

securityContext

Security options the pod should run with.

 

tolerations

The pod tolerations of the component pod limit where the pod can run.

 

Table 5.42. containers options.

Property | Description | Default

env

List of environment variables to set in the container.

 

image

Container image. Omit it or leave it empty to use the default container image provided by the Operator.

 

imagePullPolicy

Image pull policy. Default value is Always for nightly, next or latest images, and IfNotPresent in other cases.

 

name

Container name.

 

resources

Compute resources required by this container.

 

Table 5.43. resources options.

Property | Description | Default

limits

Describes the maximum amount of compute resources allowed.

 

request

Describes the minimum amount of compute resources required.

 

Table 5.44. request options.

Property | Description | Default

cpu

CPU, in cores. (500m = .5 cores) If the value is not specified, then the default value is set depending on the component. If value is 0, then no value is set for the component.

 

memory

Memory, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024) If the value is not specified, then the default value is set depending on the component. If value is 0, then no value is set for the component.

 

Table 5.45. limits options.

Property | Description | Default

cpu

CPU, in cores. (500m = .5 cores) If the value is not specified, then the default value is set depending on the component. If value is 0, then no value is set for the component.

 

memory

Memory, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024) If the value is not specified, then the default value is set depending on the component. If value is 0, then no value is set for the component.

 

Table 5.46. securityContext options.

Property | Description | Default

fsGroup

A special supplemental group that applies to all containers in a pod. The default value is 1724.

 

runAsUser

The UID to run the entrypoint of the container process. The default value is 1724.

 

Table 5.47. CheCluster Custom Resource status defines the observed state of OpenShift Dev Spaces installation

Property | Description | Default

chePhase

Specifies the current phase of the OpenShift Dev Spaces deployment.

 

cheURL

Public URL of the OpenShift Dev Spaces server.

 

cheVersion

Currently installed OpenShift Dev Spaces version.

 

devfileRegistryURL

Deprecated. The public URL of the internal devfile registry.

 

gatewayPhase

Specifies the current phase of the gateway deployment.

 

message

A human readable message indicating details about why the OpenShift Dev Spaces deployment is in the current phase.

 

pluginRegistryURL

The public URL of the internal plug-in registry.

 

reason

A brief CamelCase message indicating details about why the OpenShift Dev Spaces deployment is in the current phase.

 

workspaceBaseDomain

The resolved workspace base domain. This is either a copy of the explicitly defined property of the same name in the spec or, if it is undefined in the spec and the installation is running on OpenShift, the automatically resolved base domain for routes.

 

Chapter 6. Configure projects

Configure projects for OpenShift Dev Spaces workspaces, including namespace templates, pre-provisioning, and resource synchronization.

6.1. Project configuration

OpenShift Dev Spaces isolates workspaces for each user in a project, identified by labels and annotations. If the project does not exist, OpenShift Dev Spaces creates it from a template.

You can modify OpenShift Dev Spaces behavior by configuring the project name, provisioning projects in advance, or configuring a user project.

6.2. Configure project name

Configure the project name template that OpenShift Dev Spaces uses when creating workspace projects to enforce naming conventions and organizational compliance.

A valid project name template follows these conventions:

  • The <username> or <userid> placeholder is mandatory.
  • Usernames and IDs cannot contain invalid characters. If a username or ID is incompatible with OpenShift naming conventions, OpenShift Dev Spaces replaces incompatible characters with the - symbol.
  • OpenShift Dev Spaces evaluates the <userid> placeholder into a 14 character long string, and adds a random six character long suffix to prevent IDs from colliding. The result is stored in the user preferences for reuse.
  • Kubernetes limits the length of a project name to 63 characters.
  • OpenShift limits the length further to 49 characters.

Prerequisites

Procedure

  1. Configure the CheCluster Custom Resource. See Section 5.3, “Use the CLI to configure the CheCluster Custom Resource”.

    spec:
      devEnvironments:
        defaultNamespace:
          template: <workspace_namespace_template>

    where:

    <workspace_namespace_template>

    The project name template. Must include the <username> or <userid> placeholder.

    Table 6.1. User workspaces project name template examples

    User workspaces project name template | Resulting project example

    <username>-devspaces (default)

    user1-devspaces

    <userid>-namespace

    cge1egvsb2nhba-namespace-ul1411

    <userid>-aka-<username>-namespace

    cgezegvsb2nhba-aka-user1-namespace-6m2w2b

Verification

  • Start a workspace and verify that the workspace project name matches the configured template:

    oc get devworkspaces -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\n"}{end}'

6.3. Provision projects in advance

Provision workspace projects in advance, rather than relying on automatic provisioning, to control namespace naming and apply custom resource quotas. Repeat the procedure for each user.

Prerequisites

Procedure

  1. Disable automatic namespace provisioning on the CheCluster level:

    devEnvironments:
      defaultNamespace:
        autoProvision: false
  2. Create the <project_name> project for <username> user with the following labels and annotations:

    kind: Namespace
    apiVersion: v1
    metadata:
      name: <project_name>
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: workspaces-namespace
      annotations:
        che.eclipse.org/username: <username>

    where:

    <project_name>
    A project name of your choosing.
    <username>
    The username of the OpenShift Dev Spaces user.

Verification

  • Verify that the project was created with the correct labels:

    $ oc get namespace <project_name> --show-labels

6.4. Configure a user namespace

Synchronize ConfigMaps, Secrets, PersistentVolumeClaims, and other Kubernetes objects from the openshift-devspaces namespace to user-specific namespaces to provide consistent workspace configurations.

If you make changes to a Kubernetes resource in the openshift-devspaces namespace, OpenShift Dev Spaces immediately synchronizes the changes across all user namespaces. In reverse, if a Kubernetes resource is modified in a user namespace, OpenShift Dev Spaces immediately reverts the changes.

Prerequisites

Warning

Applying or modifying a Secret or ConfigMap with the controller.devfile.io/mount-to-devworkspace: 'true' label restarts all running workspaces in the project. Ensure that users save their work before you apply these changes.

Procedure

  1. Create the following ConfigMap to mount it into every workspace:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: devspaces-user-configmap
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: workspaces-config
    data:
      ...

    For example, to mount a default SSH configuration into every workspace, create a ConfigMap:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: ssh-config-configmap
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/component: workspaces-config
        app.kubernetes.io/part-of: che.eclipse.org
      annotations:
        controller.devfile.io/mount-as: subpath
        controller.devfile.io/mount-path: /etc/ssh/ssh_config.d/
    data:
      ssh.conf: <ssh_config_content>

    The ConfigMap propagates the SSH configuration as an extension by using Include /etc/ssh/ssh_config.d/*.conf. For details, see the Include definition in ssh_config(5).

    For other labels and annotations, see the DevWorkspace Operator documentation on mounting volumes, configmaps, and secrets.

  2. Optional: To prevent the ConfigMap from being mounted automatically, add these labels:

    controller.devfile.io/watch-configmap: "false"
    controller.devfile.io/mount-to-devworkspace: "false"
  3. Optional: To retain the ConfigMap in a user namespace after deletion from openshift-devspaces, add this annotation:

    che.eclipse.org/sync-retain-on-delete: "true"
  4. Create the following Secret to mount it into every workspace:

    kind: Secret
    apiVersion: v1
    metadata:
      name: devspaces-user-secret
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: workspaces-config
    stringData:
        ...

    See the DevWorkspace Operator documentation on mounting volumes, configmaps, and secrets for other possible labels and annotations.

  5. Optional: To prevent the Secret from being mounted automatically, add these labels:

    controller.devfile.io/watch-secret: "false"
    controller.devfile.io/mount-to-devworkspace: "false"
  6. Optional: To retain the Secret in a user namespace after deletion from openshift-devspaces, add this annotation:

    che.eclipse.org/sync-retain-on-delete: "true"
  7. Create the following PersistentVolumeClaim for every user project:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: devspaces-user-pvc
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: workspaces-config
    spec:
      ...

    See the DevWorkspace Operator documentation on mounting volumes, configmaps, and secrets for other possible labels and annotations.

  8. Optional: By default, deleting a PersistentVolumeClaim from openshift-devspaces does not delete it from a user namespace. To delete the PersistentVolumeClaim from user namespaces as well, add this annotation:

    che.eclipse.org/sync-retain-on-delete: "false"
  9. Optional: To use the OpenShift Kubernetes Engine, create a Template object to replicate all resources defined within the template across each user project.

    Aside from the previously mentioned ConfigMap, Secret, and PersistentVolumeClaim, Template objects can include:

    • LimitRange
    • NetworkPolicy
    • ResourceQuota
    • Role
    • RoleBinding

      apiVersion: template.openshift.io/v1
      kind: Template
      metadata:
        name: devspaces-user-namespace-configurator
        namespace: openshift-devspaces
        labels:
          app.kubernetes.io/part-of: che.eclipse.org
          app.kubernetes.io/component: workspaces-config
      objects:
        ...
      parameters:
      - name: PROJECT_NAME
      - name: PROJECT_ADMIN_USER

      The parameters field is optional and declares which parameters the template accepts. Currently, only PROJECT_NAME and PROJECT_ADMIN_USER are supported: PROJECT_NAME is the name of the OpenShift Dev Spaces namespace, and PROJECT_ADMIN_USER is the OpenShift Dev Spaces user of the namespace.

      The namespace name in objects is replaced with the user’s namespace name during synchronization.

      For example, a Template that replicates ResourceQuota, LimitRange, Role, and RoleBinding objects:

      apiVersion: template.openshift.io/v1
      kind: Template
      metadata:
        name: devspaces-user-namespace-configurator
        namespace: openshift-devspaces
        labels:
          app.kubernetes.io/part-of: che.eclipse.org
          app.kubernetes.io/component: workspaces-config
      objects:
      - apiVersion: v1
        kind: ResourceQuota
        metadata:
          name: devspaces-user-resource-quota
        spec:
          ...
      - apiVersion: v1
        kind: LimitRange
        metadata:
          name: devspaces-user-resource-constraint
        spec:
          ...
      - apiVersion: rbac.authorization.k8s.io/v1
        kind: Role
        metadata:
          name: devspaces-user-roles
        rules:
          ...
      - apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        metadata:
          name: devspaces-user-rolebinding
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: Role
          name: devspaces-user-roles
        subjects:
        - kind: User
          apiGroup: rbac.authorization.k8s.io
          name: ${PROJECT_ADMIN_USER}
      parameters:
      - name: PROJECT_ADMIN_USER
      Note

      Creating Template Kubernetes resources is supported only on OpenShift.
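Template parameters use the ${NAME} syntax, and during synchronization each parameter is replaced with the user-specific value. Python's string.Template uses the same placeholder syntax, so the substitution step can be sketched as follows (an illustration only, not the actual synchronization code):

```python
from string import Template

# Fragment of a Template object's RoleBinding subject. OpenShift replaces
# ${PROJECT_ADMIN_USER} at processing time; string.Template mimics that
# because it uses the same ${NAME} placeholder syntax.
fragment = Template("subjects:\n- kind: User\n  name: ${PROJECT_ADMIN_USER}")

rendered = fragment.substitute(PROJECT_ADMIN_USER="alice")
print(rendered)
```

With the parameter set to alice, the rendered subject names that user, matching what the Operator produces for each user project.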

Verification

  • Verify that the Kubernetes objects are synchronized to a user project:

    $ oc get configmaps,secrets -n <user_namespace> -l app.kubernetes.io/part-of=che.eclipse.org

Chapter 7. Configure server components

Mount OpenShift Secrets and ConfigMaps into OpenShift Dev Spaces containers to provide configuration files, credentials, and environment variables without modifying container images.

You can mount Secrets and ConfigMaps as files, as subpath volumes, or as environment variables. Each method requires specific annotations and labels on the OpenShift resource.

7.1. Mount a Secret or a ConfigMap as a file

Mount an OpenShift Secret or a ConfigMap as a file into an OpenShift Dev Spaces container to provide configuration files, certificates, or credentials without embedding them in the container image.

Prerequisites

  • You have a running instance of Red Hat OpenShift Dev Spaces.

Procedure

  1. Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed with the required labels:

    apiVersion: v1
    kind: Secret
    metadata:
      name: custom-settings
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>
    ...

    where:

    kind
    Secret for a Secret or ConfigMap for a ConfigMap.
    <DEPLOYMENT_NAME>
    Target deployment: devspaces, devspaces-dashboard, devfile-registry, or plugin-registry.
    <OBJECT_KIND>
    secret for a Secret or configmap for a ConfigMap.
  2. Configure the annotation values. Annotations must indicate that the given object is mounted as a file:

    • che.eclipse.org/mount-as: file - Mounts an object as a file.
    • che.eclipse.org/mount-path: <TARGET_PATH> - To provide a required mount path.

      For a Secret:

      apiVersion: v1
      kind: Secret
      metadata:
        name: custom-data
        annotations:
          che.eclipse.org/mount-as: file
          che.eclipse.org/mount-path: /data
        labels:
          app.kubernetes.io/part-of: che.eclipse.org
          app.kubernetes.io/component: devspaces-secret
      ...

      For a ConfigMap:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: custom-data
        annotations:
          che.eclipse.org/mount-as: file
          che.eclipse.org/mount-path: /data
        labels:
          app.kubernetes.io/part-of: che.eclipse.org
          app.kubernetes.io/component: devspaces-configmap
      ...
  3. Add data items to the object. Each item name must match the desired file name mounted into the container.

    For a Secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: custom-data
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: devspaces-secret
      annotations:
        che.eclipse.org/mount-as: file
        che.eclipse.org/mount-path: /data
    data:
      ca.crt: <base64 encoded data content here>

    For a ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: custom-data
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: devspaces-configmap
      annotations:
        che.eclipse.org/mount-as: file
        che.eclipse.org/mount-path: /data
    data:
      ca.crt: <data content here>
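Values under a Secret's data field must be base64-encoded ASCII, while stringData accepts plain text. A quick way to produce the encoded value, shown here in Python (any base64 tool works equally well):

```python
import base64

# Secret "data" values must be base64-encoded ASCII; "stringData" accepts
# plain text instead. Encode a certificate (or any file content) like this:
content = b"-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
encoded = base64.b64encode(content).decode("ascii")
print(encoded)
```

Decoding the result returns the original bytes, which is what the container sees in the mounted file.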

Verification

  • Verify that the file is mounted in the target container:

    oc exec -n openshift-devspaces deploy/<DEPLOYMENT_NAME> -- ls <TARGET_PATH>/<FILE_NAME>

    Each data item name in the object corresponds to a file name at the mount path. For example, a data item named ca.crt with a mount path of /data results in a file at /data/ca.crt.

    Important

    If you update the Secret or ConfigMap data, re-create the object entirely to make the changes visible in the OpenShift Dev Spaces container.

7.2. Mount a Secret or a ConfigMap as a subPath

Mount an OpenShift Secret or a ConfigMap as a subPath to add individual files to a target directory without replacing existing contents. Use a subPath mount when the target directory already contains files that must be preserved.

Prerequisites

  • You have a running instance of Red Hat OpenShift Dev Spaces.

Procedure

  1. Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed with the required labels:

    apiVersion: v1
    kind: Secret
    metadata:
      name: custom-settings
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>
    ...

    where:

    kind
    Secret for a Secret or ConfigMap for a ConfigMap.
    <DEPLOYMENT_NAME>
    Target deployment: devspaces, devspaces-dashboard, devfile-registry, or plugin-registry.
    <OBJECT_KIND>
    secret for a Secret or configmap for a ConfigMap.
  2. Configure the annotation values. Annotations must indicate that the given object is mounted as a subPath:

    • che.eclipse.org/mount-as: subpath - Mounts an object as a subPath.
    • che.eclipse.org/mount-path: <TARGET_PATH> - To provide a required mount path.

      For a Secret:

      apiVersion: v1
      kind: Secret
      metadata:
        name: custom-data
        annotations:
          che.eclipse.org/mount-as: subpath
          che.eclipse.org/mount-path: /data
        labels:
          app.kubernetes.io/part-of: che.eclipse.org
          app.kubernetes.io/component: devspaces-secret
      ...

      For a ConfigMap:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: custom-data
        annotations:
          che.eclipse.org/mount-as: subpath
          che.eclipse.org/mount-path: /data
        labels:
          app.kubernetes.io/part-of: che.eclipse.org
          app.kubernetes.io/component: devspaces-configmap
      ...
  3. Add data items to the object. Each item name must match the file name mounted into the container.

    For a Secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: custom-data
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: devspaces-secret
      annotations:
        che.eclipse.org/mount-as: subpath
        che.eclipse.org/mount-path: /data
    data:
      ca.crt: <base64 encoded data content here>

    For a ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: custom-data
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: devspaces-configmap
      annotations:
        che.eclipse.org/mount-as: subpath
        che.eclipse.org/mount-path: /data
    data:
      ca.crt: <data content here>

Verification

  • Verify that the file is mounted in the target container:

    oc exec -n openshift-devspaces deploy/<DEPLOYMENT_NAME> -- ls <TARGET_PATH>/<FILE_NAME>

    Each data item name in the object corresponds to a file name at the mount path. For example, a data item named ca.crt with a mount path of /data results in a file at /data/ca.crt.

    Important

    If you update the Secret or ConfigMap data, re-create the object entirely to make the changes visible in the OpenShift Dev Spaces container.

7.3. Mount a Secret or a ConfigMap as an environment variable

Mount an OpenShift Secret or a ConfigMap as an environment variable in an OpenShift Dev Spaces container. This injects configuration values such as credentials, API keys, or feature flags without modifying the container image.

Prerequisites

  • You have a running instance of Red Hat OpenShift Dev Spaces.

Procedure

  1. Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed with the required labels:

    apiVersion: v1
    kind: Secret
    metadata:
      name: custom-settings
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND>
    ...

    where:

    kind
    Secret for a Secret or ConfigMap for a ConfigMap.
    <DEPLOYMENT_NAME>
    Target deployment: devspaces, devspaces-dashboard, devfile-registry, or plugin-registry.
    <OBJECT_KIND>
    secret for a Secret or configmap for a ConfigMap.
  2. Configure the annotation values. Annotations must indicate that the given object is mounted as an environment variable:

    • che.eclipse.org/mount-as: env - Mounts an object as an environment variable.
    • che.eclipse.org/env-name: <FOO_ENV> - Provides the environment variable name, which is required to mount an object key value.

      For a Secret:

      apiVersion: v1
      kind: Secret
      metadata:
        name: custom-settings
        annotations:
          che.eclipse.org/env-name: FOO_ENV
          che.eclipse.org/mount-as: env
        labels:
          app.kubernetes.io/part-of: che.eclipse.org
          app.kubernetes.io/component: devspaces-secret
      stringData:
        mykey: myvalue

      For a ConfigMap:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: custom-settings
        annotations:
          che.eclipse.org/env-name: FOO_ENV
          che.eclipse.org/mount-as: env
        labels:
          app.kubernetes.io/part-of: che.eclipse.org
          app.kubernetes.io/component: devspaces-configmap
      data:
        mykey: myvalue
  3. If the object provides more than one data item, provide the environment variable name for each data key by using the che.eclipse.org/<key>_env-name annotation format.

    For a Secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: custom-settings
      annotations:
        che.eclipse.org/mount-as: env
        che.eclipse.org/mykey_env-name: FOO_ENV
        che.eclipse.org/otherkey_env-name: OTHER_ENV
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: devspaces-secret
    stringData:
      mykey: <data_content_here>
      otherkey: <data_content_here>

    For a ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: custom-settings
      annotations:
        che.eclipse.org/mount-as: env
        che.eclipse.org/mykey_env-name: FOO_ENV
        che.eclipse.org/otherkey_env-name: OTHER_ENV
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: devspaces-configmap
    data:
      mykey: <data content here>
      otherkey: <data content here>

    The maximum length of the name segment of an annotation key in an OpenShift object is 63 characters, and the _env-name suffix occupies 9 of them. This restricts the data key used for the object to a maximum of 54 characters.
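Because the name segment of an annotation key is limited to 63 characters and the _env-name suffix uses 9 of them, a data key can be at most 54 characters long. A quick check, as an illustration:

```python
# Kubernetes limits the name segment of an annotation key to 63 characters.
# With the "_env-name" suffix (9 characters), a data key may be at most 54.
MAX_NAME_SEGMENT = 63
SUFFIX = "_env-name"

def key_fits(key: str) -> bool:
    """Return True if <key>_env-name fits within the annotation name limit."""
    return len(key) + len(SUFFIX) <= MAX_NAME_SEGMENT

print(key_fits("mykey"))   # short key: fits
print(key_fits("k" * 60))  # 60 + 9 = 69 characters: too long
```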

Verification

  • Verify that the environment variable is set in the target container:

    oc exec -n openshift-devspaces deploy/<DEPLOYMENT_NAME> -- env | grep <ENV_NAME>

    For a single-key object, the env-name annotation value becomes the environment variable name. For a multi-key object, only the per-key env-name values are provisioned.

    Important

    If you update the Secret or ConfigMap data, re-create the object entirely to make the changes visible in the OpenShift Dev Spaces container.

7.4. Advanced configuration options for OpenShift Dev Spaces server

Advanced configuration of the OpenShift Dev Spaces server allows you to set environment variables or override properties that are not exposed through the standard CheCluster Custom Resource fields.

Advanced configuration is necessary to:

  • Add environment variables not automatically generated by the Operator from the standard CheCluster Custom Resource fields.
  • Override the properties automatically generated by the Operator from the standard CheCluster Custom Resource fields.

The customCheProperties field, part of the CheCluster Custom Resource server settings, contains a map of additional environment variables to apply to the OpenShift Dev Spaces server component.
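Conceptually, the Operator merges its generated environment variables with the customCheProperties map, and the custom entries win on conflict. A minimal sketch of that precedence (the property names below are hypothetical examples, not Operator internals):

```python
# Conceptual illustration (not the Operator's actual code): environment
# variables generated from standard CheCluster fields are merged with
# customCheProperties, and the custom entries take precedence on conflict.
generated = {"CHE_LOG_LEVEL": "INFO", "CHE_API": "internal"}  # hypothetical
custom = {"CHE_LOG_LEVEL": "DEBUG"}  # from customCheProperties

effective = {**generated, **custom}  # the later dict wins on duplicate keys
print(effective["CHE_LOG_LEVEL"])
```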

7.4.1. Set an advanced property in the CheCluster Custom Resource

  • Configure the CheCluster Custom Resource with the property to set in extraProperties. For example, to switch the OpenShift Dev Spaces server logs to the JSON appender:

    apiVersion: org.eclipse.che/v2
    kind: CheCluster
    spec:
      components:
        cheServer:
          extraProperties:
            CHE_LOGS_APPENDERS_IMPL: json
Note

Previous versions of the OpenShift Dev Spaces Operator used a ConfigMap named custom to fulfill this role. If the OpenShift Dev Spaces Operator finds a ConfigMap named custom, it adds its data to the customCheProperties field, redeploys OpenShift Dev Spaces, and deletes the custom ConfigMap.

Chapter 8. Configure autoscaling

Configure autoscaling for OpenShift Dev Spaces container replicas and for cluster nodes running workspaces.

8.1. Configure replicas for OpenShift Dev Spaces containers

Define a Kubernetes HorizontalPodAutoscaler (HPA) resource for OpenShift Dev Spaces operands to ensure high availability and handle varying workloads. The HPA dynamically adjusts the number of replicas based on specified metrics.

Prerequisites

Procedure

  1. Create an HPA resource for a deployment, specifying the target metrics and desired replica count.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: scaler
      namespace: openshift-devspaces
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: <deployment_name>
      ...

    where:

    <deployment_name>

    One of the following deployments:

    • devspaces
    • che-gateway
    • devspaces-dashboard
    • plugin-registry
    • devfile-registry

      For example:

      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: devspaces-scaler
        namespace: openshift-devspaces
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: devspaces
        minReplicas: 2
        maxReplicas: 5
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 75

      In this example, the HPA targets the devspaces deployment with a minimum of 2 replicas, a maximum of 5 replicas, and scales based on CPU utilization.
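For reference, the HorizontalPodAutoscaler converges on a replica count using the standard Kubernetes scaling rule, sketched here with the example's minReplicas, maxReplicas, and 75% CPU target:

```python
import math

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_r: int, max_r: int) -> int:
    """Standard HPA rule: ceil(current * currentMetric / targetMetric),
    clamped to the configured minReplicas/maxReplicas."""
    desired = math.ceil(current * current_util / target_util)
    return max(min_r, min(max_r, desired))

# With the example above: 2 replicas observing 90% CPU against a 75% target.
print(desired_replicas(2, 90, 75, min_r=2, max_r=5))  # 3
```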

Verification

  • Verify that the HPA resource is created and targeting the correct deployment:

    oc get hpa -n openshift-devspaces

8.2. Configure machine autoscaling

Configure OpenShift Dev Spaces startup timeouts and pod annotations to work with the cluster autoscaler, preventing workspace disruptions when nodes are added or removed.

When the autoscaler adds a new node, workspace startup can take longer than usual until node provisioning is complete. When the autoscaler removes a node, workspace pods should not be evicted because eviction can cause interruptions and loss of unsaved data.

Prerequisites

Procedure

  1. Set the startup timeout and event handling in the CheCluster Custom Resource to handle autoscaler node additions:

    spec:
      devEnvironments:
        startTimeoutSeconds: 600
        ignoredUnrecoverableEvents:
          - FailedScheduling

    where:

    startTimeoutSeconds
    Set to at least 600 seconds to allow time for a new node to be provisioned during workspace startup.
    ignoredUnrecoverableEvents
    Ignore the FailedScheduling event to allow workspace startup to continue when a new node is provisioned. This setting is enabled by default.
  2. Add the safe-to-evict annotation to the CheCluster Custom Resource to prevent workspace pod eviction when the autoscaler removes a node:

    spec:
      devEnvironments:
        workspacesPodAnnotations:
          cluster-autoscaler.kubernetes.io/safe-to-evict: "false"

Verification

  • Start a workspace and verify that the workspace pod contains the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation:

    $ oc get pod <workspace_pod_name> -o jsonpath='{.metadata.annotations.cluster-autoscaler\.kubernetes\.io/safe-to-evict}'
    false

Chapter 9. Configure workspaces globally

Configure workspace limits, self-signed Git certificates, node scheduling, allowed URLs, and container run capabilities for all users.

9.1. Limit the number of workspaces that a user can keep

By default, users can keep an unlimited number of workspaces in the dashboard. Limit this number to reduce demand on the cluster.

Prerequisites

Procedure

  1. Get the name of the OpenShift Dev Spaces namespace. The default is openshift-devspaces.

    $ oc get checluster --all-namespaces \
      -o=jsonpath="{.items[*].metadata.namespace}"
  2. Configure the maxNumberOfWorkspacesPerUser in the CheCluster Custom Resource:

    spec:
      devEnvironments:
        maxNumberOfWorkspacesPerUser: <kept_workspaces_limit>

    where:

    <kept_workspaces_limit>
    The maximum number of workspaces per user. The default value, -1, allows users to keep an unlimited number of workspaces. Use a positive integer to set the maximum number of workspaces per user.
  3. Apply the change:

    $ oc patch checluster/devspaces -n openshift-devspaces \
    --type='merge' -p \
    '{"spec":{"devEnvironments":{"maxNumberOfWorkspacesPerUser": <kept_workspaces_limit>}}}'

    where:

    -n
    The OpenShift Dev Spaces namespace that you got in step 1.

Verification

  • Verify the maxNumberOfWorkspacesPerUser value in the CheCluster Custom Resource:

    $ oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.spec.devEnvironments.maxNumberOfWorkspacesPerUser}'

9.2. Limit the number of workspaces that all users can run simultaneously

By default, all users can run an unlimited number of workspaces. Limit the number of concurrently running workspaces across the cluster to manage resource consumption.

Prerequisites

Procedure

  1. Configure the maxNumberOfRunningWorkspacesPerCluster in the CheCluster Custom Resource:

    spec:
      devEnvironments:
        maxNumberOfRunningWorkspacesPerCluster: <running_workspaces_limit>

    where:

    <running_workspaces_limit>
    The maximum number of concurrently running workspaces across the entire Kubernetes cluster. This applies to all users in the system. The -1 value means there is no limit on the number of running workspaces.
  2. Apply the change:

    $ oc patch checluster/devspaces -n openshift-devspaces \
    --type='merge' -p \
    '{"spec":{"devEnvironments":{"maxNumberOfRunningWorkspacesPerCluster": <running_workspaces_limit>}}}'

Verification

  • Verify the maxNumberOfRunningWorkspacesPerCluster value in the CheCluster Custom Resource:

    $ oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.spec.devEnvironments.maxNumberOfRunningWorkspacesPerCluster}'

9.3. Enable users to run multiple workspaces simultaneously

By default, a user can run only one workspace at a time. Enable users to run multiple workspaces simultaneously so that they can work on several projects without stopping active sessions.

Note

If using the default storage method, users might experience problems when concurrently running workspaces if pods are distributed across nodes in a multi-node cluster. Switching from the per-user common storage strategy to the per-workspace storage strategy or using the ephemeral storage type can avoid or solve those problems.

Prerequisites

Procedure

  1. Get the name of the OpenShift Dev Spaces namespace. The default is openshift-devspaces.

    $ oc get checluster --all-namespaces \
      -o=jsonpath="{.items[*].metadata.namespace}"
  2. Configure the maxNumberOfRunningWorkspacesPerUser in the CheCluster Custom Resource:

    spec:
      devEnvironments:
        maxNumberOfRunningWorkspacesPerUser: <running_workspaces_limit>

    where:

    <running_workspaces_limit>
    The maximum number of simultaneously running workspaces per user. The -1 value enables users to run an unlimited number of workspaces. The default value is 1.
  3. Apply the change:

    $ oc patch checluster/devspaces -n openshift-devspaces \
    --type='merge' -p \
    '{"spec":{"devEnvironments":{"maxNumberOfRunningWorkspacesPerUser": <running_workspaces_limit>}}}'

    where:

    -n
    The OpenShift Dev Spaces namespace that you got in step 1.

Verification

  • Verify the maxNumberOfRunningWorkspacesPerUser value in the CheCluster Custom Resource:

    oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.spec.devEnvironments.maxNumberOfRunningWorkspacesPerUser}'

9.4. Configure Git with self-signed certificates

Configure OpenShift Dev Spaces to support operations on Git providers that use self-signed certificates so that workspaces can clone and push to repositories secured by internal certificate authorities.

Prerequisites

Procedure

  1. Create a new ConfigMap with details about the Git server:

    $ oc create configmap che-git-self-signed-cert \
      --from-file=ca.crt=<path_to_certificate> \
      --from-literal=githost=<git_server_url> -n openshift-devspaces

    where:

    --from-file
    Path to the self-signed certificate.
    --from-literal

    Optional parameter to specify the Git server URL, for example https://git.example.com:8443. When omitted, the self-signed certificate is used for all repositories over HTTPS.

    Note
    • Certificate files are typically stored as Base64 ASCII files, such as .pem, .crt, or .ca-bundle files. All ConfigMaps that hold certificate files must use the Base64 ASCII certificate rather than the binary data certificate.
    • A certificate chain of trust is required. If the ca.crt is signed by a certificate authority (CA), the CA certificate must be included in the ca.crt file.
  2. Add the required labels to the ConfigMap:

    $ oc label configmap che-git-self-signed-cert \
      app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces
  3. Configure OpenShift Dev Spaces operand to use self-signed certificates for Git repositories. See Section 5.3, “Use the CLI to configure the CheCluster Custom Resource”.

    spec:
      devEnvironments:
        trustedCerts:
          gitTrustedCertsConfigMapName: che-git-self-signed-cert

Verification

  • Create and start a new workspace. Every container used by the workspace mounts a special volume that contains a file with the self-signed certificate. The container’s /etc/gitconfig file contains information about the Git server host (its URL) and the path to the certificate in the http section (see the Git documentation on git-config).

    For example:

    [http "https://10.33.177.118:3000"]
    sslCAInfo = /etc/config/che-git-tls-creds/certificate
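The generated file follows Git's INI-style configuration syntax. As a parsing illustration only (not part of the product), Python's configparser can read such a section:

```python
import configparser

# Git configuration uses an INI-like syntax, so configparser can read the
# generated snippet. Note that configparser lowercases keys by default.
snippet = """
[http "https://10.33.177.118:3000"]
sslCAInfo = /etc/config/che-git-tls-creds/certificate
"""

config = configparser.ConfigParser()
config.read_string(snippet)
print(config['http "https://10.33.177.118:3000"']['sslcainfo'])
```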

9.5. Configure workspaces nodeSelector

Configure nodeSelector and tolerations for OpenShift Dev Spaces workspace Pods to control which nodes run workspaces for compliance, hardware affinity, or zone isolation.

Prerequisites

Procedure

  1. Set nodeSelector in the CheCluster Custom Resource to schedule workspace Pods on specific nodes:

    spec:
      devEnvironments:
        nodeSelector:
          <key>: <value>

    This section must contain a set of key=value pairs for each node label to form the nodeSelector rule.

  2. Set tolerations in the CheCluster Custom Resource to allow workspace Pods to be scheduled on tainted nodes. Taints work in the opposite way to nodeSelector: instead of selecting the nodes that a Pod runs on, they mark nodes that repel Pods unless a Pod explicitly tolerates the taint.

    spec:
      devEnvironments:
        tolerations:
          - effect: NoSchedule
            key: <key>
            value: <value>
            operator: Equal
    Important

    nodeSelector must be configured during OpenShift Dev Spaces installation. This prevents existing workspaces from failing to run due to a volume affinity conflict caused by the existing workspace PVC and Pod being scheduled in different zones.

    On large, multizone clusters, Pods and PVCs can be scheduled in different zones. To avoid this, create an additional StorageClass object (pay attention to the allowedTopologies field) to coordinate the PVC creation process.

    Pass the name of this newly created StorageClass to OpenShift Dev Spaces through the CheCluster Custom Resource. For more information, see: Section 13.2, “Configure storage classes”.

Verification

  • Verify the nodeSelector or tolerations configuration in the CheCluster Custom Resource:

    oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.spec.devEnvironments.nodeSelector}'

9.6. Configure allowed URLs for Cloud Development Environments

Configure allowed URLs to restrict Cloud Development Environment (CDE) initiation to authorized sources, protecting your infrastructure from untrusted deployments.

Prerequisites

Procedure

  1. Patch the CheCluster Custom Resource to configure the allowed source URLs:

    oc patch checluster/devspaces \
        --namespace openshift-devspaces \
        --type='merge' \
        -p \
    '{
       "spec": {
         "devEnvironments": {
           "allowedSources": {
             "urls": ["<url_1>", "<url_2>"]
           }
         }
       }
     }'

    where:

    urls
    The array of approved URLs for starting CDEs. Wildcards (*) are supported. For example, https://example.com/* allows CDEs to start from any path within example.com.
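The wildcard semantics can be illustrated with shell-style pattern matching; the sketch below uses Python's fnmatchcase to mimic how https://example.com/* matches URLs (the product's own matcher is not shown here):

```python
from fnmatch import fnmatchcase

# Illustration of the wildcard semantics described above: "*" matches any
# sequence of characters, so the pattern covers every path on the host.
pattern = "https://example.com/*"

print(fnmatchcase("https://example.com/org/repo", pattern))   # True: allowed
print(fnmatchcase("https://evil.example.org/repo", pattern))  # False: rejected
```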

Verification

  • In the OpenShift Dev Spaces Dashboard, start a workspace from an allowed URL and verify that it starts successfully.
  • Attempt to start a workspace from a URL that is not in the allowed list and verify that it is rejected.

9.7. Enable container run capabilities

Enable container run capabilities in OpenShift Dev Spaces workspaces to allow running nested containers using tools like Podman. This feature uses Linux kernel user namespaces for isolation, so that users can build and run container images within their workspaces.

Important

Previously created workspaces cannot be started after enabling this feature. Users must create new workspaces.

Important
  • This feature is available on OpenShift 4.20 and later versions.

Prerequisites

Procedure

  1. Configure the CheCluster custom resource to enable container run capabilities:

    oc patch checluster/devspaces -n openshift-devspaces \
      --type='merge' -p \
      '{"spec":{"devEnvironments":{"disableContainerRunCapabilities":false}}}'

Verification

  • Create a new workspace and verify that Podman is available:

    podman run --rm hello-world

Chapter 10. Cache images for faster workspace start

Use the Kubernetes Image Puller to pre-pull images and reduce workspace startup time.

10.1. Image caching for faster workspace start

To improve workspace start time, use the Image Puller, a community-supported OpenShift Dev Spaces-agnostic component that pre-pulls images for OpenShift clusters.

The Image Puller is an additional OpenShift deployment that creates a DaemonSet to pre-pull relevant OpenShift Dev Spaces workspace images on each node. These images are already available when a workspace starts, improving the workspace start time.

10.2. Install Image Puller on OpenShift using CLI

Install the Kubernetes Image Puller on OpenShift by using the oc CLI to cache images and reduce workspace startup time.

Important

If the Image Puller is installed with the oc CLI, it cannot be configured through the CheCluster Custom Resource.

Prerequisites

Procedure

  1. Gather a list of relevant container images to pull. See Section 10.7, “Retrieve the default list of images for Kubernetes Image Puller”.
  2. Define the memory requests and limits parameters to ensure pulled containers and the platform have enough memory to run.

    When defining the minimal value for CACHING_MEMORY_REQUEST or CACHING_MEMORY_LIMIT, consider the necessary amount of memory required to run each of the container images to pull.

    When defining the maximal value for CACHING_MEMORY_REQUEST or CACHING_MEMORY_LIMIT, consider the total memory allocated to the DaemonSet Pods in the cluster:

    (memory limit) * (number of images) * (number of nodes in the cluster)

    For example, pulling 5 images on 20 nodes, with a container memory limit of 20Mi, requires 2000Mi of memory.

  3. Clone the Image Puller repository and change into the directory containing the OpenShift templates:

    git clone https://github.com/che-incubator/kubernetes-image-puller
    cd kubernetes-image-puller/deploy/openshift
  4. Configure the app.yaml, configmap.yaml, and serviceaccount.yaml OpenShift templates using the following parameters:

    Table 10.1. Image Puller OpenShift templates parameters in app.yaml

    • DEPLOYMENT_NAME: The value of DEPLOYMENT_NAME in the ConfigMap. Default: kubernetes-image-puller
    • IMAGE: Image used for the kubernetes-image-puller deployment. Default: registry.redhat.io/devspaces/imagepuller-rhel8
    • IMAGE_TAG: The image tag to pull. Default: latest
    • SERVICEACCOUNT_NAME: The name of the ServiceAccount created and used by the deployment. Default: kubernetes-image-puller

    Table 10.2. Image Puller OpenShift templates parameters in configmap.yaml

    • CACHING_CPU_LIMIT: The value of CACHING_CPU_LIMIT in the ConfigMap. Default: .2
    • CACHING_CPU_REQUEST: The value of CACHING_CPU_REQUEST in the ConfigMap. Default: .05
    • CACHING_INTERVAL_HOURS: The value of CACHING_INTERVAL_HOURS in the ConfigMap. Default: "1"
    • CACHING_MEMORY_LIMIT: The value of CACHING_MEMORY_LIMIT in the ConfigMap. Default: "20Mi"
    • CACHING_MEMORY_REQUEST: The value of CACHING_MEMORY_REQUEST in the ConfigMap. Default: "10Mi"
    • DAEMONSET_NAME: The value of DAEMONSET_NAME in the ConfigMap. Default: kubernetes-image-puller
    • DEPLOYMENT_NAME: The value of DEPLOYMENT_NAME in the ConfigMap. Default: kubernetes-image-puller
    • IMAGES: The value of IMAGES in the ConfigMap. Default: {}
    • NAMESPACE: The value of NAMESPACE in the ConfigMap. Default: k8s-image-puller
    • NODE_SELECTOR: The value of NODE_SELECTOR in the ConfigMap. Default: "{}"

    Table 10.3. Image Puller OpenShift templates parameters in serviceaccount.yaml

    SERVICEACCOUNT_NAME
      Usage: The name of the ServiceAccount created and used by the deployment
      Default: kubernetes-image-puller

    KIP_IMAGE
      Usage: The image puller image to copy the sleep binary from
      Default: registry.redhat.io/devspaces/imagepuller-rhel8:latest

  5. Create an OpenShift project to host the Image Puller:

    oc new-project <k8s-image-puller>
  6. Process and apply the templates to install the puller:

    oc process -f serviceaccount.yaml | oc apply -f -
    oc process -f configmap.yaml | oc apply -f -
    oc process -f app.yaml | oc apply -f -

Verification

  1. Verify the existence of a <kubernetes-image-puller> Deployment and a <kubernetes-image-puller> DaemonSet. The DaemonSet must have a Pod on each node in the cluster:

    oc get deployment,daemonset,pod --namespace <k8s-image-puller>
  2. Verify the values of the <kubernetes-image-puller> ConfigMap.

    oc get configmap <kubernetes-image-puller> --output yaml

10.3. Install Image Puller on OpenShift by using the web console

Install the Kubernetes Image Puller Operator on OpenShift by using the OpenShift web console to cache images and reduce workspace startup time.

Prerequisites

Verification

  • In the OpenShift web console, go to Operators → Installed Operators and verify that the Kubernetes Image Puller Operator status is Succeeded.

10.4. Configure Image Puller to pre-pull default OpenShift Dev Spaces images

Pre-pull default OpenShift Dev Spaces images with Kubernetes Image Puller to reduce workspace startup time. The Red Hat OpenShift Dev Spaces Operator controls the image list and updates it automatically on OpenShift Dev Spaces upgrade.

Prerequisites

Procedure

  1. Configure the Image Puller to pre-pull OpenShift Dev Spaces images.

    oc patch checluster/devspaces \
        --namespace openshift-devspaces \
        --type='merge' \
        --patch '{
                  "spec": {
                    "components": {
                      "imagePuller": {
                        "enable": true
                      }
                    }
                  }
                }'

Verification

  • Verify that the image puller is enabled:

    oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.spec.components.imagePuller.enable}'

10.5. Configure Image Puller to pre-pull custom images

Pre-pull custom images with Kubernetes Image Puller so that workspaces using organization-specific container images start without waiting for large image downloads.

Prerequisites

Procedure

  1. Configure the Image Puller to pre-pull custom images.

    oc patch checluster/devspaces \
        --namespace openshift-devspaces \
        --type='merge' \
        --patch '{
                  "spec": {
                    "components": {
                      "imagePuller": {
                        "enable": true,
                        "spec": {
                          "images": "NAME-1=IMAGE-1;NAME-2=IMAGE-2"
                        }
                      }
                    }
                  }
                }'

    where:

    images
    The semicolon-separated list of images in name=image format.
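    To see how such a value splits into entries, here is a quick local check (the names and image references are illustrative, not shipped defaults):

    ```shell
    IMAGES='udi=quay.io/devfile/universal-developer-image:ubi8-latest;machine-exec=quay.io/eclipse/che-machine-exec:next'

    # Entries are separated by ';'; each entry has the form NAME=IMAGE.
    echo "$IMAGES" | tr ';' '\n' | while IFS='=' read -r name image; do
      printf '%s -> %s\n' "$name" "$image"
    done
    ```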

Verification

  • Verify that the image puller is configured with the custom images:

    oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.spec.components.imagePuller.spec.images}'

10.6. Configure Image Puller to pre-pull additional images

Pre-pull additional OpenShift Dev Spaces images with Kubernetes Image Puller to reduce workspace startup time by ensuring that required images are already cached on each node.

Prerequisites

Procedure

  1. Create the k8s-image-puller namespace:

    oc create namespace k8s-image-puller
  2. Create a KubernetesImagePuller Custom Resource:

    oc apply -f - <<EOF
    apiVersion: che.eclipse.org/v1alpha1
    kind: KubernetesImagePuller
    metadata:
      name: k8s-image-puller-images
      namespace: k8s-image-puller
    spec:
      images: "NAME-1=IMAGE-1;NAME-2=IMAGE-2"
    EOF

    where:

    images
    The semicolon-separated list of images in name=image format.

Verification

  • Verify that the image puller DaemonSet is running in the k8s-image-puller namespace:

    oc get daemonset -n k8s-image-puller

10.7. Retrieve the default list of images for Kubernetes Image Puller

Retrieve the default list of images used by Kubernetes Image Puller. Use this list to review the images and to configure the Image Puller to pre-pull only a subset of them.

Prerequisites

Procedure

  1. Determine the namespace where the OpenShift Dev Spaces Operator is deployed:

    OPERATOR_NAMESPACE=$(oc get pods -l app.kubernetes.io/component=devspaces-operator -o jsonpath='{.items[0].metadata.namespace}' --all-namespaces)
  2. Determine the images that can be pre-pulled by the Image Puller:

    oc exec -n $OPERATOR_NAMESPACE deploy/devspaces-operator -- cat /tmp/external_images.txt
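    As a hypothetical follow-up, the newline-separated list can be converted into the semicolon-separated NAME=IMAGE format that spec.components.imagePuller.spec.images expects. The sample file below stands in for real external_images.txt output (the image references are illustrative):

    ```shell
    # Sample input standing in for the output of external_images.txt
    cat > /tmp/sample_images.txt <<'EOF'
    quay.io/devfile/universal-developer-image:ubi8-latest
    quay.io/eclipse/che-machine-exec:next
    EOF

    # Derive a name from the last path segment (tag/digest stripped) and join entries with ';'
    awk -F/ '{ name = $NF; sub(/[:@].*/, "", name);
               printf "%s%s=%s", (NR > 1 ? ";" : ""), name, $0 }
             END { print "" }' /tmp/sample_images.txt
    ```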

Chapter 11. Configure observability

Configure logging, monitoring, and telemetry for OpenShift Dev Spaces to gain visibility into workspace health, operator performance, and usage patterns.

11.1. Configure the Woopra telemetry plugin

The Woopra Telemetry Plugin sends telemetry from a Red Hat OpenShift Dev Spaces installation to Segment and Woopra. Any Red Hat OpenShift Dev Spaces deployment can use this plugin with a valid Woopra domain and Segment write key.

The devfile v2 for the plugin, plugin.yaml, has four environment variables that can be passed to the plugin:

  • WOOPRA_DOMAIN - The Woopra domain to send events to.
  • SEGMENT_WRITE_KEY - The write key to send events to Segment and Woopra.
  • WOOPRA_DOMAIN_ENDPOINT - If you prefer not to pass in the Woopra domain directly, the plugin gets it from a supplied HTTP endpoint that returns the Woopra Domain.
  • SEGMENT_WRITE_KEY_ENDPOINT - If you prefer not to pass in the Segment write key directly, the plugin gets it from a supplied HTTP endpoint that returns the Segment write key.
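These variables are supplied as container environment variables in the plugin's devfile v2. As an illustration only (the component name, image, and values below are placeholders, not part of the shipped plugin):

```yaml
components:
  - name: woopra-telemetry
    container:
      image: <telemetry_plugin_image>
      env:
        - name: WOOPRA_DOMAIN
          value: '<your_domain>.woopra.com'
        - name: SEGMENT_WRITE_KEY
          value: '<your_segment_write_key>'
```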

To enable the Woopra plugin on the Red Hat OpenShift Dev Spaces installation:

Procedure

  1. Deploy the plugin.yaml devfile v2 file to an HTTP server with the environment variables set correctly.
  2. Configure the CheCluster Custom Resource. See Section 5.3, “Use the CLI to configure the CheCluster Custom Resource”.

    spec:
      devEnvironments:
        defaultPlugins:
        - editor: eclipse/che-theia/next
          plugins:
          - '<your_plugin_url>'

    where:

    editor
    The editorId to set the telemetry plugin for.
    plugins
    The URL to the telemetry plugin’s devfile v2 definition, for example, https://your-web-server/plugin.yaml.

11.2. Telemetry plugin overview

Create a telemetry plugin for OpenShift Dev Spaces to collect workspace usage data and send it to your analytics backend. The plugin extends the AbstractAnalyticsManager class with methods for event handling, activity tracking, and shutdown.

The AbstractAnalyticsManager class requires the following method implementations:

  • isEnabled() - determines whether the telemetry backend is functioning correctly. This can mean always returning true, or it can involve more complex checks, for example, returning false when a connection property is missing.
  • destroy() - cleanup method that is run before shutting down the telemetry backend. This method sends the WORKSPACE_STOPPED event.
  • onActivity() - notifies that some activity is still happening for a given user. This is mainly used to send WORKSPACE_INACTIVE events.
  • onEvent() - submits telemetry events to the telemetry server, such as WORKSPACE_USED or WORKSPACE_STARTED.
  • increaseDuration() - increases the duration of a current event rather than sending many events in a small frame of time.

A finished example of the telemetry backend is available in the devworkspace-telemetry-example-plugin repository.

11.2.1. Create a telemetry server

Create a server that receives telemetry events from the OpenShift Dev Spaces telemetry plugin and writes them to standard output. For production, consider integrating with a third-party telemetry system such as Segment or Woopra.

Prerequisites

  • You have a running instance of Red Hat OpenShift Dev Spaces.

Procedure

  1. Create a main.go file for a Go application that starts a server on port 8080 and writes events to standard output:

    package main
    
    import (
    	"io"
    	"net/http"
    
    	"go.uber.org/zap"
    )
    
    var logger *zap.SugaredLogger
    
    func event(w http.ResponseWriter, req *http.Request) {
    	switch req.Method {
    	case "GET":
    		logger.Info("GET /event")
    	case "POST":
    		logger.Info("POST /event")
    	}
    	// Read the request payload from req.Body; req.GetBody is only set on client requests.
    	body, err := io.ReadAll(req.Body)
    	if err != nil {
    		logger.With("error", err).Info("error reading request body")
    		return
    	}
    	logger.With("body", string(body)).Info("got event")
    }
    
    func activity(w http.ResponseWriter, req *http.Request) {
    	switch req.Method {
    	case "GET":
    		logger.Info("GET /activity, doing nothing")
    	case "POST":
    		logger.Info("POST /activity")
    		body, err := io.ReadAll(req.Body)
    		if err != nil {
    			logger.With("error", err).Info("error reading request body")
    			return
    		}
    		logger.With("body", string(body)).Info("got activity")
    	}
    }
    
    func main() {
    
    	log, _ := zap.NewProduction()
    	logger = log.Sugar()
    
    	http.HandleFunc("/event", event)
    	http.HandleFunc("/activity", activity)
    	logger.Info("Added Handlers")
    
    	logger.Info("Starting to serve")
    	if err := http.ListenAndServe(":8080", nil); err != nil {
    		logger.With("error", err).Fatal("server stopped")
    	}
    }

    The code for the example telemetry server is available in the telemetry-server-example repository.

  2. Create a container image based on this code and expose it as a deployment in OpenShift in the openshift-devspaces project. Clone the repository and build the container:

    $ git clone https://github.com/che-incubator/telemetry-server-example
    $ cd telemetry-server-example
    $ podman build -t registry/organization/telemetry-server-example:latest .
    $ podman push registry/organization/telemetry-server-example:latest
  3. Deploy the telemetry server to OpenShift.

    Both manifest_with_ingress.yaml and manifest_with_route.yaml contain definitions for a Deployment and Service. The former also defines a Kubernetes Ingress, while the latter defines an OpenShift Route.

    In the manifest file, replace the image and host fields with the image you pushed and the public hostname of your OpenShift cluster. Then run:

    $ oc apply -f manifest_with_[ingress|route].yaml -n openshift-devspaces

Verification

  • Verify that the telemetry server pod is running:

    oc get pods -n openshift-devspaces -l app=telemetry-server-example

11.2.2. Create a telemetry backend

Create a Quarkus-based telemetry backend that extends the OpenShift Dev Spaces telemetry client and implements custom event handling logic.

Note

For fast feedback during development, work inside a Dev Workspace. This way, you can run the application in a cluster and receive events from the front-end telemetry plugin.

Prerequisites

Procedure

  1. Create a Maven Quarkus project:

    mvn io.quarkus:quarkus-maven-plugin:2.7.1.Final:create \
        -DprojectGroupId=mygroup -DprojectArtifactId=devworkspace-telemetry-example-plugin \
        -DprojectVersion=1.0.0-SNAPSHOT
  2. Remove the files under src/main/java/mygroup and src/test/java/mygroup.
  3. Consult GitHub packages for the latest version of backend-base and add the following dependencies to your pom.xml:

    <!-- Required -->
    <dependency>
        <groupId>org.eclipse.che.incubator.workspace-telemetry</groupId>
        <artifactId>backend-base</artifactId>
        <version><latest_version></version>
    </dependency>
    
    
    <!-- Used to make http requests to the telemetry server -->
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-rest-client</artifactId>
    </dependency>
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-rest-client-jackson</artifactId>
    </dependency>
  4. Create a personal access token with read:packages permission from GitHub packages and add your GitHub username, the token, and the che-incubator repository details to your ~/.m2/settings.xml file:

    <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
    http://maven.apache.org/xsd/settings-1.0.0.xsd">
       <servers>
          <server>
             <id>che-incubator</id>
             <username><github_username></username>
             <password><github_token></password>
          </server>
       </servers>
    
       <profiles>
          <profile>
             <id>github</id>
             <activation>
                <activeByDefault>true</activeByDefault>
             </activation>
             <repositories>
                <repository>
                   <id>central</id>
                   <url>https://repo1.maven.org/maven2</url>
                   <releases><enabled>true</enabled></releases>
                   <snapshots><enabled>false</enabled></snapshots>
                </repository>
                <repository>
                   <id>che-incubator</id>
                   <url>https://maven.pkg.github.com/che-incubator/che-workspace-telemetry-client</url>
                </repository>
             </repositories>
          </profile>
       </profiles>
    </settings>
  5. Create MainConfiguration.java under src/main/java/org/my/group, matching the org.my.group package. This file contains configuration provided to AnalyticsManager:

    package org.my.group;
    
    import java.util.Optional;
    
    import javax.enterprise.context.Dependent;
    import javax.enterprise.inject.Alternative;
    
    import org.eclipse.che.incubator.workspace.telemetry.base.BaseConfiguration;
    import org.eclipse.microprofile.config.inject.ConfigProperty;
    
    @Dependent
    @Alternative
    public class MainConfiguration extends BaseConfiguration {
        @ConfigProperty(name = "welcome.message")
        Optional<String> welcomeMessage;
    }

    where:

    @ConfigProperty(name = "welcome.message")
    A MicroProfile configuration annotation that injects the welcome.message configuration. For more details on how to set configuration properties specific to your backend, see the Quarkus Configuration Reference Guide.
  6. Create AnalyticsManager.java under src/main/java/org/my/group. This file contains logic specific to the telemetry system:

    package org.my.group;
    
    import java.util.HashMap;
    import java.util.Map;
    
    import javax.enterprise.context.Dependent;
    import javax.enterprise.inject.Alternative;
    import javax.inject.Inject;
    
    import org.eclipse.che.incubator.workspace.telemetry.base.AbstractAnalyticsManager;
    import org.eclipse.che.incubator.workspace.telemetry.base.AnalyticsEvent;
    import org.eclipse.che.incubator.workspace.telemetry.finder.DevWorkspaceFinder;
    import org.eclipse.che.incubator.workspace.telemetry.finder.UsernameFinder;
    import org.eclipse.microprofile.rest.client.inject.RestClient;
    import org.slf4j.Logger;
    
    import static org.slf4j.LoggerFactory.getLogger;
    
    @Dependent
    @Alternative
    public class AnalyticsManager extends AbstractAnalyticsManager {
    
        private static final Logger LOG = getLogger(AbstractAnalyticsManager.class);
    
        public AnalyticsManager(MainConfiguration mainConfiguration, DevWorkspaceFinder devworkspaceFinder, UsernameFinder usernameFinder) {
            super(mainConfiguration, devworkspaceFinder, usernameFinder);
    
            mainConfiguration.welcomeMessage.ifPresentOrElse(
                (str) -> LOG.info("The welcome message is: {}", str),
                () -> LOG.info("No welcome message provided")
            );
        }
    
        @Override
        public boolean isEnabled() {
            return true;
        }
    
        @Override
        public void destroy() {}
    
        @Override
        public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) {
            LOG.info("The received event is: {}", event);
        }
    
        @Override
        public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) { }
    
        @Override
        public void onActivity() {}
    }

    where:

    ifPresentOrElse()
    Log the welcome message if it was provided.
    LOG.info("The received event is: {}", event)
    Log the event received from the front-end plugin.
  7. Add the quarkus.arc.selected-alternatives property to src/main/resources/application.properties to specify the alternative beans org.my.group.AnalyticsManager and org.my.group.MainConfiguration:

    quarkus.arc.selected-alternatives=MainConfiguration,AnalyticsManager

Verification

  • Run the Quarkus application and verify that it starts without errors:

    mvn quarkus:dev

11.2.3. Implement and test telemetry backend event handlers

Implement the AnalyticsManager event handling methods in your telemetry backend and test the backend in a running Dev Workspace to verify that events are received from the front-end plugin.

Prerequisites

Procedure

  1. Set the DEVWORKSPACE_TELEMETRY_BACKEND_PORT environment variable in the Dev Workspace. Here, the value is set to 4167.

    spec:
      template:
        attributes:
          workspaceEnv:
            - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT
              value: '4167'
  2. Restart the Dev Workspace from the Red Hat OpenShift Dev Spaces dashboard.
  3. Run the following command within a Dev Workspace’s terminal window to start the application. Use the --settings flag to specify the path to the settings.xml file that contains the GitHub access token.

    $ mvn --settings=settings.xml quarkus:dev -Dquarkus.http.port=${DEVWORKSPACE_TELEMETRY_BACKEND_PORT}

    The application now receives telemetry events through port 4167 from the front-end plugin. Verify that the following output is logged:

    INFO  [org.ecl.che.inc.AnalyticsManager] (Quarkus Main Thread) No welcome message provided
    INFO  [io.quarkus] (Quarkus Main Thread) devworkspace-telemetry-example-plugin 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 0.323s. Listening on: http://localhost:4167
    INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
    INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, kubernetes-client, rest-client, rest-client-jackson, resteasy, resteasy-jsonb, smallrye-context-propagation, smallrye-openapi, swagger-ui, vertx]
  4. Customize isEnabled() in AnalyticsManager.java. For this example, the method always returns true:

    @Override
    public boolean isEnabled() {
        return true;
    }

    The hosted OpenShift Dev Spaces Woopra backend demonstrates a more advanced isEnabled() implementation that checks for a configuration property before enabling the backend.

  5. Implement onEvent() to send events to the telemetry server. For the example application, it sends an HTTP POST payload to the /event endpoint.

    1. Configure the RESTEasy REST Client by creating a TelemetryService.java interface:

      package org.my.group;
      
      import java.util.Map;
      
      import javax.ws.rs.Consumes;
      import javax.ws.rs.POST;
      import javax.ws.rs.Path;
      import javax.ws.rs.core.MediaType;
      import javax.ws.rs.core.Response;
      
      import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
      
      @RegisterRestClient
      public interface TelemetryService {
          @POST
          @Path("/event")
          @Consumes(MediaType.APPLICATION_JSON)
          Response sendEvent(Map<String, Object> payload);
      }

      where:

      @Path("/event")
      The endpoint to make the POST request to.
    2. Specify the base URL for TelemetryService in src/main/resources/application.properties:

      org.my.group.TelemetryService/mp-rest/url=http://little-telemetry-server-che.apps-crc.testing
    3. Inject TelemetryService into AnalyticsManager.java and send a POST request in onEvent():

      @Dependent
      @Alternative
      public class AnalyticsManager extends AbstractAnalyticsManager {
          @Inject
          @RestClient
          TelemetryService telemetryService;
      
      ...
      
      @Override
      public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) {
          Map<String, Object> payload = new HashMap<String, Object>(properties);
          payload.put("event", event);
          telemetryService.sendEvent(payload);
      }

      This sends an HTTP request to the telemetry server and automatically delays identical events for a small period of time. The default duration is 1500 milliseconds.

  6. Implement increaseDuration() in AnalyticsManager.java. Many telemetry systems recognize event duration. The AbstractAnalyticsManager merges similar events that happen in the same frame of time into one event. This implementation is a no-op:

    @Override
    public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) {}
  7. Implement onActivity() in AnalyticsManager.java. Set an inactive timeout limit and send a WORKSPACE_INACTIVE event if the last event time exceeds the timeout:

    public class AnalyticsManager extends AbstractAnalyticsManager {
    
        ...
    
        private long inactiveTimeLimit = 60000 * 3;
    
        ...
    
        @Override
        public void onActivity() {
            if (System.currentTimeMillis() - lastEventTime >= inactiveTimeLimit) {
                onEvent(WORKSPACE_INACTIVE, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties);
            }
        }
    }
  8. Implement destroy() in AnalyticsManager.java. When called, send a WORKSPACE_STOPPED event and shut down any resources such as connection pools:

    @Override
    public void destroy() {
        onEvent(WORKSPACE_STOPPED, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties);
    }

Verification

  1. To verify that the onEvent() method receives events from the front-end plugin, press the l key to disable Quarkus live coding and edit any file within the IDE. The following output should be logged:

    INFO  [io.qua.dep.dev.RuntimeUpdatesProcessor] (Aesh InputStream Reader) Live reload disabled
    INFO  [org.ecl.che.inc.AnalyticsManager] (executor-thread-2) The received event is: Edit Workspace File in Che
  2. Stop the application with Ctrl+C and verify that a WORKSPACE_STOPPED event is sent to the server.

11.2.4. Deploy a telemetry plugin

Package the telemetry backend as a container image, create a devfile v2 plugin, and host the plugin on a web server so that Dev Workspaces can load it.

This guide demonstrates hosting the plugin on an Apache web server on OpenShift. In production, deploy the plugin file to a corporate web server.

Prerequisites

Procedure

  1. Package the Quarkus application as a container image and push it to a container registry by using one of the following options. See the Quarkus documentation for details.

    Option A: JVM image
    1. Create a Dockerfile.jvm:

      FROM registry.access.redhat.com/ubi8/openjdk-11:1.11
      
      ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
      
      COPY --chown=185 target/quarkus-app/lib/ /deployments/lib/
      COPY --chown=185 target/quarkus-app/*.jar /deployments/
      COPY --chown=185 target/quarkus-app/app/ /deployments/app/
      COPY --chown=185 target/quarkus-app/quarkus/ /deployments/quarkus/
      
      EXPOSE 8080
      USER 185
      
      # Shell form via sh -c so that ${DEVWORKSPACE_TELEMETRY_BACKEND_PORT} is expanded at runtime
      ENTRYPOINT ["sh", "-c", "java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Dquarkus.http.port=${DEVWORKSPACE_TELEMETRY_BACKEND_PORT} -jar /deployments/quarkus-run.jar"]
    2. Build and push the image:

      mvn package && \
      podman build -f src/main/docker/Dockerfile.jvm -t image:tag .
    Option B: Native image
    1. Create a Dockerfile.native:

      FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5
      WORKDIR /work/
      RUN chown 1001 /work \
          && chmod "g+rwX" /work \
          && chown 1001:root /work
      COPY --chown=1001:root target/*-runner /work/application
      
      EXPOSE 8080
      USER 1001
      
      # Shell form via sh -c so that ${DEVWORKSPACE_TELEMETRY_BACKEND_PORT} is expanded at runtime
      CMD ["sh", "-c", "./application -Dquarkus.http.host=0.0.0.0 -Dquarkus.http.port=${DEVWORKSPACE_TELEMETRY_BACKEND_PORT}"]
    2. Build and push the image:

      mvn package -Pnative -Dquarkus.native.container-build=true && \
      podman build -f src/main/docker/Dockerfile.native -t image:tag .
  2. Create a plugin.yaml devfile v2 file representing a Dev Workspace plugin that runs your custom backend in a Dev Workspace Pod. For more information about devfile v2, see the Devfile v2 documentation.

    schemaVersion: 2.1.0
    metadata:
      name: devworkspace-telemetry-backend-plugin
      version: 0.0.1
      description: A Demo telemetry backend
      displayName: Devworkspace Telemetry Backend
    components:
      - name: devworkspace-telemetry-backend-plugin
        attributes:
          workspaceEnv:
            - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT
              value: '4167'
        container:
          image: <your_image>
          env:
            - name: WELCOME_MESSAGE
              value: 'hello world!'

    where:

    <your_image>
    The container image built in the previous step.
    WELCOME_MESSAGE
    Set the value for the welcome.message optional configuration property.
  3. Create a ConfigMap object that references the plugin.yaml file:

    $ oc create configmap --from-file=plugin.yaml -n openshift-devspaces telemetry-plugin-yaml
  4. Create a manifest.yaml file with a Deployment, a Service, and a Route to expose the Apache web server. The Deployment references this ConfigMap object and places the plugin.yaml in the /var/www/html directory.

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: apache
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: apache
      template:
        metadata:
          labels:
            app: apache
        spec:
          volumes:
            - name: plugin-yaml
              configMap:
                name: telemetry-plugin-yaml
                defaultMode: 420
          containers:
            - name: apache
              image: 'registry.redhat.io/rhscl/httpd-24-rhel7:latest'
              ports:
                - containerPort: 8080
                  protocol: TCP
              resources: {}
              volumeMounts:
                - name: plugin-yaml
                  mountPath: /var/www/html
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 25%
          maxSurge: 25%
      revisionHistoryLimit: 10
      progressDeadlineSeconds: 600
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: apache
    spec:
      ports:
        - protocol: TCP
          port: 8080
          targetPort: 8080
      selector:
        app: apache
      type: ClusterIP
    ---
    kind: Route
    apiVersion: route.openshift.io/v1
    metadata:
      name: apache
    spec:
      host: apache-che.apps-crc.testing
      to:
        kind: Service
        name: apache
        weight: 100
      port:
        targetPort: 8080
      wildcardPolicy: None
  5. Apply the manifest:

    $ oc apply -f manifest.yaml

Verification

  • After the deployment has started, confirm that plugin.yaml is available in the web server:

    $ curl apache-che.apps-crc.testing/plugin.yaml

11.2.5. Configure workspaces to load a telemetry plugin

Add the telemetry plugin to Dev Workspaces so that workspace activity events are sent to your telemetry backend for collection and analysis.

Prerequisites

Procedure

  1. Add the telemetry plugin to the components field of an existing Dev Workspace:

    components:
      ...
      - name: telemetry-plugin
        plugin:
          uri: <telemetry_plugin_url>
  2. Start the Dev Workspace from the OpenShift Dev Spaces dashboard.
  3. Optional: Configure the CheCluster Custom Resource to apply the telemetry plugin as a default for all Dev Workspaces. Default plugins are applied on Dev Workspace startup for new and existing Dev Workspaces.

    spec:
      devEnvironments:
        defaultPlugins:
        - editor: eclipse/che-theia/next
          plugins:
          - '<telemetry_plugin_url>'

    where:

    editor
    The editor identification to set the default plugins for.
    plugins
    List of URLs to devfile v2 plugins.

Verification

  1. Verify that the telemetry plugin container is running in the Dev Workspace pod by checking the Workspace view within the editor.

    Dev Workspace telemetry plugin
  2. Edit files within the editor and observe their events in the example telemetry server’s logs.

11.3. Server logging

Fine-tune the log levels of individual loggers available in the OpenShift Dev Spaces server to control output verbosity and isolate issues during troubleshooting.

The log level of the whole OpenShift Dev Spaces server is configured globally using the cheLogLevel configuration property of the Operator. To set the global log level in installations not managed by the Operator, specify the CHE_LOG_LEVEL environment variable in the che ConfigMap.

You can configure the log levels of individual loggers in the OpenShift Dev Spaces server by using the CHE_LOGGER_CONFIG environment variable.

The names of the loggers follow the class names of the internal server classes that use those loggers.

11.3.1. Configure log levels

Configure the log levels of individual loggers in the OpenShift Dev Spaces server using the CHE_LOGGER_CONFIG environment variable to control log verbosity and simplify troubleshooting.

Prerequisites

Procedure

  1. Configure the CheCluster Custom Resource. See Section 5.3, “Use the CLI to configure the CheCluster Custom Resource”.

    spec:
      components:
        cheServer:
          extraProperties:
            CHE_LOGGER_CONFIG: "<key1=value1,key2=value2>"

    where:

    <key1=value1,key2=value2>

    Comma-separated list of key-value pairs, where keys are the names of the loggers as seen in the OpenShift Dev Spaces server log output and values are the required log levels.

    For example, to configure debug mode for the WorkspaceManager:

    spec:
      components:
        cheServer:
          extraProperties:
            CHE_LOGGER_CONFIG: "org.eclipse.che.api.workspace.server.WorkspaceManager=DEBUG"
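    CHE_LOGGER_CONFIG is parsed as comma-separated logger=level pairs. A quick local check of how a value splits (the logger names here are examples from this section):

    ```shell
    CHE_LOGGER_CONFIG='org.eclipse.che.api.workspace.server.WorkspaceManager=DEBUG,che.infra.request-logging=TRACE'

    # Split on ',' into entries, then on the first '=' into logger name and level
    echo "$CHE_LOGGER_CONFIG" | tr ',' '\n' | while IFS='=' read -r logger level; do
      printf '%s -> %s\n' "$logger" "$level"
    done
    ```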

Verification

  • Verify that the log level is applied by checking the OpenShift Dev Spaces server logs:

    $ oc logs deployment/devspaces -n openshift-devspaces | grep -i "log level"

11.3.2. Log HTTP traffic

Log the HTTP traffic between the OpenShift Dev Spaces server and the API server of the Kubernetes or OpenShift cluster to troubleshoot communication issues and debug API errors.

Prerequisites

Procedure

  1. Configure the CheCluster Custom Resource:

    spec:
      components:
        cheServer:
          extraProperties:
            CHE_LOGGER_CONFIG: "che.infra.request-logging=TRACE"

Verification

  • Verify that HTTP traffic is logged in the OpenShift Dev Spaces server logs:

    $ oc logs deploy/devspaces -n openshift-devspaces | grep "request-logging"

11.4. Log collection with dsc

The dsc management tool provides commands to collect OpenShift Dev Spaces logs for troubleshooting and diagnostics. These commands automate log collection from the multiple containers that comprise a Red Hat OpenShift Dev Spaces installation in the OpenShift cluster.

dsc server:logs

Collects existing Red Hat OpenShift Dev Spaces server logs and stores them in a directory on the local machine. By default, logs are downloaded to a temporary directory. You can override this by specifying the -d parameter. For example, to download OpenShift Dev Spaces logs to the /home/user/che-logs/ directory, use the following command:

dsc server:logs -d /home/user/che-logs/

When run, dsc server:logs prints a message in the console specifying the directory that stores the log files:

Red Hat OpenShift Dev Spaces logs will be available in '/tmp/chectl-logs/1648575098344'

If Red Hat OpenShift Dev Spaces is installed in a non-default project, dsc server:logs requires the -n <NAMESPACE> parameter, where <NAMESPACE> is the project in which Red Hat OpenShift Dev Spaces was installed. For example, to get logs from OpenShift Dev Spaces in the my-namespace project, use the following command:

dsc server:logs -n my-namespace
dsc server:deploy
Logs are automatically collected during the OpenShift Dev Spaces installation when installed using dsc. As with dsc server:logs, the directory logs are stored in can be specified using the -d parameter.

11.5. Dev Workspace Operator metrics

The Dev Workspace Operator exposes workspace startup, failure, and performance metrics on port 8443 on the /metrics endpoint of the devworkspace-controller-metrics Service. The OpenShift in-cluster monitoring stack can scrape these metrics to help administrators track workspace health and diagnose startup failures.

11.5.1. Dev Workspace-specific metrics

The following tables describe the Dev Workspace-specific metrics exposed by the devworkspace-controller-metrics Service.

Table 11.1. Metrics

Name | Type | Description | Labels
devworkspace_started_total | Counter | Number of Dev Workspace starting events. | source, routingclass
devworkspace_started_success_total | Counter | Number of Dev Workspaces successfully entering the Running phase. | source, routingclass
devworkspace_fail_total | Counter | Number of failed Dev Workspaces. | source, reason
devworkspace_startup_time | Histogram | Total time taken to start a Dev Workspace, in seconds. | source, routingclass

Table 11.2. Labels

Name | Description | Values
source | The controller.devfile.io/devworkspace-source label of the Dev Workspace. | string
routingclass | The spec.routingclass of the Dev Workspace. | "basic|cluster|cluster-tls|web-terminal"
reason | The workspace startup failure reason. | "BadRequest|InfrastructureFailure|Unknown"

Table 11.3. Startup failure reasons

Name | Description
BadRequest | Startup failure due to an invalid devfile used to create a Dev Workspace.
InfrastructureFailure | Startup failure due to the following errors: CreateContainerError, RunContainerError, FailedScheduling, FailedMount.
Unknown | Unknown failure reason.

11.5.2. Dev Workspace Operator dashboard panels

The OpenShift web console custom dashboard is based on Grafana 6.x and displays the following metrics from the Dev Workspace Operator.

Note

Not all features for Grafana 6.x dashboards are supported as an OpenShift web console dashboard.

The Dev Workspace Metrics panel displays Dev Workspace-specific metrics.

Figure 11.1. The Dev Workspace Metrics panel

Grafana dashboard panels that contain metrics related to DevWorkspace startup
Average workspace start time
The average workspace startup duration.
Workspace starts
The number of successful and failed workspace startups.
Dev Workspace successes and failures
A comparison between successful and failed Dev Workspace startups.
Dev Workspace failure rate
The ratio between the number of failed workspace startups and the number of total workspace startups.
Dev Workspace startup failure reasons

A pie chart that displays the distribution of workspace startup failures:

  • BadRequest
  • InfrastructureFailure
  • Unknown
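The failure rate and failure-reason panels can be approximated with PromQL expressions over the counters from Table 11.1. The following queries are a sketch, not the exact expressions used by the dashboard:

```
# Ratio of failed startups to total startups over the last hour
sum(increase(devworkspace_fail_total[1h])) / sum(increase(devworkspace_started_total[1h]))

# Distribution of startup failures by reason, for the pie chart
sum by (reason) (devworkspace_fail_total)
```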

The Operator Metrics panel displays Operator-specific metrics.

Figure 11.2. The Operator Metrics panel

Grafana dashboard panels that contain Operator metrics
Webhooks in flight
A comparison between the number of different webhook requests.
Work queue depth
The number of reconcile requests that are in the work queue.
Memory
Memory usage for the Dev Workspace controller and the Dev Workspace webhook server.
Average reconcile counts per second (DWO)
The average per-second number of reconcile counts for the Dev Workspace controller.

11.6. Collect Dev Workspace Operator metrics with Prometheus

Create the required ServiceMonitor and enable namespace monitoring to collect, store, and query Dev Workspace Operator metrics from the in-cluster Prometheus instance.

Prerequisites

Procedure

  1. Create the ServiceMonitor for detecting the Dev Workspace Operator metrics Service:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: devworkspace-controller
      namespace: openshift-devspaces
    spec:
      endpoints:
        - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
          interval: 10s
          port: metrics
          scheme: https
          tlsConfig:
            insecureSkipVerify: true
      namespaceSelector:
        matchNames:
          - openshift-operators
      selector:
        matchLabels:
          app.kubernetes.io/name: devworkspace-controller

    where:

    namespace
    The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
    interval
    The rate at which a target is scraped.
  2. Allow the in-cluster Prometheus instance to detect the ServiceMonitor by labeling the OpenShift Dev Spaces namespace:

    $ oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true

Verification

  1. For a fresh installation of OpenShift Dev Spaces, generate metrics by creating an OpenShift Dev Spaces workspace from the Dashboard.
  2. In the Administrator view of the OpenShift web console, go to Observe → Metrics.
  3. Run a PromQL query to confirm that the metrics are available. For example, enter devworkspace_started_total and click Run queries. The query returns data points showing the total number of started workspaces.

Troubleshooting

  • To troubleshoot missing metrics, view the Prometheus container logs for possible RBAC-related errors.

    1. Get the name of the Prometheus pod:

      $ oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'
    2. Print the last 20 lines of the Prometheus container logs from the Prometheus pod from the previous step:

      $ oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring

11.7. View Dev Workspace Operator metrics from an OpenShift web console dashboard

View Dev Workspace Operator metrics on a custom dashboard in the Administrator perspective of the OpenShift web console. This dashboard helps you monitor operator health and detect workspace provisioning issues.

Prerequisites

Procedure

  1. Create a ConfigMap for the dashboard definition in the openshift-config-managed project and apply the necessary label.

    1. $ oc create configmap grafana-dashboard-dwo \
        --from-literal=dwo-dashboard.json="$(curl https://raw.githubusercontent.com/devfile/devworkspace-operator/main/docs/grafana/openshift-console-dashboard.json)" \
        -n openshift-config-managed
      Note

      The previous command contains a link to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat’s QE department, and they have not yet been proven by a wide user group. Please use this information cautiously.

    2. $ oc label configmap grafana-dashboard-dwo console.openshift.io/dashboard=true -n openshift-config-managed
      Note

      The dashboard definition is based on Grafana 6.x dashboards. Not all Grafana 6.x dashboard features are supported in the OpenShift web console.

Verification

  1. In the Administrator view of the OpenShift web console, go to Observe → Dashboards.
  2. Go to Dashboard → Dev Workspace Operator and verify that the dashboard panels contain data.

11.8. OpenShift Dev Spaces server monitoring

The OpenShift Dev Spaces server exposes JVM metrics such as memory usage and class loading on port 8087 on the /metrics endpoint. Monitoring these metrics helps administrators identify performance bottlenecks and plan server capacity.

11.9. Enable and expose OpenShift Dev Spaces Server metrics

OpenShift Dev Spaces exposes the JVM metrics on port 8087 of the che-host Service. Configure this behavior to support performance monitoring and capacity planning.

Prerequisites

Procedure

  1. Configure the CheCluster Custom Resource. See Section 5.3, “Use the CLI to configure the CheCluster Custom Resource”.

    spec:
      components:
        metrics:
          enable: <boolean>

    where:

    <boolean>
    true to enable, false to disable.

Verification

  • Verify the metrics endpoint is accessible:

    oc get service che-host -n openshift-devspaces -o jsonpath='{.spec.ports[?(@.port==8087)]}'

11.10. Collect OpenShift Dev Spaces Server metrics with Prometheus

Create the required ServiceMonitor, Role, and RoleBinding objects to collect, store, and query JVM metrics for the OpenShift Dev Spaces Server from the in-cluster Prometheus instance.

Prerequisites

Procedure

  1. Create the ServiceMonitor for detecting the OpenShift Dev Spaces JVM metrics Service:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: che-host
      namespace: openshift-devspaces
    spec:
      endpoints:
        - interval: 10s
          port: metrics
          scheme: http
      namespaceSelector:
        matchNames:
          - openshift-devspaces
      selector:
        matchLabels:
          app.kubernetes.io/name: devspaces

    where:

    namespace
    The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
    interval
    The rate at which a target is scraped.
  2. Create a Role to allow Prometheus to view the metrics:

    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: prometheus-k8s
      namespace: openshift-devspaces
    rules:
      - verbs:
          - get
          - list
          - watch
        apiGroups:
          - ''
        resources:
          - services
          - endpoints
          - pods

    where:

    namespace
    The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
  3. Create a RoleBinding to bind the Role to the Prometheus service account:

    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: view-devspaces-openshift-monitoring-prometheus-k8s
      namespace: openshift-devspaces
    subjects:
      - kind: ServiceAccount
        name: prometheus-k8s
        namespace: openshift-monitoring
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: prometheus-k8s

    where:

    namespace
    The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
  4. Allow the in-cluster Prometheus instance to detect the ServiceMonitor by labeling the OpenShift Dev Spaces namespace:

    $ oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true

Verification

  1. In the Administrator view of the OpenShift web console, go to Observe → Metrics.
  2. Run a PromQL query to confirm that the metrics are available. For example, enter process_uptime_seconds{job="che-host"} and click Run queries. The query returns data points showing the OpenShift Dev Spaces Server uptime.

Troubleshooting

To troubleshoot missing metrics, view the Prometheus container logs for possible RBAC-related errors.

  1. Get the name of the Prometheus pod:

    $ oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'
  2. Print the last 20 lines of the Prometheus container logs from the Prometheus pod from the previous step:

    $ oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring

11.11. View OpenShift Dev Spaces Server from an OpenShift web console dashboard

View OpenShift Dev Spaces Server JVM metrics on a custom dashboard in the Administrator perspective of the OpenShift web console. This dashboard helps you identify performance bottlenecks and monitor server health.

Prerequisites

Procedure

  1. Create a ConfigMap for the dashboard definition in the openshift-config-managed project and apply the necessary label.

    1. $ oc create configmap grafana-dashboard-devspaces-server \
        --from-literal=devspaces-server-dashboard.json="$(curl https://raw.githubusercontent.com/eclipse-che/che-server/main/docs/grafana/openshift-console-dashboard.json)" \
        -n openshift-config-managed
      Note

      The previous command contains a link to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat’s QE department, and they have not yet been proven by a wide user group. Please use this information cautiously.

    2. $ oc label configmap grafana-dashboard-devspaces-server console.openshift.io/dashboard=true -n openshift-config-managed
      Note

      The dashboard definition is based on Grafana 6.x dashboards. Not all Grafana 6.x dashboard features are supported in the OpenShift web console.

Verification

  1. In the Administrator view of the OpenShift web console, go to Observe → Dashboards.
  2. Go to Dashboard → Che Server JVM and verify that the dashboard panels contain data.

    Figure 11.3. Quick Facts

    The *JVM quick facts* panel

    Figure 11.4. JVM Memory

    The *JVM Memory* panel

    Figure 11.5. JVM Misc

    The *JVM Misc* panel

    Figure 11.6. JVM Memory Pools (heap)

    The *JVM Memory Pools (heap)* panel

    Figure 11.7. JVM Memory Pools (Non-Heap)

    The *JVM Memory Pools (non-heap)* panel

    Figure 11.8. Garbage Collection

    The *JVM garbage collection* panel

    Figure 11.9. Class loading

    The *JVM class loading* panel

    Figure 11.10. Buffer Pools

    The *JVM buffer pools* panel

Chapter 12. Configure networking

Configure networking for OpenShift Dev Spaces to secure communications, enable custom routing, and support restricted environments through network policies, TLS certificates, custom hostnames, and proxy settings.

12.1. Configure network policies

By default, all Pods in an OpenShift cluster can communicate across namespaces. Configure network policies to restrict traffic between workspace Pods in different user projects to improve security through multitenant isolation.

With multitenant isolation, NetworkPolicy objects restrict all incoming traffic to Pods in a user project. However, Pods in the OpenShift Dev Spaces project must still communicate with Pods in user projects.

Prerequisites

  • You have an OpenShift cluster with network restrictions such as multitenant isolation.

Procedure

  1. Create an allow-from-openshift-devspaces.yaml file. The allow-from-openshift-devspaces NetworkPolicy allows incoming traffic from the OpenShift Dev Spaces namespace to all Pods in the user project.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
        name: allow-from-openshift-devspaces
    spec:
        ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                    kubernetes.io/metadata.name: openshift-devspaces
        podSelector: {}
        policyTypes:
        - Ingress

    where:

    kubernetes.io/metadata.name: openshift-devspaces
    Selects traffic from the OpenShift Dev Spaces namespace. The default namespace is openshift-devspaces.
    podSelector: {}
    The empty podSelector selects all Pods in the project.
  2. Apply the allow-from-openshift-devspaces NetworkPolicy to each user project:

    oc apply -f allow-from-openshift-devspaces.yaml -n <user_namespace>
  3. Optional: If you configured multitenant isolation with network policy, create and apply the allow-from-openshift-apiserver and allow-from-workspaces-namespaces NetworkPolicies to openshift-devspaces. The allow-from-openshift-apiserver NetworkPolicy allows incoming traffic from the openshift-apiserver namespace to the devworkspace-webhook-server, enabling webhooks. The allow-from-workspaces-namespaces NetworkPolicy allows incoming traffic from each user project to the che-gateway pod.

    1. Create an allow-from-openshift-apiserver.yaml file:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-openshift-apiserver
        namespace: openshift-devspaces
      spec:
        podSelector:
          matchLabels:
            app.kubernetes.io/name: devworkspace-webhook-server
        ingress:
          - from:
              - podSelector: {}
                namespaceSelector:
                  matchLabels:
                    kubernetes.io/metadata.name: openshift-apiserver
        policyTypes:
          - Ingress

      where:

      namespace: openshift-devspaces
      The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
      app.kubernetes.io/name: devworkspace-webhook-server
      The podSelector only selects devworkspace-webhook-server pods.
    2. Create an allow-from-workspaces-namespaces.yaml file:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-workspaces-namespaces
        namespace: openshift-devspaces
      spec:
        podSelector: {}
        ingress:
          - from:
              - podSelector: {}
                namespaceSelector:
                  matchLabels:
                    app.kubernetes.io/component: workspaces-namespace
        policyTypes:
          - Ingress

      where:

      namespace: openshift-devspaces
      The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
      podSelector: {}
      The empty podSelector selects all pods in the OpenShift Dev Spaces namespace.
    3. Apply both NetworkPolicies:

      oc apply -f allow-from-openshift-apiserver.yaml -n openshift-devspaces
      oc apply -f allow-from-workspaces-namespaces.yaml -n openshift-devspaces

Verification

  • Verify that the NetworkPolicy is applied in the user namespace:

    oc get networkpolicy -n <user_namespace>
  • Start a workspace and verify that the workspace can communicate with the OpenShift Dev Spaces server.

12.2. Configure OpenShift Dev Spaces hostname

Configure OpenShift Dev Spaces to use a custom hostname instead of the default cluster-assigned URL to align with corporate DNS standards and branding requirements.

Prerequisites

Procedure

  1. Pre-create a project for OpenShift Dev Spaces:

    $ oc create project openshift-devspaces
  2. Create a TLS secret:

    $ oc create secret tls <tls_secret_name> \
    --key <key_file> \
    --cert <cert_file> \
    -n openshift-devspaces

    where:

    <tls_secret_name>
    The TLS secret name.
    --key
    A file with the private key.
    --cert
    A file with the certificate.
  3. Add the required labels to the secret:

    $ oc label secret <tls_secret_name> \
    app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces

    where:

    <tls_secret_name>
    The TLS secret name.
  4. Configure the CheCluster Custom Resource:

    spec:
      networking:
        hostname: <hostname>
        tlsSecretName: <secret>

    where:

    <hostname>
    Custom Red Hat OpenShift Dev Spaces server hostname.
    <secret>
    The TLS secret name.
  5. If OpenShift Dev Spaces is already deployed, wait for the rollout of all OpenShift Dev Spaces components to complete.

Verification

  • Verify that the OpenShift Dev Spaces Dashboard is accessible at the custom hostname.

12.3. Import untrusted TLS certificates to OpenShift Dev Spaces

Import TLS certificate authority (CA) chains for external services into OpenShift Dev Spaces. This enables the server, dashboard, and workspaces to establish trusted encrypted connections to proxies, identity providers, and Git servers.

OpenShift Dev Spaces uses labeled ConfigMaps in OpenShift Dev Spaces project as sources for TLS certificates. The ConfigMaps can have an arbitrary amount of keys with an arbitrary amount of certificates each. All certificates are mounted into:

  • /public-certs location of OpenShift Dev Spaces server and dashboard pods
  • /etc/pki/ca-trust/extracted/pem locations of workspaces pods

Note

To keep the behavior of the previous version, configure the CheCluster Custom Resource to disable mounting the CA bundle under the /etc/pki/ca-trust/extracted/pem path. The certificates are then mounted under the /public-certs path instead:

spec:
  devEnvironments:
    trustedCerts:
      disableWorkspaceCaBundleMount: true
Important

On an OpenShift cluster, the OpenShift Dev Spaces Operator automatically adds the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle to the mounted certificates.

Prerequisites

Procedure

  1. Concatenate all of the CA chain PEM files to import into a single custom-ca-certificates.pem file, and remove the carriage return characters, which are incompatible with the Java truststore:

    $ cat ca-cert-for-devspaces-*.pem | tr -d '\r' > custom-ca-certificates.pem
  2. Create the custom-ca-certificates ConfigMap with the required TLS certificates:

    $ oc create configmap custom-ca-certificates \
        --from-file=custom-ca-certificates.pem \
        --namespace=openshift-devspaces
  3. Label the custom-ca-certificates ConfigMap:

    $ oc label configmap custom-ca-certificates \
        app.kubernetes.io/component=ca-bundle \
        app.kubernetes.io/part-of=che.eclipse.org \
        --namespace=openshift-devspaces
  4. Deploy OpenShift Dev Spaces if it has not been deployed before. Otherwise, wait until the rollout of OpenShift Dev Spaces components finishes.
  5. Restart running workspaces for the changes to take effect.
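The carriage-return stripping performed in step 1 can be illustrated locally with dummy files before you work with real certificates. This is a self-contained sketch; the file names mirror those used above, and the contents are placeholders, not real certificates:

```shell
# Create two dummy PEM-like files with Windows (CRLF) line endings
printf -- '-----BEGIN CERTIFICATE-----\r\nAAAA\r\n-----END CERTIFICATE-----\r\n' > ca-cert-for-devspaces-1.pem
printf -- '-----BEGIN CERTIFICATE-----\r\nBBBB\r\n-----END CERTIFICATE-----\r\n' > ca-cert-for-devspaces-2.pem

# Concatenate and strip carriage returns, as in step 1
cat ca-cert-for-devspaces-*.pem | tr -d '\r' > custom-ca-certificates.pem

# Confirm that no carriage returns remain in the merged file
if ! grep -q "$(printf '\r')" custom-ca-certificates.pem; then
  echo "no carriage returns"
fi
```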

Verification

  1. Verify that the ConfigMap contains your custom CA certificates. This command returns CA bundle certificates in PEM format:

    oc get configmap \
        --namespace=openshift-devspaces \
        --output='jsonpath={.items[0:].data.custom-ca-certificates\.pem}' \
        --selector=app.kubernetes.io/component=ca-bundle,app.kubernetes.io/part-of=che.eclipse.org
  2. Verify in the OpenShift Dev Spaces server logs that the imported certificates count is not null:

    oc logs deploy/devspaces --namespace=openshift-devspaces \
        | grep tls-ca-bundle.pem
  3. Start a workspace, get the project name in which it has been created: <workspace_namespace>, and wait for the workspace to be started.
  4. Verify that the ca-certs-merged ConfigMap contains your custom CA certificates. This command returns OpenShift Dev Spaces CA bundle certificates in PEM format:

    oc get configmap ca-certs-merged \
        --namespace=<workspace_namespace> \
        --output='jsonpath={.data.tls-ca-bundle\.pem}'
  5. Verify that the workspace pod mounts the ca-certs-merged ConfigMap:

    oc get pod \
        --namespace=<workspace_namespace> \
        --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
        --output='jsonpath={.items[0:].spec.volumes[0:].configMap.name}' \
        | grep ca-certs-merged
  6. Get the workspace pod name <workspace_pod_name>:

    oc get pod \
        --namespace=<workspace_namespace> \
        --selector='controller.devfile.io/devworkspace_name=<workspace_name>' \
        --output='jsonpath={.items[0:].metadata.name}'
  7. Verify that the workspace container has your custom CA certificates. This command returns OpenShift Dev Spaces CA bundle certificates in PEM format:

    oc exec <workspace_pod_name> \
        --namespace=<workspace_namespace> \
        -- cat /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem

    Or, if disableWorkspaceCaBundleMount is set to true:

    oc exec <workspace_pod_name> \
        --namespace=<workspace_namespace> \
        -- cat /public-certs/tls-ca-bundle.pem

12.4. Configure OpenShift Route to work with Router Sharding

Configure labels, annotations, and domains for OpenShift Route to direct OpenShift Dev Spaces traffic to the correct ingress controller when using Router Sharding on an OpenShift cluster.

Prerequisites

Procedure

  1. Configure the CheCluster Custom Resource. See Section 5.3, “Use the CLI to configure the CheCluster Custom Resource”.

    spec:
      networking:
        labels: <labels>
        domain: <domain>
        annotations: <annotations>

    where:

    <labels>
    An unstructured key value map of labels that the target ingress controller uses to filter the set of Routes to service.
    <domain>
    The DNS name serviced by the target ingress controller.
    <annotations>
    An unstructured key value map stored with a resource.

Verification

  • Verify that OpenShift Dev Spaces routes have the configured labels and annotations:

    oc get routes -n openshift-devspaces -o yaml

12.5. Configure workspace endpoints base domain

Configure a custom base domain for workspace endpoints to align URLs with your organization’s DNS naming conventions. By default, the OpenShift Dev Spaces Operator detects the base domain automatically.

Prerequisites

Procedure

  1. Set the CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX field in the CheCluster Custom Resource:

    spec:
      components:
        cheServer:
          extraProperties:
            CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX: "<base_domain>"

    where:

    <base_domain>
    Workspace endpoints base domain, for example, my-devspaces.example.com.
  2. Apply the change:

    oc patch checluster/devspaces \
        --namespace openshift-devspaces \
        --type='merge' -p \
    '{"spec":
        {"components":
            {"cheServer":
                {"extraProperties":
                    {"CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX": "my-devspaces.example.com"}}}}}'

Verification

  • Verify the CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX value in the CheCluster Custom Resource:

    oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.spec.components.cheServer.extraProperties.CHE_INFRA_OPENSHIFT_ROUTE_HOST_DOMAIN__SUFFIX}'

12.6. Configure proxy

Configure a proxy for Red Hat OpenShift Dev Spaces by creating a Kubernetes Secret for proxy credentials and configuring the necessary proxy settings in the CheCluster custom resource. The proxy settings are propagated to the operands and workspaces through environment variables.

On an OpenShift cluster, you do not need to configure proxy settings. OpenShift Dev Spaces Operator automatically uses the OpenShift cluster-wide proxy configuration. However, you can override the proxy settings by specifying them in the CheCluster custom resource.

Prerequisites

Procedure

  1. Optional: Create a Secret in the openshift-devspaces namespace that contains a user and password for a proxy server. The secret must have the app.kubernetes.io/part-of=che.eclipse.org label. Skip this step if the proxy server does not require authentication.

    oc apply -f - <<EOF
    kind: Secret
    apiVersion: v1
    metadata:
      name: devspaces-proxy-credentials
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
    type: Opaque
    stringData:
      user: <user>
      password: <password>
    EOF

    where:

    <user>
    The username for the proxy server.
    <password>
    The password for the proxy server.
  2. Configure the proxy or override the cluster-wide proxy configuration for an OpenShift cluster by setting the following properties in the CheCluster custom resource:

    oc patch checluster/devspaces \
        --namespace openshift-devspaces \
        --type='merge' -p \
    '{"spec":
        {"components":
            {"cheServer":
                {"proxy":
                    {"credentialsSecretName" : "<secretName>",
                     "nonProxyHosts"         : ["<host_1>"],
                     "port"                  : "<port>",
                     "url"                   : "<protocol>://<domain>"}}}}}'

    where:

    <secretName>
    The credentials secret name created in the previous step.
    <host_1>
    The list of hosts that can be reached directly, without using the proxy. Use the form .<DOMAIN> to specify a wildcard domain. The OpenShift Dev Spaces Operator automatically adds .svc and the Kubernetes service host to the list of non-proxy hosts. In OpenShift, the Operator combines the non-proxy host list from the cluster-wide proxy configuration with the list in the custom resource. In some proxy configurations, localhost might not resolve to 127.0.0.1; in that case, specify both localhost and 127.0.0.1.
    <port>
    The port of the proxy server.
    <protocol>://<domain>
    Protocol and domain of the proxy server.
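The same settings as in the oc patch command above can be kept declaratively in the CheCluster Custom Resource. In this fragment, the host names and port are placeholders, and the secret name matches the one created in the optional first step:

```yaml
spec:
  components:
    cheServer:
      proxy:
        credentialsSecretName: devspaces-proxy-credentials
        nonProxyHosts:
          - localhost
          - 127.0.0.1
          - .example.com      # wildcard domain reached without the proxy
        port: "3128"
        url: http://proxy.example.com
```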

Verification

  1. Start a workspace.
  2. Verify that the workspace pod contains HTTP_PROXY, HTTPS_PROXY, http_proxy, and https_proxy environment variables, each set to <protocol>://<user>:<password>@<domain>:<port>.
  3. Verify that the workspace pod contains NO_PROXY and no_proxy environment variables, each set to a comma-separated list of non-proxy hosts.

Chapter 13. Configure storage

Configure storage for OpenShift Dev Spaces workspaces, including storage classes, strategies, and sizes.

13.1. Workspace storage requirements

OpenShift Dev Spaces workspaces store project files in a hierarchical directory structure and require specific storage capabilities depending on the selected strategy.

All workspace storage must use volumeMode: Filesystem.

The per-user storage strategy shares a single Persistent Volume Claim (PVC) across all of a user’s workspaces. This requires ReadWriteMany (RWX) access mode so that multiple workspace pods can mount the same volume simultaneously.

13.1.1. Choosing a storage backend for the Per-User strategy

Generic NFS provisioning supports RWX access but has two operational limitations:

  • Quota enforcement: Kubernetes PVCs cannot reliably enforce storage quotas on generic NFS volumes. A single workspace can exceed its allocation and consume the entire shared volume, causing instability for all users on that node.
  • Data integrity: Generic NFS implementations often lack the locking and cache coherency required when multiple cluster nodes access the same volume concurrently.

To avoid these issues, use a certified clustered or managed storage solution with a CSI driver that enforces quota limits and provides high-performance RWX file access. Most cloud providers offer suitable CSI drivers, and community-supported distributed storage projects are also available.
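As an illustration, a Persistent Volume Claim meeting these requirements for the per-user strategy might look like the following sketch. The claim name and size are illustrative, and the storage class name is a placeholder that must refer to a CSI driver providing RWX access:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-per-user-claim
spec:
  accessModes:
    - ReadWriteMany        # required so multiple workspace pods can mount the volume
  volumeMode: Filesystem   # required for workspace storage
  resources:
    requests:
      storage: 10Gi
  storageClassName: <rwx_csi_storage_class>
```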

13.2. Configure storage classes

To configure OpenShift Dev Spaces to use a configured infrastructure storage, install OpenShift Dev Spaces using storage classes. This is especially useful when you want to bind a persistent volume provided by a non-default provisioner.

OpenShift Dev Spaces has one component that requires persistent volumes to store data:

  • An OpenShift Dev Spaces workspace. OpenShift Dev Spaces workspaces store source code in volumes, for example, the /projects volume.
Note

OpenShift Dev Spaces workspace source code is stored in a persistent volume only if the workspace is not ephemeral.

Persistent volume claim facts:

  • OpenShift Dev Spaces does not create persistent volumes in the infrastructure.
  • OpenShift Dev Spaces uses persistent volume claims (PVC) to mount persistent volumes.
  • The Dev Workspace operator creates persistent volume claims.

To use the storage classes feature for OpenShift Dev Spaces PVCs, define a storage class name in the OpenShift Dev Spaces configuration.

Use the CheCluster Custom Resource definition to define storage classes:

Prerequisites

Procedure

  1. Define storage class names: configure the CheCluster Custom Resource, and install OpenShift Dev Spaces. See Section 5.2, “Use dsc to configure the CheCluster Custom Resource during installation”.

    spec:
      devEnvironments:
        storage:
          perUserStrategyPvcConfig:
            claimSize: <claim_size>
            storageClass: <storage_class_name>
          perWorkspaceStrategyPvcConfig:
            claimSize: <claim_size>
            storageClass: <storage_class_name>
          pvcStrategy: <pvc_strategy>

    where:

    claimSize
    Persistent Volume Claim size.
    storageClass
    Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used.
    pvcStrategy

    Persistent volume claim strategy. The supported strategies are:

    • per-user: All of a user’s workspaces share a single Persistent Volume Claim.
    • per-workspace: Each workspace gets its own individual Persistent Volume Claim.
    • ephemeral: Non-persistent storage. Local changes are lost when the workspace stops.
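For example, a configuration that selects the per-workspace strategy with a hypothetical gp3-csi storage class and a 15Gi claim size might look like this:

```yaml
spec:
  devEnvironments:
    storage:
      perWorkspaceStrategyPvcConfig:
        claimSize: 15Gi
        storageClass: gp3-csi   # hypothetical storage class name
      pvcStrategy: per-workspace
```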

Verification

  • Start a workspace and verify that the PersistentVolumeClaim uses the configured storage class:

    oc get pvc -n <user_namespace> -o jsonpath='{.items[*].spec.storageClassName}'

13.3. Configure the storage strategy

Configure OpenShift Dev Spaces to provide persistent or non-persistent storage to workspaces by selecting a storage strategy. The selected strategy applies to all newly created workspaces by default.

Available storage strategies:

  • per-user: Use a single PVC for all workspaces created by a user.
  • per-workspace: Each workspace gets its own PVC.
  • ephemeral: Non-persistent storage; any local changes are lost when the workspace is stopped.

The default storage strategy used in OpenShift Dev Spaces is per-user.

Prerequisites

Procedure

  1. Set the pvcStrategy field in the CheCluster Custom Resource to per-user, per-workspace, or ephemeral:

    spec:
      devEnvironments:
        storage:
          pvc:
            pvcStrategy: 'per-user'

    where:

    pvcStrategy

    The available storage strategies are per-user, per-workspace, and ephemeral.

Verification

  • Verify the pvcStrategy value in the CheCluster Custom Resource:

    oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.spec.devEnvironments.storage.pvc.pvcStrategy}'

13.4. Configure storage sizes

Configure the persistent volume claim (PVC) size for the per-user or per-workspace storage strategy by setting the claimSize field in the CheCluster Custom Resource. Specify PVC sizes as a Kubernetes resource quantity.

Default persistent volume claim sizes:

  • per-user: 10Gi
  • per-workspace: 5Gi

Prerequisites

Procedure

  1. Set the appropriate claimSize field for the desired storage strategy in the CheCluster Custom Resource.

    spec:
      devEnvironments:
        storage:
          pvc:
            pvcStrategy: '<strategy_name>'
            perUserStrategyPvcConfig:
              claimSize: <resource_quantity>
            perWorkspaceStrategyPvcConfig:
              claimSize: <resource_quantity>

    where:

    <strategy_name>
    Select the storage strategy: per-user, per-workspace, or ephemeral. Note: the ephemeral storage strategy does not use persistent storage, therefore you cannot configure its storage size or other PVC-related attributes.
    perUserStrategyPvcConfig, perWorkspaceStrategyPvcConfig
    Specify a claim size on the next line, or omit the next line to use the default claim size. The specified claim size is only used when you select this storage strategy.
    <resource_quantity>

    The claim size must be specified as a Kubernetes resource quantity. The available quantity units include Ei, Pi, Ti, Gi, Mi, and Ki.

    Important

    Manually modifying a PVC on the cluster that was provisioned by OpenShift Dev Spaces is not officially supported and may result in unexpected consequences.

    If you want to resize a PVC that is in use by a workspace, you must restart the workspace for the PVC change to occur.
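The binary quantity suffixes correspond to powers of two. The following sketch, which is illustrative and not part of OpenShift Dev Spaces, shows how a claim size such as 10Gi maps to bytes:

```python
# Convert a Kubernetes binary resource quantity (Ki, Mi, Gi, Ti, Pi, Ei) to bytes.
UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40, "Pi": 2**50, "Ei": 2**60}

def to_bytes(quantity: str) -> int:
    for suffix, factor in UNITS.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain byte count, no suffix

print(to_bytes("10Gi"))  # 10737418240
```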

Verification

  • Start a workspace and verify that the PersistentVolumeClaim has the configured size:

    oc get pvc -n <user_namespace> -o jsonpath='{.items[*].spec.resources.requests.storage}'

13.5. Persistent user home

Red Hat OpenShift Dev Spaces provides a persistent home directory feature that preserves the /home/user directory across workspace restarts. User settings, shell history, and tooling configurations persist between sessions.

You can enable this feature in the CheCluster by setting spec.devEnvironments.persistUserHome.enabled to true.

For newly started workspaces, this feature creates a PVC mounted to the /home/user path of the tools container. In this documentation, a "tools container" refers to the first container in the devfile. By default, this container includes the project source code.

When the PVC is mounted for the first time, the persistent volume’s contents are empty and therefore must be populated with the /home/user directory content.

By default, the persistUserHome feature creates an init container for each new workspace pod named init-persistent-home. This init container is created with the tools container image. It runs a stow command to create symbolic links in the persistent volume to populate the /home/user directory.

Note

For files that cannot be symbolically linked to the /home/user directory, such as the .viminfo and .bashrc files, cp is used instead of stow.

The primary function of the stow command is to run:

stow -t /home/user/ -d /home/tooling/ --no-folding

The stow command creates symbolic links in /home/user for files and directories located in /home/tooling. This populates the persistent volume with symbolic links to the content in /home/tooling. As a result, the persistUserHome feature expects the tooling image to have its /home/user/ content within /home/tooling.

For example, the tools container image might contain files in the /home/tooling directory such as .config and .config-folder/another-file. In this case, stow creates symbolic links in the persistent volume as shown in the following diagram:

Figure 13.1. Tools container with persistUserHome enabled

Persistent user home example scenario
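The symbolic-link layout described above can be reproduced with plain symlinks. The following is a simulation of the stow result for illustration only; the actual init container runs stow itself:

```shell
# Simulate: stow -t /home/user/ -d /home/tooling/ --no-folding
# using temporary directories in place of /home/tooling and /home/user.
src="$(mktemp -d)/tooling"
dst="$(mktemp -d)/user"
mkdir -p "$src/.config-folder" "$dst/.config-folder"
echo 'key=value' > "$src/.config"
echo 'content'   > "$src/.config-folder/another-file"
# --no-folding recreates directories and links individual files
ln -s "$src/.config" "$dst/.config"
ln -s "$src/.config-folder/another-file" "$dst/.config-folder/another-file"
ls -lA "$dst"
```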

The init container writes the output of the stow command to /home/user/.stow.log and only runs stow the first time the persistent volume is mounted to the workspace.

Using the stow command to populate /home/user content in the persistent volume provides two main advantages:

  1. Creating symbolic links is faster and consumes less storage than copying the /home/user directory content into the persistent volume. In other words, the persistent volume contains symbolic links rather than the actual files.
  2. If the tools image is updated with newer versions of existing binaries, configs, and files, the init container does not need to stow the new versions. The existing symbolic links already point to the updated content in /home/tooling.
Note

If the tooling image is updated with additional binaries or files, they are not symbolically linked to the /home/user directory since the stow command does not run again. In this case, the user must delete the /home/user/.stow_completed file and restart the workspace to rerun stow.

13.5.1. persistUserHome tools image requirements

The persistUserHome feature depends on the tools image used for the workspace. By default, OpenShift Dev Spaces uses the Universal Developer Image (UDI) for sample workspaces, which supports persistUserHome out of the box.

If you are using a custom image, it must meet three requirements to support the persistUserHome feature.

  1. The tools image contains stow version 2.4.0 or later.
  2. The $HOME environment variable is set to /home/user.
  3. In the tools image, the directory intended to contain the /home/user content is /home/tooling.

Because the /home/user content must be in /home/tooling, the default UDI image adds the /home/user content to /home/tooling instead, and runs:

RUN stow -t /home/user/ -d /home/tooling/ --no-folding

in the Dockerfile so that files in /home/tooling are accessible from /home/user even when not using the persistUserHome feature.
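A custom tools image that meets these requirements might be built as follows. This is a hedged sketch; the base image and package manager are assumptions, not the actual UDI Dockerfile:

```dockerfile
# Hypothetical base image with a package manager; use your own base
FROM registry.access.redhat.com/ubi9/ubi

# Requirement 1: provide stow (version 2.4.0 or later); the package source
# depends on your base image and may require a custom repository
RUN dnf install -y stow

# Requirement 3: place the intended /home/user content under /home/tooling
COPY home/ /home/tooling/

# Requirement 2: HOME must point to /home/user
ENV HOME=/home/user

# Link /home/tooling content into /home/user so it is accessible
# even when persistUserHome is disabled
RUN mkdir -p /home/user && stow -t /home/user/ -d /home/tooling/ --no-folding
```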

Chapter 14. Configure dashboard

Customize the OpenShift Dev Spaces dashboard to control the getting started experience, available editors, and branding that users see when they log in.

14.1. Configure getting started samples

Configure the OpenShift Dev Spaces Dashboard to display custom samples that reflect your organization’s preferred languages, frameworks, and project templates for faster onboarding.

Prerequisites

Procedure

  1. Create a JSON file with the samples configuration. The file must contain an array of objects, where each object represents a sample.

    cat > my-samples.json <<EOF
    [
      {
        "displayName": "<display_name>",
        "description": "<description>",
        "tags": <tags>,
        "url": "<url>",
        "icon": {
          "base64data": "<base64data>",
          "mediatype": "<mediatype>"
        }
      }
    ]
    EOF

    where:

    displayName
    The display name of the sample.
    description
    The description of the sample.
    tags
    The JSON array of tags, for example, ["java", "spring"].
    url
    The URL to the repository containing the devfile.
    base64data
    The base64-encoded data of the icon.
    mediatype
    The media type of the icon. For example, image/png.
  2. Create a ConfigMap with the samples configuration:

    oc create configmap getting-started-samples --from-file=my-samples.json -n openshift-devspaces
  3. Add the required labels to the ConfigMap:

    oc label configmap getting-started-samples app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=getting-started-samples -n openshift-devspaces
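A populated samples file might look like the following; the display name, tags, and repository URL are illustrative:

```json
[
  {
    "displayName": "Java with Spring Boot",
    "description": "Spring Boot REST service starter",
    "tags": ["java", "spring"],
    "url": "https://github.com/my-org/spring-sample",
    "icon": {
      "base64data": "<base64data>",
      "mediatype": "image/png"
    }
  }
]
```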

Verification

  • Refresh the OpenShift Dev Spaces Dashboard page and verify that the new samples are displayed on the Create Workspace page.

14.2. Configure editor definitions

Configure custom editor definitions for OpenShift Dev Spaces by creating a devfile with the editor configuration and storing it in a ConfigMap to offer additional IDE options to your users.

Prerequisites

Procedure

  1. Create the my-editor-definition-devfile.yaml YAML file with the editor definition configuration. Provide actual values for publisher and version under metadata.attributes because these construct the editor ID in the format publisher/name/version.

    For example:

    # Version of the devfile schema
    schemaVersion: 2.2.2
    # Meta information of the editor
    metadata:
      # (MANDATORY) The editor name
      # Must consist of lower case alphanumeric characters, '-' or '.'
      name: editor-name
      displayName: Display Name
      description: Run Editor Foo on top of OpenShift Dev Spaces
      # (OPTIONAL) Array of tags of the current editor. The Tech-Preview tag means the option is considered experimental and is not recommended for production environments. While it can include new features and improvements, it may still contain bugs or undergo significant changes before reaching a stable version.
      tags:
        - Tech-Preview
      # Additional attributes
      attributes:
        title: This is my editor
        # (MANDATORY) The supported architectures
        arch:
          - x86_64
          - arm64
        # (MANDATORY) The publisher name
        publisher: publisher
        # (MANDATORY) The editor version
        version: version
        repository: https://github.com/editor/repository/
        firstPublicationDate: '2024-01-01'
        iconMediatype: image/svg+xml
        iconData: |
          <icon-content>
    # List of editor components
    components:
      # Name of the component
      - name: che-code-injector
        # Configuration of devworkspace-related container
        container:
          # Image of the container
          image: 'quay.io/che-incubator/che-code:insiders'
          # The command to run in the dockerimage component instead of the default one provided in the image
          command:
            - /entrypoint-init-container.sh
          # (OPTIONAL) List of volumes mounts that should be mounted in this container
          volumeMounts:
              # The name of the mount
            - name: checode
              # The path of the mount
              path: /checode
          # (OPTIONAL) The memory limit of the container
          memoryLimit: 256Mi
          # (OPTIONAL) The memory request of the container
          memoryRequest: 32Mi
          # (OPTIONAL) The CPU limit of the container
          cpuLimit: 500m
          # (OPTIONAL) The CPU request of the container
          cpuRequest: 30m
      # Name of the component
      - name: che-code-runtime-description
        # (OPTIONAL) Map of implementation-dependant free-form YAML attributes
        attributes:
          # The component within the architecture
          app.kubernetes.io/component: che-code-runtime
          # The name of a higher level application this one is part of
          app.kubernetes.io/part-of: che-code.eclipse.org
          # Defines a container component as a "container contribution". If a flattened DevWorkspace has a container component with the merge-contribution attribute, then any container contributions are merged into that container component
          controller.devfile.io/container-contribution: true
        container:
          # Can be a placeholder image because the component is expected to be injected into workspace dev component
          image: quay.io/devfile/universal-developer-image:latest
          # (OPTIONAL) List of volume mounts that should be mounted in this container
          volumeMounts:
              # The name of the mount
            - name: checode
              # (OPTIONAL) The path in the component container where the volume should be mounted. If no path is defined, the default path is /<name>
              path: /checode
          # (OPTIONAL) The memory limit of the container
          memoryLimit: 1024Mi
          # (OPTIONAL) The memory request of the container
          memoryRequest: 256Mi
          # (OPTIONAL) The CPU limit of the container
          cpuLimit: 500m
          # (OPTIONAL) The CPU request of the container
          cpuRequest: 30m
          # (OPTIONAL) Environment variables used in this container
          env:
            - name: ENV_NAME
              value: value
          # Component endpoints
          endpoints:
            # Name of the editor
            - name: che-code
              # (OPTIONAL) Map of implementation-dependant string-based free-form attributes
              attributes:
                # Type of the endpoint. You can only set its value to main, indicating that the endpoint should be used as the mainUrl in the workspace status (i.e. it should be the URL used to access the editor in this context)
                type: main
                # An attribute that instructs the service to automatically redirect the unauthenticated requests for current user authentication. Setting this attribute to true has security consequences because it makes Cross-site request forgery (CSRF) attacks possible. The default value of the attribute is false.
                cookiesAuthEnabled: true
                # Defines an endpoint as "discoverable", meaning that a service should be created using the endpoint name (i.e. instead of generating a service name for all endpoints, this endpoint should be statically accessible)
                discoverable: false
                # Secures the endpoint with authorization on OpenShift so that not everyone on the cluster can access it; this attribute enables authentication
                urlRewriteSupported: true
              # Port number to be used within the container component
              targetPort: 3100
              # (OPTIONAL) Describes how the endpoint should be exposed on the network (public, internal, none)
              exposure: public
              # (OPTIONAL) Describes whether the endpoint should be secured and protected by some authentication process
              secure: true
              # (OPTIONAL) Describes the application and transport protocols of the traffic that will go through this endpoint
              protocol: https
        # Mandatory name that allows referencing the component from other elements
      - name: checode
        # (OPTIONAL) Allows specifying the definition of a volume shared by several other components. Ephemeral volumes are not stored persistently across restarts. Defaults to false
        volume: {ephemeral: true}
    # (OPTIONAL) Bindings of commands to events. Each command is referred-to by its name
    events:
      # IDs of commands that should be executed before the devworkspace start. These commands would typically be executed in an init container
      preStart:
        - init-container-command
      # IDs of commands that should be executed after the devworkspace has completely started. In the case of Che-Code, these commands should be executed after all plugins and extensions have started, including project cloning. This means that those commands are not triggered until the user opens the IDE within the browser
      postStart:
        - init-che-code-command
    # (OPTIONAL) Predefined, ready-to-use, devworkspace-related commands
    commands:
        # Mandatory identifier that allows referencing this command
      - id: init-container-command
        apply:
          # Describes the component for the apply command
          component: che-code-injector
        # Mandatory identifier that allows referencing this command
      - id: init-che-code-command
        # CLI Command executed in an existing component container
        exec:
          # Describes component for the exec command
          component: che-code-runtime-description
          # The actual command-line string
          commandLine: 'nohup /checode/entrypoint-volume.sh > /checode/entrypoint-logs.txt
            2>&1 &'

    where:

    <icon-content>
    The SVG icon data for the editor, displayed in the OpenShift Dev Spaces Dashboard editor selector.
  2. Create a ConfigMap with the editor definition content:

    oc create configmap my-editor-definition --from-file=my-editor-definition-devfile.yaml -n openshift-devspaces
  3. Add the required labels to the ConfigMap:

    oc label configmap my-editor-definition app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=editor-definition -n openshift-devspaces

Verification

  • Refresh the OpenShift Dev Spaces Dashboard page and verify that the new editor is available.
  • Verify the editor definition through the OpenShift Dev Spaces Dashboard API:

    https://<openshift_dev_spaces_fqdn>/dashboard/api/editors

    To retrieve a specific editor definition, use the publisher, name, and version values:

    https://<openshift_dev_spaces_fqdn>/dashboard/api/editors/devfile?che-editor=publisher/editor-name/version

    When retrieving the editor definition from within the OpenShift cluster, access the OpenShift Dev Spaces Dashboard API through the dashboard service: http://devspaces-dashboard.openshift-devspaces.svc.cluster.local:8080/dashboard/api/editors

14.3. Show deprecated editors

Show deprecated OpenShift Dev Spaces editors on the Dashboard to support users who need them during migration to a supported editor. By default, the Dashboard UI hides them.

Prerequisites

Procedure

  1. Determine the IDs of the deprecated editors. An editor ID has the following format: publisher/name/version.

    oc exec deploy/devspaces-dashboard -n openshift-devspaces  \
        -- curl -s http://localhost:8080/dashboard/api/editors | jq -r '[.[] | select(.metadata.tags != null) | select(.metadata.tags[] | contains("Deprecate")) | "\(.metadata.attributes.publisher)/\(.metadata.name)/\(.metadata.attributes.version)"]'
  2. Configure the CheCluster Custom Resource. See Section 5.3, “Use the CLI to configure the CheCluster Custom Resource”.

    spec:
      components:
        dashboard:
          deployment:
            containers:
            - env:
              - name: CHE_SHOW_DEPRECATED_EDITORS
                value: 'true'

14.4. Configure default editor

Configure the default editor that OpenShift Dev Spaces uses when creating new workspaces to ensure a consistent development experience. The default editor is specified by its plugin ID in the publisher/name/version format.

Prerequisites

Procedure

  1. Determine the IDs of the available editors. An editor ID has the following format: publisher/name/version.

    oc exec deploy/devspaces-dashboard -n openshift-devspaces  \
        -- curl -s http://localhost:8080/dashboard/api/editors | jq -r '[.[] | "\(.metadata.attributes.publisher)/\(.metadata.name)/\(.metadata.attributes.version)"]'
  2. Configure the defaultEditor:

    oc patch checluster/devspaces \
        --namespace openshift-devspaces \
        --type='merge' \
        -p '{"spec":{"devEnvironments":{"defaultEditor": "<default_editor>"}}}'

    where:

    <default_editor>
    The default editor specified as a plugin ID in publisher/name/version format or as a URI.

Verification

  • Create a new workspace from the OpenShift Dev Spaces Dashboard and verify that the configured default editor opens.

14.5. Conceal editors in the Dashboard

Conceal OpenShift Dev Spaces editors to hide selected editors from the Dashboard UI, for example, hide IntelliJ IDEA Ultimate so that only Visual Studio Code - Open Source is visible.

Prerequisites

Procedure

  1. Determine the IDs of the available editors. An editor ID has the following format: publisher/name/version.

    oc exec deploy/devspaces-dashboard -n openshift-devspaces  \
        -- curl -s http://localhost:8080/dashboard/api/editors | jq -r '[.[] | "\(.metadata.attributes.publisher)/\(.metadata.name)/\(.metadata.attributes.version)"]'
  2. Configure the CheCluster Custom Resource. See Section 5.3, “Use the CLI to configure the CheCluster Custom Resource”.

    spec:
      components:
        dashboard:
          deployment:
            containers:
            - env:
              - name: CHE_HIDE_EDITORS_BY_ID
                value: 'che-incubator/che-webstorm-server/latest, che-incubator/che-webstorm-server/next'

    where:

    value
    A string containing comma-separated IDs of editors to hide.

Verification

  • In the OpenShift Dev Spaces Dashboard, go to Create Workspace and verify that the concealed editors are no longer visible.

14.6. Configure editor download URLs

Configure custom download URLs for editors in air-gapped OpenShift Dev Spaces environments where editors cannot be retrieved from the public internet. This option applies only to JetBrains editors.

Prerequisites

Procedure

  1. Determine the IDs of the available editors. An editor ID has the following format: publisher/name/version.

    oc exec deploy/devspaces-dashboard -n openshift-devspaces  \
        -- curl -s http://localhost:8080/dashboard/api/editors | jq -r '[.[] | "\(.metadata.attributes.publisher)/\(.metadata.name)/\(.metadata.attributes.version)"]'
  2. Configure the download URLs for editors:

    oc patch checluster/devspaces \
      --namespace openshift-devspaces \
      --type='merge' \
      -p '{
        "spec": {
          "devEnvironments": {
            "editorsDownloadUrls": [
              { "editor": "publisher1/editor-name1/version1", "url": "https://example.com/editor1.tar.gz" },
              { "editor": "publisher2/editor-name2/version2", "url": "https://example.com/editor2.tar.gz" }
            ]
          }
        }
      }'

    where:

    editor
    The editor ID in the format publisher/name/version. Determine the IDs by running the command in step 1.
    url
    The URL of the editor archive to download.

Verification

  • Verify that the editor download URLs appear in the CheCluster Custom Resource specification.

Chapter 15. Manage identities and authorizations

Manage identities and authorizations for Red Hat OpenShift Dev Spaces, including cluster roles, advanced authorization policies, and GDPR-compliant user data removal.

15.1. Configure cluster roles for OpenShift Dev Spaces users

Grant OpenShift Dev Spaces users additional cluster permissions by adding cluster roles, enabling them to perform actions beyond the default workspace operations.

Prerequisites

Procedure

  1. Define the user roles name:

    $ USER_ROLES=<name>

    where:

    name
    Unique resource name.
  2. Determine the namespace where the OpenShift Dev Spaces Operator is deployed:

    $ OPERATOR_NAMESPACE=$(oc get pods -l app.kubernetes.io/component=devspaces-operator -o jsonpath='{.items[0].metadata.namespace}' --all-namespaces)
  3. Create needed roles:

    $ kubectl apply -f - <<EOF
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: ${USER_ROLES}
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
    rules:
      - verbs:
          - <verbs>
        apiGroups:
          - <apiGroups>
        resources:
          - <resources>
    EOF

    where:

    verbs
    List all Verbs that apply to all ResourceKinds and AttributeRestrictions contained in this rule. You can use * to represent all verbs.
    apiGroups
    Name the APIGroups that contain the resources.
    resources
    List all resources that this rule applies to. You can use * to represent all resources.
  4. Delegate the roles to the OpenShift Dev Spaces Operator:

    $ kubectl apply -f - <<EOF
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: ${USER_ROLES}
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
    subjects:
      - kind: ServiceAccount
        name: devspaces-operator
        namespace: ${OPERATOR_NAMESPACE}
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: ${USER_ROLES}
    EOF
  5. Configure the OpenShift Dev Spaces Operator to delegate the roles to the che service account:

    $ kubectl patch checluster devspaces \
      --patch '{"spec": {"components": {"cheServer": {"clusterRoles": ["'${USER_ROLES}'"]}}}}' \
      --type=merge -n openshift-devspaces
  6. Configure the OpenShift Dev Spaces server to delegate the roles to a user:

    $ kubectl patch checluster devspaces \
      --patch '{"spec": {"devEnvironments": {"user": {"clusterRoles": ["'${USER_ROLES}'"]}}}}' \
      --type=merge -n openshift-devspaces
  7. Wait for the rollout of the OpenShift Dev Spaces server components to complete.
  8. Ask the user to log out and log in to have the new roles applied.
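For example, to let users read ConfigMaps in their projects, the ClusterRole in step 3 could be filled in as follows (the verbs, API group, and resources shown are illustrative):

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-configmaps
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
rules:
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - ''
    resources:
      - configmaps
```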

Verification

  • Verify that the ClusterRole exists:

    $ kubectl get clusterrole ${USER_ROLES}

15.2. Configure advanced authorization

Determine which users and groups are allowed to access OpenShift Dev Spaces to enforce access control policies and meet organizational compliance requirements.

Prerequisites

Procedure

  1. Configure the CheCluster Custom Resource. See Section 5.3, “Use the CLI to configure the CheCluster Custom Resource”.

    spec:
      networking:
        auth:
          advancedAuthorization:
            allowUsers:
              - <allow_users>
            allowGroups:
              - <allow_groups>
            denyUsers:
              - <deny_users>
            denyGroups:
              - <deny_groups>

    where:

    allowUsers
    List of users allowed to access Red Hat OpenShift Dev Spaces.
    allowGroups
    List of groups of users allowed to access Red Hat OpenShift Dev Spaces (for OpenShift Container Platform only).
    denyUsers
    List of users denied access to Red Hat OpenShift Dev Spaces.
    denyGroups

    List of groups of users denied access to Red Hat OpenShift Dev Spaces (for OpenShift Container Platform only).

    If a user is on both allow and deny lists, access is denied. If allowUsers and allowGroups are empty, all users are allowed except the ones on the deny lists. If denyUsers and denyGroups are empty, only the users from allow lists are allowed. If both allow and deny lists are empty, all users are allowed.

  2. Wait for the rollout of the OpenShift Dev Spaces server components to complete.
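The allow and deny precedence rules described above can be modeled as a small function. This is an illustrative sketch, not the actual OpenShift Dev Spaces implementation:

```python
def is_allowed(user, groups, allow_users, allow_groups, deny_users, deny_groups):
    # Deny lists take precedence: a user on both allow and deny lists is denied.
    if user in deny_users or any(g in deny_groups for g in groups):
        return False
    # Empty allow lists mean everyone not explicitly denied is allowed.
    if not allow_users and not allow_groups:
        return True
    return user in allow_users or any(g in allow_groups for g in groups)

print(is_allowed("alice", ["dev"], [], [], ["alice"], []))  # False: deny wins
```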

Verification

  • Log in to the OpenShift Dev Spaces dashboard as a user on the allowUsers list and verify access to the dashboard.
  • Log in as a user on the denyUsers list and verify that OpenShift Dev Spaces returns a 403 Forbidden response.

15.3. Remove user data in compliance with the GDPR

Remove a user’s data on OpenShift Container Platform in compliance with the General Data Protection Regulation (GDPR). The process for other Kubernetes infrastructures might vary.

Warning

Removing user data as follows is irreversible. All removed data is deleted and unrecoverable.

Prerequisites

Procedure

  1. List all the users in the OpenShift cluster using the following command:

    $ oc get users
  2. Delete the user entry:

    Important

    If the user has any associated resources (such as projects, roles, or service accounts), you must delete those first before deleting the user.

    $ oc delete user <username>

Chapter 16. Configure OAuth for Git providers

Configure OAuth to allow OpenShift Dev Spaces users to interact with remote Git repositories without re-entering credentials.

OpenShift Dev Spaces supports GitHub, GitLab, Bitbucket Server (OAuth 2.0 and OAuth 1.0), Bitbucket Cloud, and Microsoft Azure DevOps Services. For each provider, you create an OAuth application, then apply the corresponding secret to your OpenShift Dev Spaces instance.

16.1. Set up the GitHub OAuth App

To enable users to work with a remote Git repository that is hosted on GitHub, register the GitHub OAuth App (OAuth 2.0).

Prerequisites

  • You are logged in to GitHub.

Procedure

  1. Go to the GitHub OAuth application registration page.
  2. Enter the following values:

    1. Application name: <application name>
    2. Homepage URL: https://<openshift_dev_spaces_fqdn>/
    3. Authorization callback URL: https://<openshift_dev_spaces_fqdn>/api/oauth/callback
  3. Click Register application.
  4. Click Generate new client secret.
  5. Copy and save the GitHub OAuth Client ID for use when applying the GitHub OAuth App Secret.
  6. Copy and save the GitHub OAuth Client Secret for use when applying the GitHub OAuth App Secret.

16.2. Apply the GitHub OAuth App Secret

Prepare and apply the GitHub OAuth App Secret so that OpenShift Dev Spaces users can access remote Git repositories hosted on GitHub without re-entering credentials.

Prerequisites

Procedure

  1. Prepare the Secret:

    kind: Secret
    apiVersion: v1
    metadata:
      name: github-oauth-config
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: oauth-scm-configuration
      annotations:
        che.eclipse.org/oauth-scm-server: github
        che.eclipse.org/scm-server-endpoint: <github_server_url>
        che.eclipse.org/scm-github-disable-subdomain-isolation: 'false'
    type: Opaque
    stringData:
      id: <GitHub_OAuth_Client_ID>
      secret: <GitHub_OAuth_Client_Secret>

    where:

    namespace
    The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
    che.eclipse.org/scm-server-endpoint
    This depends on the GitHub product your organization is using. When hosting repositories on GitHub.com or GitHub Enterprise Cloud, omit this line or enter the default https://github.com. When hosting repositories on GitHub Enterprise Server, enter the GitHub Enterprise Server URL.
    che.eclipse.org/scm-github-disable-subdomain-isolation
    If you are using GitHub Enterprise Server with the subdomain isolation option disabled, you must set the annotation to true. Otherwise, you can either omit the annotation or set it to false.
    id
    The GitHub OAuth Client ID.
    secret
    The GitHub OAuth Client Secret.
  2. Apply the Secret:

    $ oc apply -f - <<EOF
    <Secret_prepared_in_the_previous_step>
    EOF
  3. Optional: To configure OAuth 2.0 for another GitHub provider, repeat the previous steps and create a second GitHub OAuth Secret with a different name.

Verification

  • Verify that the output displays secret/github-oauth-config created.
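As an alternative to pasting the manifest into a heredoc, you can render it from shell variables before applying it with oc apply. The following sketch uses placeholder credential values; substitute the Client ID and Client Secret you saved when registering the OAuth App.

```shell
# Render the GitHub OAuth Secret manifest from shell variables.
# GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET are placeholders, not real credentials.
GITHUB_CLIENT_ID='Iv1.exampleclientid'
GITHUB_CLIENT_SECRET='example-client-secret'
cat > github-oauth-config.yaml <<EOF
kind: Secret
apiVersion: v1
metadata:
  name: github-oauth-config
  namespace: openshift-devspaces
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: oauth-scm-configuration
  annotations:
    che.eclipse.org/oauth-scm-server: github
type: Opaque
stringData:
  id: ${GITHUB_CLIENT_ID}
  secret: ${GITHUB_CLIENT_SECRET}
EOF
# Apply it with: oc apply -f github-oauth-config.yaml
```

The same pattern applies to the GitLab, Bitbucket, and Microsoft Azure DevOps Services Secrets in the following sections.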

16.3. Set up the GitLab authorized application

To enable users to work with a remote Git repository that is hosted using a GitLab instance, create the GitLab authorized application (OAuth 2.0).

Prerequisites

  • You are logged in to GitLab.

Procedure

  1. Click your avatar and go to Edit profile > Applications.
  2. Enter OpenShift Dev Spaces as the Name.
  3. Enter https://<openshift_dev_spaces_fqdn>/api/oauth/callback as the Redirect URI.
  4. Check the Confidential and Expire access tokens checkboxes.
  5. Under Scopes, check the api, write_repository, and openid checkboxes.
  6. Click Save application.
  7. Copy and save the GitLab Application ID for use when applying the GitLab-authorized application Secret.
  8. Copy and save the GitLab Client Secret for use when applying the GitLab-authorized application Secret.

16.4. Apply the GitLab-authorized application Secret

Prepare and apply the GitLab-authorized application Secret so that OpenShift Dev Spaces users can access remote Git repositories hosted on GitLab without re-entering credentials.

Prerequisites

Procedure

  1. Prepare the Secret:

    kind: Secret
    apiVersion: v1
    metadata:
      name: gitlab-oauth-config
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: oauth-scm-configuration
      annotations:
        che.eclipse.org/oauth-scm-server: gitlab
        che.eclipse.org/scm-server-endpoint: <gitlab_server_url>
    type: Opaque
    stringData:
      id: <GitLab_Application_ID>
      secret: <GitLab_Client_Secret>

    where:

    namespace
    The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
    che.eclipse.org/scm-server-endpoint
    The GitLab server URL. Use https://gitlab.com for the SaaS version.
    id
    The GitLab Application ID.
    secret
    The GitLab Client Secret.
  2. Apply the Secret:

    $ oc apply -f - <<EOF
    <Secret_prepared_in_the_previous_step>
    EOF
  3. Optional: To configure OAuth 2.0 for another GitLab provider, repeat the previous steps and create a second GitLab OAuth Secret with a different name.

Verification

  • Verify that the output displays secret/gitlab-oauth-config created.

16.7. Set up an OAuth consumer in the Bitbucket Cloud

To enable users to work with a remote Git repository that is hosted in the Bitbucket Cloud, create an OAuth consumer (OAuth 2.0) in the Bitbucket Cloud.

Prerequisites

  • You are logged in to the Bitbucket Cloud.

Procedure

  1. Click your avatar and go to the All workspaces page.
  2. Select a workspace and click it.
  3. Go to Settings > OAuth consumers > Add consumer.
  4. Enter OpenShift Dev Spaces as the Name.
  5. Enter https://<openshift_dev_spaces_fqdn>/api/oauth/callback as the Callback URL.
  6. Under Permissions, check all of the Account and Repositories checkboxes, and click Save.
  7. Expand the added consumer and then copy and save the Key value for use when applying the Bitbucket OAuth consumer Secret.
  8. Copy and save the Secret value for use when applying the Bitbucket OAuth consumer Secret.

16.8. Apply an OAuth consumer Secret for the Bitbucket Cloud

Prepare and apply the OAuth consumer Secret for Bitbucket Cloud so that OpenShift Dev Spaces users can access remote Git repositories hosted on Bitbucket Cloud without re-entering credentials.

Prerequisites

Procedure

  1. Prepare the Secret:

    kind: Secret
    apiVersion: v1
    metadata:
      name: bitbucket-oauth-config
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: oauth-scm-configuration
      annotations:
        che.eclipse.org/oauth-scm-server: bitbucket
    type: Opaque
    stringData:
      id: <Bitbucket_Oauth_Consumer_Key>
      secret: <Bitbucket_Oauth_Consumer_Secret>

    where:

    namespace
    The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
    id
    The Bitbucket OAuth consumer Key.
    secret
    The Bitbucket OAuth consumer Secret.
  2. Apply the Secret:

    $ oc apply -f - <<EOF
    <Secret_prepared_in_the_previous_step>
    EOF

Verification

  • Verify that the output displays secret/bitbucket-oauth-config created.

16.11. Set up the Microsoft Azure DevOps Services OAuth App

To enable users to work with a remote Git repository that is hosted on Microsoft Azure Repos, register the Microsoft Azure DevOps Services OAuth App (OAuth 2.0).

Procedure

  1. Visit the Microsoft Azure DevOps Services app registration page.
  2. Enter the following values:

    1. Company name: OpenShift Dev Spaces
    2. Application name: OpenShift Dev Spaces
    3. Application website: https://<openshift_dev_spaces_fqdn>/
    4. Authorization callback URL: https://<openshift_dev_spaces_fqdn>/api/oauth/callback
  3. In Select Authorized scopes, select Code (read and write).
  4. Click Create application.
  5. Copy and save the App ID for use when applying the Microsoft Azure DevOps Services OAuth App Secret.
  6. Click Show to display the Client Secret.
  7. Copy and save the Client Secret for use when applying the Microsoft Azure DevOps Services OAuth App Secret.

16.12. Apply the Microsoft Azure DevOps Services OAuth App Secret

Prepare and apply the Microsoft Azure DevOps Services OAuth App Secret so that OpenShift Dev Spaces users can access remote Git repositories hosted on Azure Repos without re-entering credentials.

Prerequisites

Procedure

  1. Prepare the Secret:

    kind: Secret
    apiVersion: v1
    metadata:
      name: azure-devops-oauth-config
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/part-of: che.eclipse.org
        app.kubernetes.io/component: oauth-scm-configuration
      annotations:
        che.eclipse.org/oauth-scm-server: azure-devops
    type: Opaque
    stringData:
      id: <Microsoft_Azure_DevOps_Services_OAuth_App_ID>
      secret: <Microsoft_Azure_DevOps_Services_OAuth_Client_Secret>

    where:

    namespace
    The OpenShift Dev Spaces namespace. The default is openshift-devspaces.
    id
    The Microsoft Azure DevOps Services OAuth App ID.
    secret
    The Microsoft Azure DevOps Services OAuth Client Secret.
  2. Apply the Secret:

    $ oc apply -f - <<EOF
    <Secret_prepared_in_the_previous_step>
    EOF

Verification

  • Verify that the output displays secret/azure-devops-oauth-config created.
  • Verify that the rollout of the OpenShift Dev Spaces server components is complete:

    $ oc rollout status deployment/devspaces -n openshift-devspaces

16.13. Force a refresh of the personal access token

Enable an experimental feature that forces a refresh of the personal access token on workspace startup in Red Hat OpenShift Dev Spaces.

Important

This is an experimental feature.

Prerequisites

Procedure

  1. Modify the CheCluster Custom Resource to enable forced token refresh:

    spec:
      components:
        cheServer:
          extraProperties:
            CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN: "true"

Verification

  • Start a new workspace and verify that the personal access token is refreshed by checking the OpenShift Dev Spaces server logs.

Chapter 17. Configure fuse-overlayfs

Configure fuse-overlayfs for building container images within OpenShift Dev Spaces workspaces.

17.1. fuse-overlayfs configuration

By default, Podman and Buildah in the Universal Developer Image (UDI) use the vfs storage driver, which does not provide copy-on-write support. For more efficient container image management, use the fuse-overlayfs storage driver.

To enable fuse-overlayfs for workspaces for OpenShift versions older than 4.15, the administrator must first enable /dev/fuse access on the cluster.

Note

This is not necessary for OpenShift versions 4.15 and later, since the /dev/fuse device is available by default.

After enabling /dev/fuse access, fuse-overlayfs can be enabled in two ways:

  1. For all user workspaces within the cluster.
  2. For workspaces belonging to certain users.

17.2. Enable access to /dev/fuse for OpenShift versions older than 4.15

Make /dev/fuse accessible to workspace containers on OpenShift versions older than 4.15, so that workspaces can use the fuse-overlayfs storage driver for Podman and Buildah.

Note

For OpenShift 4.15 and later, /dev/fuse is available by default and no additional configuration is needed. See Release Notes.

Warning

Creating MachineConfig resources on an OpenShift cluster is a potentially dangerous task, as you are making advanced, system-level changes to the cluster.

View the MachineConfig documentation for more details and possible risks.

Prerequisites

Procedure

  1. Set the environment variable based on the type of your OpenShift cluster: a single node cluster, or a multi node cluster with separate control plane and worker nodes.

    • For a single node cluster, set:

      $ NODE_ROLE=master
    • For a multi node cluster, set:

      $ NODE_ROLE=worker
  2. Set the environment variable for the OpenShift Butane config version. This variable is the major and minor version of the OpenShift cluster. For example, 4.12.0, 4.13.0, or 4.14.0.

    $ VERSION=4.12.0
  3. Create a MachineConfig resource that creates a drop-in CRI-O configuration file named 99-podman-fuse in the NODE_ROLE nodes. This configuration file makes access to the /dev/fuse device possible for certain pods.

    cat << EOF | butane | oc apply -f -
    variant: openshift
    version: ${VERSION}
    metadata:
      labels:
        machineconfiguration.openshift.io/role: ${NODE_ROLE}
      name: 99-podman-dev-fuse-${NODE_ROLE}
    storage:
      files:
      - path: /etc/crio/crio.conf.d/99-podman-fuse
        mode: 0644
        overwrite: true
        contents:
          inline: |
            [crio.runtime.workloads.podman-fuse]
            activation_annotation = "io.openshift.podman-fuse"
            allowed_annotations = [
              "io.kubernetes.cri-o.Devices"
            ]
            [crio.runtime]
            allowed_devices = ["/dev/fuse"]
    EOF

    where:

    /etc/crio/crio.conf.d/99-podman-fuse
    The absolute file path to the new drop-in configuration file for CRI-O.
    contents
    The content of the new drop-in configuration file.
    [crio.runtime.workloads.podman-fuse]
    Define a podman-fuse workload.
    activation_annotation
    The pod annotation that activates the podman-fuse workload settings.
    allowed_annotations
    List of annotations the podman-fuse workload is allowed to process.
    allowed_devices
    List of devices on the host that a user can specify with the io.kubernetes.cri-o.Devices annotation.
  4. After applying the MachineConfig resource, scheduling is temporarily disabled for each node with the worker role as changes are applied. View the nodes' statuses.

    $ oc get nodes

    Example output:

    NAME                           STATUS                     ROLES    AGE   VERSION
    ip-10-0-136-161.ec2.internal   Ready                      worker   28m   v1.27.9
    ip-10-0-136-243.ec2.internal   Ready                      master   34m   v1.27.9
    ip-10-0-141-105.ec2.internal   Ready,SchedulingDisabled   worker   28m   v1.27.9
    ip-10-0-142-249.ec2.internal   Ready                      master   34m   v1.27.9
    ip-10-0-153-11.ec2.internal    Ready                      worker   28m   v1.27.9
    ip-10-0-153-150.ec2.internal   Ready                      master   34m   v1.27.9
  5. After all nodes with the worker role have a status Ready, /dev/fuse is available to any pod with the following annotations.

    io.openshift.podman-fuse: ''
    io.kubernetes.cri-o.Devices: /dev/fuse
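    For illustration, a minimal Pod manifest carrying both annotations might look like the following sketch. The pod name and image are hypothetical; when you configure OpenShift Dev Spaces as described in Section 17.3, the annotations are set on workspace pods for you.

```yaml
# Hypothetical pod demonstrating the annotations that expose /dev/fuse.
apiVersion: v1
kind: Pod
metadata:
  name: podman-fuse-example
  annotations:
    io.openshift.podman-fuse: ''
    io.kubernetes.cri-o.Devices: /dev/fuse
spec:
  containers:
    - name: workspace
      image: registry.example.com/udi:latest
```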

Verification

  1. Get the name of a node with a worker role:

    $ oc get nodes
  2. Open an oc debug session to a worker node.

    $ oc debug node/<nodename>
  3. Verify that a new CRI-O config file named 99-podman-fuse exists.

    sh-4.4# stat /host/etc/crio/crio.conf.d/99-podman-fuse

17.3. Enable fuse-overlayfs for all workspaces

Enable fuse-overlayfs for all workspaces to use the overlay storage driver.

Prerequisites

Procedure

  1. Set the necessary annotation in the spec.devEnvironments.workspacesPodAnnotations field of the CheCluster Custom Resource.

    kind: CheCluster
    apiVersion: org.eclipse.che/v2
    spec:
      devEnvironments:
        workspacesPodAnnotations:
          io.kubernetes.cri-o.Devices: /dev/fuse
    Note

    For OpenShift versions before 4.15, the io.openshift.podman-fuse: "" annotation is also required.

    Note

    The Universal Developer Image (UDI) includes the following logic in its entrypoint script to detect fuse-overlayfs and set the storage driver accordingly. If you use a custom image, add equivalent logic to the image’s entrypoint.

    if [ ! -d "${HOME}/.config/containers" ]; then
      mkdir -p ${HOME}/.config/containers
      if [ -c "/dev/fuse" ] && [ -f "/usr/bin/fuse-overlayfs" ]; then
        (echo '[storage]';echo 'driver = "overlay"';echo '[storage.options.overlay]';echo 'mount_program = "/usr/bin/fuse-overlayfs"') > ${HOME}/.config/containers/storage.conf
      else
        (echo '[storage]';echo 'driver = "vfs"') > "${HOME}"/.config/containers/storage.conf
      fi
    fi
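To preview which driver that logic selects on a given host, you can dry-run the same checks against a scratch HOME directory. This is a sketch for experimentation only; the real entrypoint writes to the workspace user’s home directory.

```shell
# Dry-run of the UDI entrypoint storage-driver selection in a throwaway HOME.
# On hosts without /dev/fuse or fuse-overlayfs, it falls back to vfs.
SCRATCH_HOME=$(mktemp -d)
mkdir -p "${SCRATCH_HOME}/.config/containers"
if [ -c /dev/fuse ] && [ -f /usr/bin/fuse-overlayfs ]; then
  printf '[storage]\ndriver = "overlay"\n[storage.options.overlay]\nmount_program = "/usr/bin/fuse-overlayfs"\n' \
    > "${SCRATCH_HOME}/.config/containers/storage.conf"
else
  printf '[storage]\ndriver = "vfs"\n' > "${SCRATCH_HOME}/.config/containers/storage.conf"
fi
# Show the selected driver.
grep '^driver' "${SCRATCH_HOME}/.config/containers/storage.conf"
```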

Verification

  1. Start a workspace and verify that the storage driver is overlay.

    $ podman info | grep overlay

    Example output:

    graphDriverName: overlay
      overlay.mount_program:
        Executable: /usr/bin/fuse-overlayfs
        Package: fuse-overlayfs-1.12-1.module+el8.9.0+20326+387084d0.x86_64
          fuse-overlayfs: version 1.12
      Backing Filesystem: overlayfs
    Note

    The following error might occur for existing workspaces:

    ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "vfs" from database - delete libpod local files ("/home/user/.local/share/containers/storage") to resolve.  May prevent use of images created by other tools

    In this case, delete the libpod local files shown in the error message.

Chapter 18. Back up OpenShift Dev Spaces workspaces

Back up OpenShift Dev Spaces workspace data to an OCI-compatible registry on a recurring schedule.

The Dev Workspace backup controller creates periodic snapshots of stopped workspace PVCs and stores them as tar.gz archives in a target registry. Supported registries include the OpenShift Container Platform integrated registry and Quay.io. Configure the backup schedule, target registry, and authentication by editing the DevWorkspaceOperatorConfig resource.

The backoffLimit field sets the number of retries before marking the backup job as failed. The default value is 1.

Note

By default, the Dev Workspace backup job is disabled.

18.1. Configure backup with the integrated OpenShift registry

Configure the Dev Workspace backup job to use the integrated OpenShift Container Platform container registry. This option requires no additional authentication configuration.

Prerequisites

Procedure

  1. Configure the DevWorkspaceOperatorConfig resource to enable the backup job:

    apiVersion: controller.devfile.io/v1alpha1
    kind: DevWorkspaceOperatorConfig
    metadata:
      name: devworkspace-operator-config
      namespace: openshift-operators
    config:
      workspace:
        backupCronJob:
          enable: true
          registry:
            path: <integrated_registry_url>
          oras:
            extraArgs: '--insecure'
          schedule: '0 */4 * * *'
        imagePullPolicy: Always

    where:

    openshift-operators
    The default installation namespace for the Dev Workspace Operator on OpenShift. If the Dev Workspace Operator is installed in a different namespace, use that namespace instead.
    <integrated_registry_url>
    The URL to the OpenShift Container Platform integrated registry for your cluster.
    --insecure
    The --insecure flag may be required depending on the integrated registry’s routing configuration.
  2. Get the default path to the integrated registry:

    echo "$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')"

Verification

  • After the backup job completes, verify that the backup archives are available in the integrated registry. Check the Dev Workspace project for a repository with a matching Dev Workspace name.

18.2. Configure backup with a regular OCI-compatible registry

Configure the Dev Workspace backup job to use a regular OCI-compatible registry for backups. Provide registry credentials through a Kubernetes Secret in the Operator project or in each Dev Workspace project.

A Secret in the Dev Workspace project enables using different registry accounts per project with more granular access control.

Prerequisites

Procedure

  1. Configure the DevWorkspaceOperatorConfig resource to enable the backup job:

    kind: DevWorkspaceOperatorConfig
    apiVersion: controller.devfile.io/v1alpha1
    metadata:
      name: devworkspace-operator-config
      namespace: openshift-operators
    config:
      workspace:
        backupCronJob:
          enable: true
          registry:
            authSecret: devworkspace-backup-registry-auth
            path: <registry_url>
          schedule: '0 */4 * * *'
        imagePullPolicy: Always

    where:

    openshift-operators
    The default installation namespace for the Dev Workspace Operator on OpenShift. If the Dev Workspace Operator is installed in a different namespace, use that namespace instead.
    <registry_url>

    The OCI registry URL. For example: quay.io/my-company-org.

    The authSecret must be named devworkspace-backup-registry-auth. It must reference a Kubernetes Secret of type kubernetes.io/dockerconfigjson that contains credentials to access the registry. Create the Secret in the installation project for the Dev Workspace Operator.

  2. Create the registry credentials Secret:

    oc create secret docker-registry devworkspace-backup-registry-auth --from-file=config.json -n openshift-operators
  3. Add the required label to the Secret for the Dev Workspace Operator to recognize it:

    oc label secret devworkspace-backup-registry-auth controller.devfile.io/watch-secret=true -n openshift-operators
    Warning

    The Dev Workspace Operator copies the devworkspace-backup-registry-auth Secret to each Dev Workspace project so that backups from user workspaces can be pushed to the registry. To use different credentials per project, create a devworkspace-backup-registry-auth Secret with user-specific credentials in each Dev Workspace project instead.
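The config.json consumed by the oc create secret command above follows the standard dockerconfigjson layout. The following sketch uses example values: quay.io is a placeholder registry host, and the auth string is base64 of the made-up credential user:pass; substitute your real registry and credentials (typically username:token, base64-encoded).

```shell
# Write a sample dockerconfigjson-style config.json.
# "dXNlcjpwYXNz" is base64 of "user:pass" -- a placeholder, not a real credential.
cat > config.json <<'EOF'
{
  "auths": {
    "quay.io": {
      "auth": "dXNlcjpwYXNz"
    }
  }
}
EOF
# Then create the Secret from it:
#   oc create secret docker-registry devworkspace-backup-registry-auth \
#     --from-file=config.json -n openshift-operators
```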

Verification

  • After the backup job completes, verify that the backup archives are available in the OCI registry under the expected path.

Chapter 19. Manage IDE extensions

Manage IDE extensions in OpenShift Dev Spaces workspaces to control which extensions are available, trusted, and pre-installed for users across different IDE types.

19.1. Extensions for Microsoft Visual Studio Code - Open Source

OpenShift Dev Spaces uses an Open VSX registry instance to manage extensions for Microsoft Visual Studio Code - Open Source.

To manage extensions, this IDE uses one of the following Open VSX registry instances:

  • The embedded instance of the Open VSX registry that runs in the plugin-registry pod of OpenShift Dev Spaces to support air-gapped, offline, and proxy-restricted environments. The embedded Open VSX registry contains only a subset of the extensions published on the public open-vsx.org registry. This subset is customizable.
  • The public open-vsx.org registry that is accessed over the internet.
  • A standalone Open VSX registry instance that is deployed on a network accessible from OpenShift Dev Spaces workspace pods.

The default is the embedded instance of the Open VSX registry.

19.2. Configure the Open VSX registry URL

To search and install extensions, the Microsoft Visual Studio Code - Open Source editor in OpenShift Dev Spaces uses an embedded Open VSX registry instance. Configure OpenShift Dev Spaces to use another Open VSX registry instance rather than the embedded one.

The default is the embedded instance of the Open VSX registry.

If the default Open VSX registry instance does not meet your requirements, you can select one of the following instances:

  • The Open VSX registry instance at https://open-vsx.org that requires access to the internet.
  • A standalone Open VSX registry instance that is deployed on a network accessible from OpenShift Dev Spaces workspace pods.

Prerequisites

Procedure

  1. Edit the CheCluster custom resource to update the openVSXURL value:

    spec:
      components:
        pluginRegistry:
          openVSXURL: "<url_of_an_open_vsx_registry_instance>"

    where:

    <url_of_an_open_vsx_registry_instance>

    The URL of the Open VSX registry instance. For example: openVSXURL: "https://open-vsx.org".

    • To select the embedded Open VSX registry instance in the plugin-registry pod, use openVSXURL: ''. You can customize the list of included extensions from within an OpenShift Dev Spaces workspace or from the Linux command line.
    • You can also point openVSXURL at the URL of a standalone Open VSX registry instance. The URL must be accessible from within your organization’s cluster and not blocked by a proxy.
    Note

    To ensure the stability and performance of the community-supported Open VSX Registry, API usage is organized into defined tiers. The Eclipse Foundation implements these limits to protect infrastructure from high-frequency automated traffic and to provide consistent service quality for all users. For more information, see Rate Limits and Usage Tiers and the open-vsx.org wiki.

    Important

    Using https://open-vsx.org is not recommended in an air-gapped environment that is isolated from the internet. To reduce the risk of malware infections and unauthorized access to your code, use the embedded or self-hosted Open VSX registry with a curated set of extensions.

Verification

  • Confirm that the plugin-registry pod has restarted and is running.
  • Open a workspace and verify that extensions are available from the selected registry instance in the Extensions view.

19.3. Add or remove extensions in an OpenShift Dev Spaces workspace

Customize the embedded Open VSX registry instance by adding or removing extensions directly within an OpenShift Dev Spaces workspace to create a custom extension catalog for your organization.

Important

The embedded plugin registry is deprecated; the Open VSX Registry is its successor. Setting up an internal, on-premises Open VSX Registry provides full control over the extension lifecycle, enables offline use, and improves compliance. See Section 19.5, “Deploy Open VSX using an OpenShift Dev Spaces workspace” or Section 19.6, “Deploy Open VSX using the OpenShift CLI” for detailed setup instructions.

Prerequisites

  • You are logged in to your OpenShift Dev Spaces instance as an administrator.
  • You have started a workspace using the plugin registry repository.
  • You have created a Red Hat Registry Service Account and have the username and token available.
  • You have the custom plugin registry built locally on the corresponding hardware for IBM Power (ppc64le) and IBM Z (s390x) architectures.
  • You have a container image based on the latest tag or SHA to include the latest security fixes.

Procedure

  1. Identify the publisher and extension name for each extension you want to add:

    1. Find the extension on the Open VSX registry website.
    2. Copy the URL of the extension’s listing page.
    3. Extract the <publisher> and <name> from the URL:

      https://open-vsx.org/extension/<publisher>/<name>
  2. Open the openvsx-sync.json file in the repository.
  3. Add or remove extensions using the following JSON syntax:

        {
            "id": "<publisher>.<name>",
            "version": "<extension_version>"
        }
    Tip

    If you have a closed-source or internal-only extension, you can add it directly from a .vsix file. Use a URL accessible to your custom plugin registry container:

        {
            "id": "<publisher>.<name>",
            "download": "<url_to_download_vsix_file>",
            "version": "<extension_version>"
        }

    Read the Terms of Use for the Microsoft Visual Studio Marketplace before using its resources.

  4. Log in to the Red Hat registry:

    1. Navigate to Terminal > Run Task… > devfile.
    2. Run the 1. Login to registry.redhat.io task.
    3. Enter your Red Hat Registry Service Account credentials when prompted.
  5. Build and publish the custom plugin registry:

    1. Navigate to Terminal > Run Task… > devfile.
    2. Run the 2. Build and Publish a Custom Plugin Registry task.

      Note

      Verify that the CHE_CODE_VERSION in the build-config.json file matches the version of the editor currently used with OpenShift Dev Spaces. Update it if necessary.

  6. Configure OpenShift Dev Spaces to use the custom plugin registry:

    1. Navigate to Terminal > Run Task… > devfile.
    2. Run the 3. Configure Che to use the Custom Plugin Registry task.

Verification

  1. Check that the plugin-registry pod has restarted and is running.
  2. Restart your workspace.
  3. Open the Extensions view in the IDE and verify that your added extensions are available.

19.4. Add or remove extensions from the Linux command line

Build and publish a custom plugin registry from the Linux command line to create a tailored Open VSX registry with the specific extensions your organization needs.

Prerequisites

  • You have podman installed.
  • You have Node.js version 18.20.3 or higher installed.
  • You have created a Red Hat Registry Service Account and have the username and token available.
  • You have a container image based on the latest tag or SHA to include the latest security fixes.

Procedure

  1. Clone the plugin registry repository:

    $ git clone {plugin-registry-repo-url}.git
  2. Change to the plugin registry directory:

    $ cd che-plugin-registry/
  3. Log in to the Red Hat registry:

    $ podman login registry.redhat.io
  4. Identify the publisher and extension name for each extension you want to add:

    1. Find the extension on the Open VSX registry website.
    2. Copy the URL of the extension’s listing page.
    3. Extract the <publisher> and <name> from the URL:

      https://open-vsx.org/extension/<publisher>/<name>
  5. Open the openvsx-sync.json file.
  6. Add or remove extensions using the following JSON syntax:

        {
            "id": "<publisher>.<name>",
            "version": "<extension_version>"
        }
    Tip

    If you have a closed-source or internal-only extension, you can add it directly from a .vsix file. Use a URL accessible to your custom plugin registry container:

        {
            "id": "<publisher>.<name>",
            "download": "<url_to_download_vsix_file>",
            "version": "<extension_version>"
        }

    Read the Terms of Use for the Microsoft Visual Studio Marketplace before using its resources.

  7. Build the plugin registry container image:

    $ ./build.sh -o <username> -r quay.io -t custom
    Note

    Verify that the CHE_CODE_VERSION in the build-config.json file matches the version of the editor currently used with OpenShift Dev Spaces. Update it if necessary.

  8. Push the image to a container registry such as quay.io:

    $ podman push quay.io/<username>/plugin_registry:custom
  9. Edit the CheCluster custom resource in your organization’s cluster to point to the image and save the changes:

    spec:
      components:
        pluginRegistry:
          deployment:
            containers:
          - image: quay.io/<username>/plugin_registry:custom
          openVSXURL: ''
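The publisher and extension-name extraction in step 4 can also be scripted. The following sketch uses a sample listing URL; redhat/java is just an example extension.

```shell
# Derive the "<publisher>.<name>" id used in openvsx-sync.json from an
# Open VSX listing URL (sample URL shown).
URL='https://open-vsx.org/extension/redhat/java'
PUBLISHER=$(echo "$URL" | cut -d/ -f5)
NAME=$(echo "$URL" | cut -d/ -f6)
echo "${PUBLISHER}.${NAME}"
# Prints: redhat.java
```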

Verification

  1. Check that the plugin-registry pod has restarted and is running.
  2. Restart your workspace.
  3. Open the Extensions view in the IDE and verify that your added extensions are available.

19.5. Deploy Open VSX using an OpenShift Dev Spaces workspace

Deploy and configure an on-premises Eclipse Open VSX extension registry by using an OpenShift Dev Spaces workspace with the Open VSX repository.

Prerequisites

Procedure

  1. Create a workspace by using the Eclipse Open VSX repository.
  2. Run the 2.1. Create Namespace for OpenVSX task in the workspace (Terminal > Run Task…​ > devfile > 2.1. Create Namespace for OpenVSX). A new OpenShift project with the name openvsx is created on the cluster.
  3. Run the 2.4.1. Deploy Custom OpenVSX task in the workspace (Terminal > Run Task…​ > devfile > 2.4.1. Deploy Custom OpenVSX). When the task prompts for the Open VSX server image, enter registry.redhat.io/devspaces/openvsx-rhel9:3.27.

    After the deployment completes, the openvsx project contains two components: a PostgreSQL database and the Open VSX server. The Open VSX UI is accessible through an exposed route in the OpenShift cluster. The deployment definition, including default values such as OVSX_PAT_BASE64, is in the deploy/openshift/openvsx-deployment-no-es.yml file.

  4. Run the 2.5. Add OpenVSX user with PAT to the DB task in the workspace (Terminal > Run Task…​ > devfile > 2.5. Add OpenVSX user with PAT to the DB). The command prompts for the Open VSX username and user PAT. The default values are used if no custom values are entered.

    The user PAT must match the decoded value of OVSX_PAT_BASE64 specified in the deployment file. If you update OVSX_PAT_BASE64, use the new decoded value as the user PAT.

  5. Run the 2.6. Configure Che to use the internal Open VSX registry task in the workspace (Terminal > Run Task…​ > devfile > 2.6. Configure Che to use the internal OpenVSX registry). The task patches the CheCluster custom resource to use the specified Open VSX URL for the extension registry.
  6. After the openvsx-server pod is running and in the Ready state, run the 2.8. Publish a Visual Studio Code Extension from a VSIX file task to publish an extension from a .vsix file (Terminal > Run Task…​ > devfile > 2.8. Publish a Visual Studio Code Extension from a VSIX file). The command prompts for the extension’s namespace name and the path to the .vsix file.
  7. Optional: To publish multiple extensions, update the deploy/openshift/extensions.txt file with the download URLs of each .vsix file, then run the 2.9. Publish list of Visual Studio Code Extensions task (Terminal > Run Task…​ > devfile > 2.9. Publish list of Visual Studio Code Extensions).

Verification

  • Start any workspace and verify the published extensions are available in the Extensions view of the workspace IDE.
  • Open the internal route in the openvsx OpenShift project to verify the Open VSX registry UI.

19.6. Deploy Open VSX using the OpenShift CLI

Deploy and configure an on-premises Eclipse Open VSX extension registry by using the oc CLI tool.

Prerequisites

  • You have the oc tool installed.
  • You are logged in to the OpenShift cluster where OpenShift Dev Spaces is deployed as a cluster administrator.

    Tip

    $ oc login https://<openshift_dev_spaces_fqdn> --username=<my_user>

Procedure

  1. Create a new OpenShift project for Open VSX:

    oc new-project openvsx
  2. Save the openvsx-deployment-no-es.yml file on your file system.
  3. Deploy Open VSX from the directory where you saved the file:

    oc process -f openvsx-deployment-no-es.yml \
       -p OPENVSX_SERVER_IMAGE=registry.redhat.io/devspaces/openvsx-rhel9:3.27 \
       | oc apply -f -
  4. Verify that all pods in the openvsx namespace are running and ready:

    oc get pods -n openvsx \
      -o jsonpath='{range .items[*]}{@.metadata.name}{"\t"}{@.status.phase}{"\t"}{.status.containerStatuses[*].ready}{"\n"}{end}'
  5. Add an Open VSX user with PAT to the database.

    1. Find the PostgreSQL pod:

      export POSTGRESQL_POD_NAME=$(oc get pods -n openvsx \
         -o jsonpath="{.items[*].metadata.name}" | tr ' ' '\n' | grep '^postgresql' | head -n 1)
    2. Insert the username into the OpenVSX database:

      oc exec -n openvsx "$POSTGRESQL_POD_NAME" -- bash -c \
         "psql -d openvsx -c \"INSERT INTO user_data (id, login_name, role) VALUES (1001, 'eclipse-che', 'privileged');\""
    3. Insert the user PAT into the OpenVSX database:

      oc exec -n openvsx "$POSTGRESQL_POD_NAME" -- bash -c \
         "psql -d openvsx -c \"INSERT INTO personal_access_token (id, user_data, value, active, created_timestamp, accessed_timestamp, description) VALUES (1001, 1001, 'eclipse_che_token', true, current_timestamp, current_timestamp, 'extensions publisher');\""
  6. Configure OpenShift Dev Spaces to use the internal Open VSX:

    export CHECLUSTER_NAME="$(oc get checluster --all-namespaces -o json | jq -r '.items[0].metadata.name')" &&
    export CHECLUSTER_NAMESPACE="$(oc get checluster --all-namespaces -o json | jq -r '.items[0].metadata.namespace')" &&
    export OPENVSX_ROUTE_URL="$(oc get route internal -n openvsx -o jsonpath='{.spec.host}')" &&
    export PATCH='{"spec":{"components":{"pluginRegistry":{"openVSXURL":"https://'"$OPENVSX_ROUTE_URL"'"}}}}' &&
    oc patch checluster "${CHECLUSTER_NAME}" --type=merge --patch "${PATCH}" -n "${CHECLUSTER_NAMESPACE}"
    Tip

    Refer to Section 19.2, “Configure the Open VSX registry URL” for detailed instructions on configuring the Open VSX registry URL in OpenShift Dev Spaces.

  7. Publish Visual Studio Code extensions with the ovsx command. The Open VSX registry does not provide any extensions by default. You need the extension namespace name and the download URL of the .vsix package.

    1. Retrieve the name of the pod running the Open VSX server:

      export OVSX_POD_NAME=$(oc get pods -n openvsx -o jsonpath="{.items[*].metadata.name}" | tr ' ' '\n' | grep ^openvsx-server)
    2. Download the .vsix extension:

      oc exec -n openvsx "${OVSX_POD_NAME}" -- bash -c "wget -O /tmp/extension.vsix <EXTENSION_DOWNLOAD_URL>"
    3. Create an extension namespace:

      oc exec -n openvsx "${OVSX_POD_NAME}" -- bash -c "ovsx create-namespace <EXTENSION_NAMESPACE_NAME>" || true
    4. Publish the extension:

      oc exec -n openvsx "${OVSX_POD_NAME}" -- bash -c "ovsx publish /tmp/extension.vsix"
    5. Delete the downloaded extension file:

      oc exec -n openvsx "${OVSX_POD_NAME}" -- bash -c "rm /tmp/extension.vsix"
  8. Optional: Remove the public route to configure internal access to the Open VSX service:

    oc delete route internal -n openvsx
  9. Optional: Set the internal Open VSX service URL so that OpenShift Dev Spaces uses internal cluster service routing instead of a public route:

    export CHECLUSTER_NAME="$(oc get checluster --all-namespaces -o json | jq -r '.items[0].metadata.name')" &&
    export CHECLUSTER_NAMESPACE="$(oc get checluster --all-namespaces -o json | jq -r '.items[0].metadata.namespace')" &&
    export PATCH='{"spec":{"components":{"pluginRegistry":{"openVSXURL":"http://openvsx-server.openvsx.svc:8080"}}}}' &&
    oc patch checluster "${CHECLUSTER_NAME}" --type=merge --patch "${PATCH}" -n "${CHECLUSTER_NAMESPACE}"
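
To confirm which registry URL the CheCluster currently points at, you can read the field back. This is a sketch using the same variables as the patch command above:

```shell
# Read the configured Open VSX URL back from the CheCluster custom resource.
export CHECLUSTER_NAME="$(oc get checluster --all-namespaces -o json | jq -r '.items[0].metadata.name')"
export CHECLUSTER_NAMESPACE="$(oc get checluster --all-namespaces -o json | jq -r '.items[0].metadata.namespace')"
oc get checluster "${CHECLUSTER_NAME}" -n "${CHECLUSTER_NAMESPACE}" \
  -o jsonpath='{.spec.components.pluginRegistry.openVSXURL}'
```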

Verification

  • Check the list of published extensions by navigating to the Open VSX route URL or the internal service URL.

19.7. Delete an extension by using the Open VSX administrator API

Delete an extension from your private Open VSX registry by calling the administrator API with an administrator user and a Personal Access Token (PAT).

Prerequisites

  • You have access to the OpenShift cluster where the Open VSX registry is deployed in the openvsx project.

Procedure

  1. Add the Open VSX administrator user and PAT to the database:

    POD=$(oc get pods -n openvsx -l app=openvsx-db -o jsonpath='{.items[0].metadata.name}')
    oc exec -n openvsx "$POD" -- psql -d openvsx -c \
      "INSERT INTO user_data (id, login_name, role) VALUES (1002, 'openvsx-admin', 'admin');"
    oc exec -n openvsx "$POD" -- psql -d openvsx -c \
      "INSERT INTO personal_access_token (id, user_data, value, active, created_timestamp, accessed_timestamp, description) VALUES (1002, 1002, '<your_admin_token>', true, current_timestamp, current_timestamp, 'Admin API Token');"
    Note

    Use a strong, unique value for <your_admin_token> in production environments.
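
One way to generate such a token; the command choice is only a suggestion:

```shell
# Generate a random 64-character hexadecimal token for <your_admin_token>.
ADMIN_TOKEN=$(openssl rand -hex 32)
echo "$ADMIN_TOKEN"
```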

  2. Delete an extension and all its versions:

    curl -X POST \
      "https://<your_openvsx_server_url>/admin/api/extension/<publisher>/<extension>/delete?token=<your_admin_token>"

    where:

    <your_openvsx_server_url>
    The URL of the Open VSX server.
    <publisher>
    The extension publisher name.
    <extension>
    The extension name.
    <your_admin_token>
    The PAT value created in step 1.
  3. Optional: Delete a specific version of an extension:

    curl -X POST \
      -H "Content-Type: application/json" \
      -d '[{"version": "<version>", "targetPlatform": "<platform>"}]' \
      "https://<your_openvsx_server_url>/admin/api/extension/<publisher>/<extension>/delete?token=<your_admin_token>"

    You can list multiple version and platform pairs in the JSON array.
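
For example, a payload that removes two versions in one call might look like the following. The version and target platform values are placeholders, shown here only to illustrate the JSON shape:

```shell
# Build a JSON array with two version/platform pairs for the delete endpoint.
PAYLOAD='[{"version": "1.0.0", "targetPlatform": "universal"},
          {"version": "1.1.0", "targetPlatform": "linux-x64"}]'
echo "$PAYLOAD"
# The payload would be sent with:
# curl -X POST -H "Content-Type: application/json" -d "$PAYLOAD" \
#   "https://<your_openvsx_server_url>/admin/api/extension/<publisher>/<extension>/delete?token=<your_admin_token>"
```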

Verification

  • Refresh the Open VSX registry and verify that the extension no longer appears.

19.8. Delete an extension from the PostgreSQL database directly

Delete an extension from the PostgreSQL database directly when the administrator API is not available or when you need specific data cleanup.

Prerequisites

  • You have access to the OpenShift cluster where the Open VSX registry is deployed in the openvsx project.
  • You know the namespace name and extension name to delete.

Procedure

  1. Identify the PostgreSQL pod:

    POD=$(oc get pods -n openvsx -l app=openvsx-db -o jsonpath='{.items[0].metadata.name}')
  2. Connect to the PostgreSQL database:

    oc exec -it -n openvsx "$POD" -- psql
    \c openvsx
  3. Find the namespace ID and extension ID:

    SELECT id, name FROM namespace WHERE name = '<namespace_name>';
    SELECT id, name, namespace_id FROM extension WHERE namespace_id = <namespace_id> AND name = '<extension_name>';
  4. Optional: Preview extension versions and file resources before deleting:

    SELECT id, version, target_platform FROM extension_version WHERE extension_id = <extension_id>;
    SELECT id, name, storage_type FROM file_resource WHERE extension_id = <extension_id>;

    If storage_type is local, you must also remove the files from the file system after deleting the database records.

  5. Delete the extension from the database:

    BEGIN;
    DELETE FROM file_resource WHERE extension_id = <extension_id>;
    DELETE FROM extension_review WHERE extension_id = <extension_id>;
    DELETE FROM extension_version WHERE extension_id = <extension_id>;
    DELETE FROM extension WHERE id = <extension_id>;
    COMMIT;
    Important

    Run these commands in order within one transaction. Do not skip the COMMIT statement.
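
After the transaction commits, you can confirm that the rows are gone with a quick count. This sketch follows the same oc exec pattern used above:

```shell
# Confirm that no extension_version rows remain for the deleted extension ID.
POD=$(oc get pods -n openvsx -l app=openvsx-db -o jsonpath='{.items[0].metadata.name}')
oc exec -n openvsx "$POD" -- psql -d openvsx -c \
  "SELECT count(*) FROM extension_version WHERE extension_id = <extension_id>;"
```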

  6. If the storage_type is local, remove the extension files from local storage:

    SERVER_POD=$(oc get pods -n openvsx -l app=openvsx-server -o jsonpath='{.items[0].metadata.name}')
    oc exec -n openvsx "$SERVER_POD" -- rm -rf /tmp/extensions/<publisher>/<extension>

Verification

  • Refresh the Open VSX registry and verify that the extension no longer appears.

Chapter 20. Configure Visual Studio Code - Open Source ("Code - OSS")

Configure Visual Studio Code - Open Source ("Code - OSS") for OpenShift Dev Spaces workspaces, including multi-root project layout, trusted and default extensions, and editor settings.

20.1. Configure single and multiroot workspaces

Work with multiple project folders in the same workspace by using the multi-root workspace feature. This is useful when you are working on several related projects at once, such as product documentation and product code repositories.

By default, workspaces open in multi-root mode. After a workspace starts, the /projects/.code-workspace workspace file is generated. The workspace file contains all the projects described in the devfile.

{
	"folders": [
		{
			"name": "project-1",
			"path": "/projects/project-1"
		},
		{
			"name": "project-2",
			"path": "/projects/project-2"
		}
	]
}

If the workspace file already exists, it is updated and all missing projects are taken from the devfile. If you remove a project from the devfile, it remains in the workspace file.

You can change the default behavior and provide your own workspace file or switch to a single-root workspace.

Prerequisites

  • You have a running instance of OpenShift Dev Spaces.

Procedure

  1. Add a workspace file with the name .code-workspace to the root of your repository. After workspace creation, Visual Studio Code - Open Source ("Code - OSS") uses the workspace file as is.

    {
    	"folders": [
    		{
    			"name": "project-name",
    			"path": "."
    		}
    	]
    }
    Important

    Be careful when creating a workspace file. If the file contains errors, an empty Visual Studio Code - Open Source ("Code - OSS") workspace opens instead. If you have several projects, the workspace file is taken from the first project. If the workspace file does not exist in the first project, a new one is created and placed in the /projects directory.

  2. Define the VSCODE_DEFAULT_WORKSPACE environment variable in your devfile with the path to an alternative workspace file.

       env:
         - name: VSCODE_DEFAULT_WORKSPACE
           value: "/projects/project-name/workspace-file"
  3. Define the VSCODE_DEFAULT_WORKSPACE environment variable and set it to / to open a workspace in single-root mode.

       env:
         - name: VSCODE_DEFAULT_WORKSPACE
           value: "/"

Verification

  • Start or restart the workspace and verify that Code - OSS opens with the expected workspace mode (single-root or multi-root).

20.2. Configure trusted extensions for Microsoft Visual Studio Code

Grant specific extensions access to OAuth authentication tokens in Microsoft Visual Studio Code by configuring the trustedExtensionAuthAccess field. This allows extensions that require access to services such as GitHub, Microsoft, or any other OAuth-enabled service to authenticate without manual intervention.

	"trustedExtensionAuthAccess": [
		"<publisher1>.<extension1>",
		"<publisher2>.<extension2>"
	]

Define the variable in the devfile or in a ConfigMap.

Warning

Use the trustedExtensionAuthAccess field with caution as it could potentially lead to security risks if misused. Give access only to trusted extensions.

Important

Because the Microsoft Visual Studio Code editor is bundled within the che-code image, you can only change the product.json file when the workspace starts.

Prerequisites

  • You have a running instance of OpenShift Dev Spaces.

Procedure

  1. Define the VSCODE_TRUSTED_EXTENSIONS environment variable in devfile.yaml:

       env:
         - name: VSCODE_TRUSTED_EXTENSIONS
           value: "<publisher1>.<extension1>,<publisher2>.<extension2>"
  2. Alternatively, mount a ConfigMap with the VSCODE_TRUSTED_EXTENSIONS environment variable. With a ConfigMap, the variable is propagated to all your workspaces and you do not need to add the variable to each devfile you are using.

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: trusted-extensions
      labels:
        controller.devfile.io/mount-to-devworkspace: 'true'
        controller.devfile.io/watch-configmap: 'true'
      annotations:
        controller.devfile.io/mount-as: env
    data:
      VSCODE_TRUSTED_EXTENSIONS: '<publisher1>.<extension1>,<publisher2>.<extension2>'
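
From a terminal inside a restarted workspace, you can check that the variable was propagated. The extension IDs below are hypothetical, and the variable is assigned here only to make the sketch self-contained; in a real workspace the value is injected by the ConfigMap:

```shell
# Simulate the mounted variable, then list one extension ID per line.
VSCODE_TRUSTED_EXTENSIONS="redhat.java,github.vscode-pull-request-github"
printf '%s\n' "$VSCODE_TRUSTED_EXTENSIONS" | tr ',' '\n'
```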

Verification

  • Start or restart the workspace and verify that the trustedExtensionAuthAccess section is added to the product.json file.

20.3. Configure default extensions

Pre-install VS Code extensions in OpenShift Dev Spaces workspaces by configuring the DEFAULT_EXTENSIONS environment variable to provide a consistent set of editor extensions on workspace startup.

After startup, the editor checks for the DEFAULT_EXTENSIONS environment variable and installs the specified extensions in the background. To specify multiple extensions, separate the paths with a semicolon.

There are three ways to embed default .vsix extensions into your workspace:

  • Add the extension binary to the source repository.
  • Use the devfile postStart event to fetch extension binaries from the network.
  • Include the extensions' .vsix binaries in the che-code image.

Prerequisites

  • You have a running OpenShift Dev Spaces instance.

Procedure

  1. Add the extension binary to the source repository.

    Adding the extension binary to the Git repository and defining the environment variable in the devfile is the easiest way to add default extensions to your workspace. If the extension.vsix file exists in the repository root, set the DEFAULT_EXTENSIONS environment variable for the tooling container in your .devfile.yaml:

    schemaVersion: 2.3.0
    metadata:
      generateName: example-project
    components:
      - name: tools
        container:
          image: quay.io/devfile/universal-developer-image:ubi8-latest
          env:
            - name: 'DEFAULT_EXTENSIONS'
              value: '/projects/example-project/extension.vsix'
  2. Use the devfile postStart event to fetch extension binaries from the network.

    Use cURL or GNU Wget to download extensions to your workspace. Specify a devfile command to download extensions and add a postStart event to run the command on workspace startup. Define the DEFAULT_EXTENSIONS environment variable in the devfile:

    schemaVersion: 2.3.0
    metadata:
      generateName: example-project
    components:
      - name: tools
        container:
          image: quay.io/devfile/universal-developer-image:ubi8-latest
          env:
            - name: DEFAULT_EXTENSIONS
              value: '/tmp/extension-1.vsix;/tmp/extension-2.vsix'
    
    commands:
      - id: add-default-extensions
        exec:
          # name of the tooling container
          component: tools
          # download several extensions using curl
          commandLine: |
            curl https://.../extension-1.vsix --location -o /tmp/extension-1.vsix
            curl https://.../extension-2.vsix --location -o /tmp/extension-2.vsix
    
    events:
      postStart:
        - add-default-extensions
    Warning

    In some cases, curl might download a .gzip compressed file, which can make installing the extension impossible. To fix this, save the file as a .vsix.gz file and then decompress it with gunzip. This replaces the .vsix.gz file with an unpacked .vsix file: curl https://some-extension-url --location -o /tmp/extension.vsix.gz && gunzip /tmp/extension.vsix.gz
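
The gzip case can also be handled automatically by checking the downloaded file's magic bytes. This is a sketch, not part of the product tooling; the example URL is a placeholder:

```shell
# Sketch: download a .vsix and unpack it when the server delivers gzip data.
# A gzip payload starts with the two magic bytes 1f 8b.
fetch_vsix() {
  url="$1"
  dest="$2"
  curl -sL "$url" -o "$dest"
  if [ "$(head -c 2 "$dest" | od -An -tx1 | tr -d ' \n')" = "1f8b" ]; then
    mv "$dest" "$dest.gz"
    gunzip "$dest.gz"
  fi
}

# Example invocation (placeholder URL):
# fetch_vsix "https://some-extension-url" /tmp/extension.vsix
```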

  3. Include the extensions' .vsix binaries in the che-code image.

    Bundling extensions in the editor image and defining the DEFAULT_EXTENSIONS environment variable in a ConfigMap applies default extensions without changing the devfile.

    1. Create a directory and place your selected .vsix extensions in this directory.
    2. Create a Dockerfile with the following content:

      # inherit che-incubator/che-code:latest
      FROM quay.io/che-incubator/che-code:latest
      USER 0
      
      # copy all .vsix files to /default-extensions directory
      RUN mkdir --mode=775 /default-extensions
      COPY --chmod=755 *.vsix /default-extensions/
      
      # add instruction to the script to copy default extensions to the working container
      RUN echo "cp -r /default-extensions /checode/" >> /entrypoint-init-container.sh
    3. Build the image and then push it to a registry:

      $ docker build -t yourname/che-code:next .
      $ docker push yourname/che-code:next
    4. Add the new ConfigMap to the user’s project, define the DEFAULT_EXTENSIONS environment variable, and specify the absolute paths to the extensions. This ConfigMap sets the environment variable to all workspaces in the user’s project.

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: vscode-default-extensions
        labels:
          controller.devfile.io/mount-to-devworkspace: 'true'
          controller.devfile.io/watch-configmap: 'true'
        annotations:
          controller.devfile.io/mount-as: env
      data:
        DEFAULT_EXTENSIONS: '/checode/default-extensions/extension1.vsix;/checode/default-extensions/extension2.vsix'
    5. Open the OpenShift Dev Spaces Dashboard and navigate to the Create Workspace tab on the left side.
    6. In the Editor Selector section, expand the Use an Editor Definition dropdown and set the editor URI to yourname/che-code:next.
    7. Create a workspace by selecting a sample or entering a Git repository URL.

Verification

  • Verify that the extensions are installed in the workspace by checking the Extensions panel in the editor.

20.4. Visual Studio Code - Open Source editor configuration sections

The Visual Studio Code - Open Source ("Code - OSS") editor supports several configuration sections in a ConfigMap. Each section maps to a specific editor config file and controls a different aspect of editor behavior.

The following sections are currently supported:

settings.json
Contains various settings with which you can customize different parts of the Code - OSS editor.
extensions.json
Contains recommended extensions that are installed when a workspace is started.
product.json
Contains properties that you need to add to the editor’s product.json file. If the property already exists, its value is updated.
configurations.json

Contains properties for Code - OSS editor configuration. For example, you can use the extensions.install-from-vsix-enabled property to disable the Install from VSIX menu item in the Extensions panel.

Note

The extensions.install-from-vsix-enabled property disables only the UI action. Extensions can still be installed by using the workbench.extensions.command.installFromVSIX API command or the CLI. To block these paths as well, manage extension installation policies.

policy.json
Controls Code - OSS extension installation by using the AllowedExtensions policy and the ability to fully block extension installation.

20.5. Apply Code - OSS editor configurations with a ConfigMap

Configure the Code - OSS editor for all workspaces by defining settings, recommended extensions, and product properties in a ConfigMap. When you start a workspace, the editor reads this ConfigMap and applies the configurations to the corresponding config files.

Prerequisites

  • You have an active OpenShift Dev Spaces workspace or you are ready to start one.
  • You have an active oc session with permissions to create ConfigMaps in user projects.

Procedure

  1. Add a new ConfigMap in valid JSON format to the user’s project, define the supported sections, and specify the properties you want to add.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: vscode-editor-configurations
      labels:
         app.kubernetes.io/part-of: che.eclipse.org
    data:
      extensions.json: |
        {
          "recommendations": [
              "dbaeumer.vscode-eslint",
              "github.vscode-pull-request-github"
          ]
        }
      settings.json: |
        {
          "window.header": "A HEADER MESSAGE",
          "window.commandCenter": false,
          "workbench.colorCustomizations": {
            "titleBar.activeBackground": "#CCA700",
            "titleBar.activeForeground": "#ffffff"
          }
        }
      product.json: |
        {
          "extensionEnabledApiProposals": {
            "ms-python.python": [
              "contribEditorContentMenu",
              "quickPickSortByLabel"
            ]
          },
          "trustedExtensionAuthAccess": [
            "<publisher1>.<extension1>",
            "<publisher2>.<extension2>"
          ]
        }
      configurations.json: |
        {
          "extensions.install-from-vsix-enabled": false
        }

    where:

    <publisher1>.<extension1>, <publisher2>.<extension2>
    The publisher and extension name pairs for extensions that are granted trusted authentication access. Use the format publisher.extensionName.
  2. Optional: To replicate the ConfigMap across all user projects while preventing user modifications, add the ConfigMap to the openshift-devspaces namespace instead of individual user projects.
  3. Start or restart your workspace.

Verification

  1. Verify that settings defined in the ConfigMap are applied using one of the following methods:

    • Use F1 → Preferences: Open Remote Settings to check if the defined settings are applied.
    • Ensure that the settings from the ConfigMap are present in the /checode/remote/data/Machine/settings.json file by using the F1 → File: Open File…​ command to inspect the file’s content.
  2. Verify that extensions defined in the ConfigMap are applied:

    • Go to the Extensions view (F1 → View: Show Extensions) and check that the extensions are installed.
    • Ensure that the extensions from the ConfigMap are present in the .code-workspace file by using the F1 → File: Open File…​ command. By default, the workspace file is placed at /projects/.code-workspace.
  3. Verify that product properties defined in the ConfigMap are being added to the Visual Studio Code product.json:

    • Open a terminal, run the command cat /checode/entrypoint-logs.txt | grep -a "Node.js dir", and copy the Visual Studio Code path.
    • Press Ctrl + O, paste the copied path, and open the product.json file.
    • Ensure that the product.json file contains all the properties defined in the ConfigMap.
  4. Verify that the extensions.install-from-vsix-enabled property defined in the ConfigMap is applied to the Code - OSS editor:

    • Open the Command Palette (press F1) and check that the Install from VSIX command is not present in the list of commands.
    • Use F1 → Open View → Extensions to open the Extensions panel, then click …​ (the Views and More Actions tooltip) and check that the Install from VSIX action is absent from the list of actions.
    • Go to the Explorer, find a file with the .vsix extension (for example, redhat.vscode-yaml-1.17.0.vsix), and open the context menu for that file. The Install from VSIX action should be absent from the menu.

20.6. Manage extension installation with a ConfigMap

Control Code - OSS extension installation by using a ConfigMap. Enforce a fine-grained allow or deny list by using the AllowedExtensions policy.

You can also block installs through the CLI, default extensions, and the workbench.extensions.command.installFromVSIX API command. The following properties are supported:

  • BlockCliExtensionsInstallation — when enabled, blocks installation of extensions through the CLI.
  • BlockDefaultExtensionsInstallation — when enabled, blocks installation of default extensions. See Section 20.3, “Configure default extensions”.
  • BlockInstallFromVSIXCommandExtensionsInstallation — when enabled, blocks installation of extensions through the workbench.extensions.command.installFromVSIX API command.
  • AllowedExtensions — provides fine-grained control over Code - OSS extension installation. When this policy is applied, already installed extensions that are not allowed are disabled and display a warning. For conceptual background, see Configure allowed extensions in the Visual Studio Code documentation.

Prerequisites

  • You have administrator access to the OpenShift cluster.

Procedure

  1. Add a new ConfigMap to the openshift-devspaces namespace and specify the properties you want to add:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: vscode-editor-configurations
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/component: workspaces-config
        app.kubernetes.io/part-of: che.eclipse.org
      annotations:
        controller.devfile.io/mount-as: subpath
        controller.devfile.io/mount-path: /checode-config
        controller.devfile.io/read-only: 'true'
    data:
      policy.json: |
        {
          "BlockCliExtensionsInstallation": true,
          "BlockDefaultExtensionsInstallation": true,
          "BlockInstallFromVSIXCommandExtensionsInstallation": true,
          "AllowedExtensions": {
              "*": true,
              "dbaeumer.vscode-eslint": false,
              "ms-python.python": false,
              "redhat": false
           }
        }
    Note

    Ensure that the ConfigMap contains data in a valid JSON format.

  2. Optional: To completely disable extension installation instead of using fine-grained control, set all extensions to disallowed:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: vscode-editor-configurations
      namespace: openshift-devspaces
      labels:
        app.kubernetes.io/component: workspaces-config
        app.kubernetes.io/part-of: che.eclipse.org
      annotations:
        controller.devfile.io/mount-as: subpath
        controller.devfile.io/mount-path: /checode-config
        controller.devfile.io/read-only: 'true'
    data:
      policy.json: |
        {
          "AllowedExtensions": {
            "*": false
          }
        }
  3. Start or restart your workspace.
  4. Optional: Add the ConfigMap in the user’s project:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: vscode-editor-configurations
      labels:
        controller.devfile.io/mount-to-devworkspace: 'true'
        controller.devfile.io/watch-configmap: 'true'
      annotations:
        controller.devfile.io/mount-as: subpath
        controller.devfile.io/mount-path: /checode-config
        controller.devfile.io/read-only: 'true'
    data:
      policy.json: |
        {
          "AllowedExtensions": {
              "*": false
           }
        }
    Note

    When the ConfigMap is stored in the user’s project, the user can edit its values.

Verification

  1. Verify that the BlockCliExtensionsInstallation property is applied:

    • Press F1, select Preferences: Open Settings (UI), and enter BlockCliExtensionsInstallation in search.
    • Provide a .vsix file and try to install it through the CLI. The installation fails with the message "Installation of extensions via CLI has been blocked by an administrator".
  2. Verify that the BlockDefaultExtensionsInstallation property is applied:

    • Check Settings for the property.
    • Configure default extensions and verify they are not installed on workspace start or restart.
  3. Verify that the BlockInstallFromVSIXCommandExtensionsInstallation property is applied:

    • Check Settings for the property.
    • The workbench.extensions.command.installFromVSIX API command is blocked.
  4. Verify that rules defined in the AllowedExtensions section are applied:

    • Check Settings → extensions.allowed.
    • Disallowed extensions display a "This extension cannot be installed because it is not in the allowed list" warning.

Chapter 21. Use the OpenShift Dev Spaces server API

Use the Swagger web user interface to explore and interact with the OpenShift Dev Spaces server and dashboard APIs for programmatic integration and automation.

Procedure

  1. Navigate to the Swagger API web user interface:

    • https://<openshift_dev_spaces_fqdn>/swagger (OpenShift Dev Spaces server)
    • https://<openshift_dev_spaces_fqdn>/dashboard/api/swagger (OpenShift Dev Spaces dashboard)

      Important

      DevWorkspace is a Kubernetes object, so manipulate it through the Kubernetes API. See Managing workspaces with APIs in the User Guide.
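
For scripted access, you can call the server API with an OpenShift bearer token. The endpoint path below is an assumption carried over from the Eclipse Che server API; browse the Swagger UI for the authoritative list of paths:

```shell
# Query the Dev Spaces server API with the current user's OpenShift token.
# /api/system/state is an assumed endpoint; check /swagger for available paths.
TOKEN=$(oc whoami --show-token)
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://<openshift_dev_spaces_fqdn>/api/system/state"
```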

Chapter 22. Upgrade OpenShift Dev Spaces using the web console

Upgrade OpenShift Dev Spaces from the previous minor version using the OpenShift web console operator interface to receive the latest bug fixes, security patches, and feature improvements.

22.1. Specify the update approval strategy

Configure the update approval strategy for the Red Hat OpenShift Dev Spaces Operator to control how updates are applied.

The Red Hat OpenShift Dev Spaces Operator supports two upgrade strategies:

Automatic
The Operator installs new updates when they become available.
Manual
New updates need to be manually approved before installation begins.

Prerequisites

Procedure

  1. In the OpenShift web console, navigate to Operators → Installed Operators.
  2. Click Red Hat OpenShift Dev Spaces in the list of installed Operators.
  3. Navigate to the Subscription tab.
  4. Configure the Update approval strategy to Automatic or Manual.

22.2. Upgrade Dev Spaces using the OpenShift web console

Manually approve an upgrade from an earlier minor version by using the Red Hat OpenShift Dev Spaces Operator in the OpenShift web console. Controlled upgrades ensure you get the latest features, fixes, and security updates at your own pace.

Prerequisites

Verification

  1. Navigate to the OpenShift Dev Spaces instance.
  2. The 3.27 version number is visible at the bottom of the page.

22.3. Repair the Dev Workspace Operator on OpenShift

If an OLM restart or cluster upgrade causes a duplicate Dev Workspace Operator installation, repair the Dev Workspace Operator on OpenShift.

Prerequisites

Procedure

  1. Delete the devworkspace-controller namespace that contains the failing pod.
  2. Update DevWorkspace and DevWorkspaceTemplate Custom Resource Definitions (CRD) by setting the conversion strategy to None and removing the entire webhook section:

    spec:
      ...
      conversion:
        strategy: None
    status:
      ...
    Tip

    You can find and edit the DevWorkspace and DevWorkspaceTemplate CRDs in the Administrator perspective of the OpenShift web console by searching for DevWorkspace in Administration → CustomResourceDefinitions.

    Note

    The DevWorkspaceOperatorConfig and DevWorkspaceRouting CRDs have the conversion strategy set to None by default.

  3. Remove the Dev Workspace Operator subscription:

    $ oc delete sub devworkspace-operator \
    -n openshift-operators

    where:

    -n
    The openshift-operators namespace or the OpenShift project where the Dev Workspace Operator is installed.
  4. Get the Dev Workspace Operator CSVs in the <devworkspace_operator.vX.Y.Z> format:

    $ oc get csv | grep devworkspace
  5. Remove each Dev Workspace Operator CSV:

    $ oc delete csv <devworkspace_operator.vX.Y.Z> \
    -n openshift-operators

    where -n specifies openshift-operators or an OpenShift project where the Dev Workspace Operator is installed.
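Steps 4 and 5 can be combined into a single loop. A sketch, assuming the default openshift-operators namespace:

```shell
# Sketch: delete every CSV whose name contains "devworkspace" in one pass.
NS="openshift-operators"  # or the project where the Dev Workspace Operator is installed
if command -v oc >/dev/null 2>&1; then
  for csv in $(oc get csv -n "$NS" -o name | grep devworkspace); do
    oc delete -n "$NS" "$csv"
  done
else
  echo "oc CLI not found; run this against the cluster."
fi
```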
  6. Re-create the Dev Workspace Operator subscription:

    $ cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: devworkspace-operator
      namespace: openshift-operators
    spec:
      channel: fast
      name: devworkspace-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      installPlanApproval: Automatic
      startingCSV: devworkspace-operator.v0.40.0
    EOF
    where installPlanApproval is Automatic or Manual.

    Important

    For installPlanApproval: Manual, in the Administrator perspective of the OpenShift web console, go to Operators → Installed Operators and select the following for the Dev Workspace Operator: Upgrade available → Preview InstallPlan → Approve.

Verification

  • In the Administrator perspective of the OpenShift web console, go to Operators → Installed Operators and verify the Succeeded status of the Dev Workspace Operator.

Chapter 23. Upgrade OpenShift Dev Spaces using the CLI management tool

Upgrade OpenShift Dev Spaces from the previous minor version by using the CLI management tool to receive the latest bug fixes, security patches, and feature improvements. Before you begin, upgrade the dsc management tool to version 3.27 by reinstalling it, following the installation procedure.

23.1. Upgrade OpenShift Dev Spaces using the CLI management tool

Upgrade OpenShift Dev Spaces from the previous minor version using the CLI management tool to receive the latest bug fixes, security patches, and feature improvements.

Prerequisites

  • You have an administrative account on OpenShift.
  • You have a previous minor version of OpenShift Dev Spaces installed using the CLI management tool on the same instance of OpenShift, in the openshift-devspaces project.
  • You have dsc for OpenShift Dev Spaces version 3.27 installed. See Section 2.2, “Install the dsc management tool”.

Procedure

  1. Save and push changes back to the Git repositories for all running OpenShift Dev Spaces 3.26 workspaces.
  2. Shut down all workspaces in the OpenShift Dev Spaces 3.26 instance.
  3. Upgrade OpenShift Dev Spaces:

    $ dsc server:update -n openshift-devspaces
    Note

    For slow systems or internet connections, add the --k8spodwaittimeout=1800000 flag option to extend the Pod timeout period to 1800000 ms or longer.

Verification

  1. Navigate to the OpenShift Dev Spaces instance.
  2. The 3.27 version number is visible at the bottom of the page.
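The version check can also be done from the CLI instead of the dashboard. A sketch, assuming the CheCluster status exposes a cheVersion field (an assumption; inspect the resource on your cluster to confirm):

```shell
NS="openshift-devspaces"
if command -v oc >/dev/null 2>&1; then
  # Print the version reported in the CheCluster status.
  oc get checluster -n "$NS" -o jsonpath='{.items[0].status.cheVersion}'
else
  echo "oc CLI not found; use the dashboard to confirm the version."
fi
```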

23.2. Upgrade OpenShift Dev Spaces in a restricted environment

Upgrade Red Hat OpenShift Dev Spaces and perform minor version updates by using the CLI management tool in a restricted environment.

Prerequisites

Procedure

  1. Download and execute the mirroring script to install a custom Operator catalog and mirror the related images: prepare-restricted-environment.sh.

    $ bash prepare-restricted-environment.sh \
      --devworkspace_operator_index "registry.redhat.io/redhat/redhat-operator-index:v4.22" \
      --devworkspace_operator_version "v0.40.0" \
      --prod_operator_index "registry.redhat.io/redhat/redhat-operator-index:v4.22" \
      --prod_operator_package_name "devspaces" \
      --prod_operator_bundle_name "devspacesoperator" \
      --prod_operator_version "v3.27.0" \
      --my_registry "<my_registry>"

    where:

    <my_registry>
    The private Docker registry where the images are mirrored
  2. In all running workspaces in the OpenShift Dev Spaces 3.26 instance, save and push changes back to the Git repositories.
  3. Stop all workspaces in the OpenShift Dev Spaces 3.26 instance.
  4. Run the following command:

    $ dsc server:update --che-operator-image="$TAG" -n openshift-devspaces --k8spodwaittimeout=1800000

Verification

  1. Navigate to the OpenShift Dev Spaces instance.
  2. The 3.27 version number is visible at the bottom of the page.

Chapter 24. Uninstall OpenShift Dev Spaces

Use dsc to uninstall the OpenShift Dev Spaces instance and remove all OpenShift Dev Spaces-related user data from the cluster.

Warning

Uninstalling OpenShift Dev Spaces removes all OpenShift Dev Spaces-related user data.

Prerequisites

Procedure

  1. Remove the OpenShift Dev Spaces instance:

    $ dsc server:delete
    Tip

    The --delete-namespace option removes the OpenShift Dev Spaces namespace.

    The --delete-all option removes the Dev Workspace Operator and the related resources.

    Important

    Standard operating procedure (SOP) for removing the Dev Workspace Operator manually without dsc is available in the official OpenShift Container Platform documentation.
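Combining the options described in the Tip above, a full teardown might look like the following sketch. This is destructive: it removes the OpenShift Dev Spaces namespace, the Dev Workspace Operator, and the related resources.

```shell
# Sketch: complete removal of OpenShift Dev Spaces and related resources.
DSC_ARGS="--delete-namespace --delete-all"
if command -v dsc >/dev/null 2>&1; then
  dsc server:delete $DSC_ARGS
else
  echo "dsc CLI not found; install the CLI management tool first."
fi
```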

Chapter 25. Troubleshooting OpenShift Dev Spaces administration

Diagnose and resolve common OpenShift Dev Spaces administration issues including workspace startup failures, OAuth configuration errors, and Dev Workspace Operator problems.

25.1. Workspace startup failure error messages

Diagnose and resolve common workspace startup failures based on error symptoms and root causes. The OpenShift Dev Spaces dashboard and the Dev Workspace Operator emit error messages that indicate pod scheduling, image pull, DevWorkspace, and resource quota issues.

25.1.1. Pod scheduling errors

Table 25.1. Pod scheduling error messages and resolutions

Error message | Resolution

FailedScheduling: 0/N nodes are available: insufficient cpu or insufficient memory

The cluster does not have enough resources to schedule the workspace Pod. Free resources by stopping idle workspaces, or add nodes to the cluster.

FailedScheduling: 0/N nodes are available: pod has unbound immediate PersistentVolumeClaims

A PersistentVolumeClaim (PVC) cannot be bound. Verify that a StorageClass is configured and that the cluster has available persistent volumes.

node(s) didn’t match Pod’s node affinity/selector

The workspace Pod has a nodeSelector or node affinity that does not match any available node. Verify the nodeSelector configuration in the CheCluster Custom Resource.
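To see which of the scheduler messages above applies, inspect the events and Pod details in the user's project. A sketch; the namespace name is an example, and the Pod label selector is assumed from Dev Workspace Operator conventions:

```shell
NS="user1-devspaces"  # hypothetical user project; substitute the real one
if command -v oc >/dev/null 2>&1; then
  # Show recent events, newest last, to find FailedScheduling details.
  oc get events -n "$NS" --sort-by=.lastTimestamp
  # Pod-level detail, including node affinity and PVC binding status.
  # The label key is an assumption; adjust if your pods are labeled differently.
  oc describe pod -n "$NS" -l controller.devfile.io/devworkspace_name
else
  echo "oc CLI not found; run this against the cluster."
fi
```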

25.1.2. Image pull errors

Table 25.2. Image pull error messages and resolutions

Error message | Resolution

ErrImagePull or ImagePullBackOff

The container runtime cannot pull the workspace image. Verify that the image exists, the image name is correct in the devfile, and that image pull secrets are configured if the image is in a private registry.

x509: certificate signed by unknown authority

The container runtime does not trust the TLS certificate of the container registry. Import the registry Certificate Authority (CA) certificate into OpenShift Dev Spaces.
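For images in a private registry, a pull secret in the user's project often resolves the first error. A sketch; the namespace, secret name, registry host, and environment variables are all examples:

```shell
NS="user1-devspaces"           # hypothetical user project
SECRET="private-registry-pull" # example secret name
if command -v oc >/dev/null 2>&1; then
  # Create a pull secret for the private registry (credentials via env vars).
  oc create secret docker-registry "$SECRET" \
    --docker-server=registry.example.com \
    --docker-username="$REGISTRY_USER" \
    --docker-password="$REGISTRY_PASSWORD" \
    -n "$NS"
  # Let the project's default service account use it for image pulls.
  oc secrets link default "$SECRET" --for=pull -n "$NS"
else
  echo "oc CLI not found; run this against the cluster."
fi
```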

25.1.3. DevWorkspace errors

Table 25.3. DevWorkspace error messages and resolutions

Error message | Resolution

DevWorkspace failed to start: timed out waiting for DevWorkspace to be ready

The workspace did not reach the Running phase within the configured timeout. Increase startTimeoutSeconds in the CheCluster Custom Resource or investigate Pod events for resource or scheduling issues.

Failed to create DevWorkspace: admission webhook denied the request

The Dev Workspace Operator webhook rejected the DevWorkspace. Verify that the Dev Workspace Operator is running and that CRDs are up to date.

BadRequest or InfrastructureFailure

An infrastructure-level error prevented workspace creation. Check the Dev Workspace Operator logs for details.
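Checking the Dev Workspace Operator logs, as the last row suggests, can be sketched as follows. The deployment name is assumed from the upstream operator; confirm it with `oc get deploy` in the operator's namespace:

```shell
NS="openshift-operators"  # or the project where the Dev Workspace Operator is installed
if command -v oc >/dev/null 2>&1; then
  oc logs deployment/devworkspace-controller-manager -n "$NS" --tail=100
else
  echo "oc CLI not found; run this against the cluster."
fi
```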

25.1.4. Resource quota errors

Table 25.4. Resource quota error messages and resolutions

Error message | Resolution

exceeded quota or forbidden: exceeded quota

The user namespace has a ResourceQuota that prevents creating the workspace Pod or PVC. Increase the quota or reduce the workspace resource requests in the devfile.

OOMKilled

The workspace container exceeded its memory limit and was terminated. Increase the memory limit in the devfile components section or in the CheCluster Custom Resource defaults.
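To confirm which quota is exhausted before raising limits, compare used against hard values in the user's project. A sketch; the namespace name is an example:

```shell
NS="user1-devspaces"  # hypothetical user project
if command -v oc >/dev/null 2>&1; then
  # Compare "Used" vs. "Hard" to see which resource blocks the workspace.
  oc describe resourcequota -n "$NS"
  # OOMKilled containers surface in pod status and in `oc describe pod`.
  oc get pods -n "$NS"
else
  echo "oc CLI not found; run this against the cluster."
fi
```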

25.2. Troubleshooting OAuth configuration

Diagnose and resolve common OAuth configuration issues that prevent Git provider authentication from workspaces. Errors include incorrect credentials, mismatched callback URLs, missing Secrets, and expired tokens.

25.2.1. OAuth application errors

Table 25.5. OAuth application error symptoms and resolutions

Symptom | Resolution

Users see 401 Unauthorized when cloning a private repository.

The OAuth application credentials are incorrect. Verify the client ID and client secret in the OpenShift Secret. Recreate the Secret if needed.

Users see The redirect URI provided is missing or does not match on the Git provider.

The OAuth callback URL configured in the Git provider application does not match the OpenShift Dev Spaces callback URL. The callback URL must be https://<devspaces_fqdn>/api/oauth/callback.

Users are not prompted to authorize the OAuth application.

The OAuth OpenShift Secret is not in the openshift-devspaces namespace, or the Secret labels are incorrect. Verify the Secret exists with the required labels.

OAuth works for some users but not others.

The Git provider OAuth application restricts access to specific organizations or groups. Expand the application permissions to include all required organizations.
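Verifying the Secret and its labels, as the third row advises, can be sketched as follows. The label selector is assumed from upstream Eclipse Che conventions; verify it against the OAuth setup documentation for your Git provider:

```shell
NS="openshift-devspaces"
if command -v oc >/dev/null 2>&1; then
  # List OAuth Secrets with their labels. The component label value below
  # is an assumption; confirm it in the OAuth configuration docs.
  oc get secret -n "$NS" \
    -l app.kubernetes.io/component=oauth-scm-configuration --show-labels
else
  echo "oc CLI not found; run this against the cluster."
fi
```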

25.2.2. Token refresh errors

Table 25.6. Token refresh error symptoms and resolutions

Symptom | Resolution

Users see token expired errors after a period of inactivity.

The OAuth token has expired and cannot be refreshed. The user must revoke the token on the Git provider and re-authorize.

Git push fails with 403 Forbidden despite successful initial authentication.

The OAuth token scope is insufficient for push operations. Verify that the OAuth application requests the repo scope (GitHub), api scope (GitLab), or equivalent write permissions.

Revised on 2026-04-08 16:30:09 UTC

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution-Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.