Managing resources

Red Hat OpenShift AI Self-Managed 2.16

Manage administration tasks from the OpenShift AI dashboard

Abstract

As an OpenShift AI administrator, manage custom workbench images, cluster PVC size, user groups, and Jupyter notebook servers.

Preface

As an OpenShift AI administrator, you can manage the following resources:

  • Cluster PVC size
  • Cluster storage classes
  • OpenShift AI admin and user groups
  • Custom workbench images
  • Jupyter notebook servers

You can also specify whether to allow Red Hat to collect data about OpenShift AI usage in your cluster.

Chapter 1. Selecting OpenShift AI administrator and user groups

By default, all users authenticated in OpenShift can access OpenShift AI.

Also by default, users with cluster-admin permissions are OpenShift AI administrators. A cluster admin is a superuser who can perform any action in any project in the OpenShift cluster. When bound to a user with a local binding, a cluster admin has full control over quota and every action on every resource in the project.

After a cluster admin user defines additional administrator and user groups in OpenShift, you can add those groups to OpenShift AI by selecting them in the OpenShift AI dashboard.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
  • The groups that you want to select as administrator and user groups for OpenShift AI already exist in OpenShift. For more information, see Managing users and groups.
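If the groups do not yet exist, a cluster administrator can create them in OpenShift before you select them in the dashboard. The following manifests are a minimal sketch; the group names (rhoai-admins, rhoai-users) and user names are placeholders, not names that OpenShift AI requires:

```yaml
# Hypothetical OpenShift Group manifests. Apply with:
#   oc apply -f groups.yaml
apiVersion: user.openshift.io/v1
kind: Group
metadata:
  name: rhoai-admins      # placeholder administrator group name
users:
  - alice                 # placeholder user name
---
apiVersion: user.openshift.io/v1
kind: Group
metadata:
  name: rhoai-users       # placeholder user group name
users:
  - bob
  - carol
```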

Procedure

  1. From the OpenShift AI dashboard, click Settings → User management.
  2. Select your OpenShift AI administrator groups: Under Data science administrator groups, click the text box and select an OpenShift group. Repeat this process to define multiple administrator groups.
  3. Select your OpenShift AI user groups: Under Data science user groups, click the text box and select an OpenShift group. Repeat this process to define multiple user groups.

    Important

    The system:authenticated setting allows all users authenticated in OpenShift to access OpenShift AI.

  4. Click Save changes.

Verification

  • Administrator users can successfully log in to OpenShift AI and have access to the Settings navigation menu.
  • Non-administrator users can successfully log in to OpenShift AI. They can also access and use individual components, such as projects and workbenches.

Chapter 2. Importing a custom workbench image

In addition to workbench images provided and supported by Red Hat and independent software vendors (ISVs), you can import custom workbench images that cater to your project’s specific requirements.

You must import a custom workbench image so that your OpenShift AI users (data scientists) can access it when they create a project workbench.

Red Hat supports adding custom workbench images to your deployment of OpenShift AI, ensuring that they are available for selection when creating a workbench. However, Red Hat does not support the contents of your custom workbench image. That is, if your custom workbench image is available for selection during workbench creation, but does not create a usable workbench, Red Hat does not provide support to fix your custom workbench image.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
  • Your custom image exists in an image registry that is accessible to OpenShift AI.
  • The Settings → Notebook images dashboard navigation menu option is enabled, as described in Enabling custom workbench images in OpenShift AI.
  • If you want to associate an accelerator with the custom image that you want to import, you know the accelerator’s identifier, the unique string that identifies the hardware accelerator.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Notebook images.

    The Notebook images page appears. Previously imported images are displayed. To enable or disable a previously imported image, on the row containing the relevant image, click the toggle in the Enable column.

  2. Optional: If you want to associate an accelerator and you have not already created an accelerator profile, click Create profile on the row containing the image and complete the relevant fields. If the image does not contain an accelerator identifier, you must manually configure one before creating an associated accelerator profile.
  3. Click Import new image. Alternatively, if no previously imported images were found, click Import image.

    The Import Notebook images dialog appears.

  4. In the Image location field, enter the URL of the repository containing the image. For example: quay.io/my-repo/my-image:tag, quay.io/my-repo/my-image@sha256:xxxxxxxxxxxxx, or docker.io/my-repo/my-image:tag.
  5. In the Name field, enter an appropriate name for the image.
  6. Optional: In the Description field, enter a description for the image.
  7. Optional: From the Accelerator identifier list, select an identifier to set its accelerator as recommended with the image. If the image contains only one accelerator identifier, the identifier name displays by default.
  8. Optional: Add software to the image. After the import has completed, the software is added to the image’s metadata and displayed on the Jupyter server creation page.

    1. Click the Software tab.
    2. Click the Add software button.
    3. Click Edit (the edit icon).
    4. Enter the Software name.
    5. Enter the software Version.
    6. Click Confirm (the confirm icon) to confirm your entry.
    7. To add additional software, click Add software, complete the relevant fields, and confirm your entry.
  9. Optional: Add packages to the notebook images. After the import has completed, the packages are added to the image’s metadata and displayed on the Jupyter server creation page.

    1. Click the Packages tab.
    2. Click the Add package button.
    3. Click Edit (the edit icon).
    4. Enter the Package name. For example, if you want to include data science pipeline V2 automatically, as a runtime configuration, type odh-elyra.
    5. Enter the package Version. For example, type 3.16.7.
    6. Click Confirm (the confirm icon) to confirm your entry.
    7. To add an additional package, click Add package, complete the relevant fields, and confirm your entry.
  10. Click Import.

Verification

  • The image that you imported is displayed in the table on the Notebook images page.
  • Your custom image is available for selection when a user creates a workbench.
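
The image location formats accepted in step 4 of the procedure (tag-based and digest-based references) can be distinguished mechanically. The following helper is an illustrative sketch only; it is not part of OpenShift AI and simplifies edge cases such as registry host names that include a port:

```python
def parse_image_reference(ref: str) -> dict:
    """Split a container image reference into registry, repository,
    and tag or digest components (simplified sketch)."""
    digest = tag = None
    if "@" in ref:
        # Digest form: registry/repo@sha256:...
        name, _, digest = ref.partition("@")
    elif ":" in ref.rsplit("/", 1)[-1]:
        # Tag form: the colon appears in the last path segment
        name, _, tag = ref.rpartition(":")
    else:
        name = ref
    registry, _, repository = name.partition("/")
    return {"registry": registry, "repository": repository,
            "tag": tag, "digest": digest}
```

For example, `parse_image_reference("quay.io/my-repo/my-image:tag")` yields the registry `quay.io`, the repository `my-repo/my-image`, and the tag `tag`.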

Chapter 3. Managing cluster PVC size

3.1. Configuring the default PVC size for your cluster

To configure how resources are claimed within your OpenShift AI cluster, you can change the default size of the cluster’s persistent volume claim (PVC), ensuring that the storage requested matches your common storage workflow. PVCs are requests for resources in your cluster and also act as claim checks to the resource.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.

Note

Changing the PVC setting restarts the Jupyter pod and makes Jupyter unavailable for up to 30 seconds. For this reason, it is recommended that you perform this action outside of your organization’s typical working day.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Cluster settings.
  2. Under PVC size, enter a new size in gibibytes or mebibytes.
  3. Click Save changes.

Verification

  • New PVCs are created with the default storage size that you configured.
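
For reference, a PVC created with the default size requests storage as in the following sketch; the claim name is a placeholder, and the storage class defaults to the one configured for your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-workbench-storage   # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi            # the configured default PVC size
```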


3.2. Restoring the default PVC size for your cluster

To change the size of resources utilized within your OpenShift AI cluster, you can restore the default size of your cluster’s persistent volume claim (PVC).

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Cluster settings.
  2. Click Restore Default to restore the default PVC size of 20 GiB.
  3. Click Save changes.

Verification

  • New PVCs are created with the default storage size of 20 GiB.


Chapter 4. Managing connection types

In Red Hat OpenShift AI, a connection comprises environment variables along with their respective values. Data scientists can add connections to project resources, such as workbenches and model servers.

When a data scientist creates a connection, they start by selecting a connection type. Connection types are templates that include customizable fields and optional default values. Starting with a connection type decreases the time required by a user to add connections to data sources and sinks. OpenShift AI includes pre-installed connection types for S3-compatible object storage and URI-based repositories.
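
Because a connection is a set of environment variables and their values, it can be pictured as a list of key/value pairs. The following is a hypothetical illustration for an S3-compatible object storage connection; the variable names follow common S3 conventions and the values are placeholders, not an authoritative list of what OpenShift AI stores:

```yaml
# Illustrative environment variables for an S3-compatible
# object storage connection; all values are placeholders.
AWS_ACCESS_KEY_ID: <access-key>
AWS_SECRET_ACCESS_KEY: <secret-key>
AWS_S3_ENDPOINT: https://s3.us-west-2.amazonaws.com
AWS_S3_BUCKET: my-bucket
```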

As an OpenShift AI administrator, you can manage connection types for users in your organization as follows:

  • View connection types and preview user connection forms
  • Create a connection type
  • Duplicate an existing connection type
  • Edit a connection type
  • Delete a custom connection type
  • Enable or disable a connection type in a project, to control whether it is available as an option to users when they create a connection

4.1. Viewing connection types

As an OpenShift AI administrator, you can view the connection types that are available in a project.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Connection types.

    The Connection types page appears, displaying the available connection types for the current project.

  2. Optionally, you can click the Options menu (⋮) and then click Preview to see how the connection form associated with the connection type appears to your users.

4.2. Creating a connection type

As an OpenShift AI administrator, you can create a connection type for users in your organization.

You can create a new connection type as described in this procedure or you can create a copy of an existing connection type and edit it, as described in Duplicating a connection type.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
  • You know the environment variables that are required or optional for the connection type that you want to create.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Connection types.

    The Connection types page appears, displaying the available connection types.

  2. Click Create connection type.
  3. In the Create connection type form, enter the following information:

    1. Enter a name for the connection type.

      A resource name is generated based on the name of the connection type. A resource name is the label for the underlying resource in OpenShift.

    2. Optionally, edit the default resource name. Note that you cannot change the resource name after you create the connection type.
    3. Optionally, provide a description of the connection type.
    4. Specify at least one category label. By default, the category labels are database, model registry, object storage, and URI. Optionally, you can create a new category by typing the new category label in the field. You can specify more than one category.

      The category label is for descriptive purposes only. It allows you and the users in your organization to sort the available connection types when viewing them in the OpenShift AI dashboard interface.

    5. Select the Enable users in your organization to use this connection type when adding connections option if you want the connection type to appear in the list of connections available to users, for example, when they configure a workbench, a model server, or a pipeline.

      Note that you can also enable or disable the connection type after you create it.

    6. For the Fields section, add the fields and section headings that you want your users to see in the form when they add a connection to a project resource (such as a workbench or a model server).

      Note that the connection name and description fields are included by default, so you do not need to add them.

      • Optionally, select a model serving compatible type to automatically add the fields required to use its corresponding model serving method.
      • Click Add field to add a field to prompt users to input information, and optionally assign default values to those fields.
      • Click Add section heading to organize the fields under headings.
  4. Click Preview to open a preview of the connection form as it will appear to your users.
  5. Click Save.

Verification

  1. On the Settings → Connection types page, the new connection type appears in the list.

4.3. Duplicating a connection type

As an OpenShift AI administrator, you can create a new connection type by duplicating an existing one, as described in this procedure, or you can create a new connection type as described in Creating a connection type.

Duplicating a connection type is also useful if you want to create versions of a specific connection type.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Connection types.
  2. From the list of available connection types, find the connection type that you want to duplicate.

    Optionally, you can click the Options menu (⋮) and then click Preview to see how the related connection form appears to your users.

  3. Click the Options menu (⋮), and then click Duplicate.

    The Create connection type form appears populated with the information from the connection type that you duplicated.

  4. Edit the form according to your use case.
  5. Click Preview to open a preview of the connection form as it will appear to your users and verify that the form appears as you expect.
  6. Click Save.

Verification

On the Settings → Connection types page, the duplicated connection type appears in the list.

4.4. Editing a connection type

As an OpenShift AI administrator, you can edit a connection type for users in your organization.

Note that you cannot edit the connection types that are pre-installed with OpenShift AI. Instead, you have the option of duplicating a pre-installed connection type, as described in Duplicating a connection type.

When you edit a connection type, your edits do not apply to any existing connections that users previously created. If you want to keep track of previous versions of this connection type, consider duplicating it instead of editing it.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
  • The connection type must exist and must not be a pre-installed connection type (which you are unable to edit).

Procedure

  1. From the OpenShift AI dashboard, click Settings → Connection types.
  2. From the list of available connection types, find the connection type that you want to edit.
  3. Click the Options menu (⋮), and then click Edit.

    The Edit connection type form appears.

  4. Edit the form fields and sections.
  5. Click Preview to open a preview of the connection form as it will appear to your users and verify that the form appears as you expect.
  6. Click Save.

Verification

On the Settings → Connection types page, the edited connection type appears in the list.

4.5. Enabling a connection type

As an OpenShift AI administrator, you can enable or disable a connection type to control whether it is available as an option to your users when they create a connection.

Note that if you disable a connection type, any existing connections that your users created based on that connection type are not affected.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
  • The connection type that you want to enable exists in your project, either pre-installed or created by a user with administrator privileges.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Connection types.
  2. From the list of available connection types, find the connection type that you want to enable or disable.
  3. On the row containing the connection type, click the toggle in the Enable column.

Verification

  • If you enabled a connection type, it is available for selection when a user adds a connection to a project resource (for example, a workbench or model server).
  • If you disabled a connection type, it does not appear in the list of available connection types when a user adds a connection to a project resource.

4.6. Deleting a connection type

As an OpenShift AI administrator, you can delete a connection type that you or another administrator created.

Note that you cannot delete the connection types that are pre-installed with OpenShift AI. Instead, you have the option of disabling them so that they are not visible to your users, as described in Enabling a connection type.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
  • The connection type must exist and must not be a pre-installed connection type (which you are unable to delete).

Procedure

  1. From the OpenShift AI dashboard, click Settings → Connection types.
  2. From the list of available connection types, find the connection type that you want to delete.

    Optionally, you can click the Options menu (⋮) and then click Preview to see how the related connection form appears to your users.

  3. Click the Options menu (⋮), and then click Delete.
  4. In the Delete connection type? form, type the name of the connection type that you want to delete and then click Delete.

Verification

On the Settings → Connection types page, the connection type no longer appears in the list.

Chapter 5. Managing storage classes

OpenShift cluster administrators use storage classes to describe the different types of storage that are available in their cluster. These storage types can represent different quality-of-service levels, backup policies, or other custom policies set by cluster administrators.
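
In OpenShift, each storage class is defined by a StorageClass resource. The following is a minimal sketch; the class name, provisioner, and parameters are illustrative and vary by cluster and storage backend:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                  # illustrative class name
provisioner: ebs.csi.aws.com      # example CSI provisioner; cluster-specific
parameters:
  type: gp3                       # provisioner-specific quality-of-service setting
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```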

5.1. Configuring storage class settings

As an OpenShift AI administrator, you can manage OpenShift cluster storage class settings for usage within OpenShift AI, including the display name, description, and whether users can use the storage class when creating or editing cluster storage. These settings do not impact the storage class within OpenShift.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Storage classes.

    The Storage classes page appears, displaying the storage classes for your cluster as defined in OpenShift.

  2. To enable or disable a storage class for users, on the row containing the storage class, click the toggle in the Enable column.
  3. To edit a storage class, on the row containing the storage class, click the action menu (⋮) and then select Edit.

    The Edit storage class details dialog opens.

  4. Optional: In the Display Name field, update the name for the storage class. This name is used only in OpenShift AI and does not impact the storage class within OpenShift.
  5. Optional: In the Description field, update the description for the storage class. This description is used only in OpenShift AI and does not impact the storage class within OpenShift.
  6. Click Save.

Verification

  • If you enabled a storage class, the storage class is available for selection when a user adds cluster storage to a data science project or workbench.
  • If you disabled a storage class, the storage class is not available for selection when a user adds cluster storage to a data science project or workbench.
  • If you edited a storage class name, the updated storage class name is displayed when a user adds cluster storage to a data science project or workbench.


5.2. Configuring the default storage class for your cluster

As an OpenShift AI administrator, you can configure the default storage class for OpenShift AI to be different from the default storage class in OpenShift.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Storage classes.

    The Storage classes page appears, displaying the storage classes for your cluster as defined in OpenShift.

  2. If the storage class that you want to set as the default is not enabled, on the row containing the storage class, click the toggle in the Enable column.
  3. To set a storage class as the default for OpenShift AI, on the row containing the storage class, select Set as default.

Verification

  • When a user adds cluster storage to a data science project or workbench, the default storage class that you configured is automatically selected.


5.3. Overview of object storage endpoints

To ensure correct configuration of object storage in OpenShift AI, you must format endpoints correctly for each supported type of object storage. These instructions describe how to format endpoints for Amazon S3, MinIO, or other S3-compatible storage solutions, minimizing configuration errors and ensuring compatibility.

Important

Properly formatted endpoints enable connectivity and reduce the risk of misconfigurations. Use the appropriate endpoint format for your object storage type. Improper formatting might cause connection errors or restrict access to storage resources.

5.3.1. MinIO (On-Cluster)

For on-cluster MinIO instances, use a local endpoint URL format. Ensure the following when configuring MinIO endpoints:

  • Prefix the endpoint with http:// or https:// depending on your MinIO security setup.
  • Include the cluster IP or hostname, followed by the port number if specified.
  • Use a port number if your MinIO instance requires one (default is typically 9000).

Example:

http://minio-cluster.local:9000

Note

Verify that the MinIO instance is accessible within the cluster by checking your cluster DNS settings and network configurations.

5.3.2. Amazon S3

When configuring endpoints for Amazon S3, use region-specific URLs. Amazon S3 endpoints generally follow this format:

  • Prefix the endpoint with https://.
  • Format as <bucket-name>.s3.<region>.amazonaws.com, where <bucket-name> is the name of your S3 bucket, and <region> is the AWS region code (for example, us-west-1, eu-central-1).

Example:

https://my-bucket.s3.us-west-2.amazonaws.com

Note

For improved security and compliance, ensure that your Amazon S3 bucket is in the correct region.
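
The format above can be assembled programmatically. The following helper is an illustrative sketch, not part of OpenShift AI; it builds the virtual-hosted-style endpoint described above from a bucket name and region code:

```python
def s3_endpoint(bucket: str, region: str) -> str:
    """Build a virtual-hosted-style Amazon S3 endpoint URL of the
    form https://<bucket-name>.s3.<region>.amazonaws.com."""
    return f"https://{bucket}.s3.{region}.amazonaws.com"
```

For example, `s3_endpoint("my-bucket", "us-west-2")` returns `https://my-bucket.s3.us-west-2.amazonaws.com`, matching the example endpoint shown above.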

5.3.3. Other S3-Compatible Object Stores

For S3-compatible storage solutions other than Amazon S3, follow the specific endpoint format required by your provider. Generally, these endpoints include the following items:

  • The provider base URL, prefixed with https://.
  • The bucket name and region parameters as specified by the provider.
  • Review the documentation from your S3-compatible provider to confirm required endpoint formats.
  • Replace placeholder values like <bucket-name> and <region> with your specific configuration details.

Warning

Incorrectly formatted endpoints for S3-compatible providers might lead to access denial. Always verify the format in your storage provider documentation to ensure compatibility.

5.3.4. Verification and Troubleshooting

After configuring endpoints, verify connectivity by performing a test upload or accessing the object storage directly through the OpenShift AI dashboard. For troubleshooting, check the following items:

  • Network Accessibility: Confirm that the endpoint is reachable from your OpenShift AI cluster.
  • Authentication: Ensure correct access credentials for each storage type.
  • Endpoint Accuracy: Double-check the endpoint URL format for any typos or missing components.


Chapter 6. Managing Jupyter notebook servers

6.1. Accessing the Jupyter administration interface

You can use the Jupyter administration interface to control notebook servers in your Red Hat OpenShift AI environment.

Prerequisite

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.

Procedure

  • To access the Jupyter administration interface from OpenShift AI, perform the following actions:

    1. In OpenShift AI, in the Applications section of the left menu, click Enabled.
    2. Locate the Jupyter tile and click Launch application.
    3. On the page that opens when you launch Jupyter, click the Administration tab.

      The Administration page opens.

  • To access the Jupyter administration interface from JupyterLab, perform the following actions:

    1. Click File → Hub Control Panel.
    2. On the page that opens in OpenShift AI, click the Administration tab.

      The Administration page opens.

Verification

  • You can see the Jupyter administration interface.

6.2. Starting notebook servers owned by other users

OpenShift AI administrators can start a notebook server for another existing user from the Jupyter administration interface.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
  • You have launched the Jupyter application, as described in Starting a Jupyter notebook server.

Procedure

  1. On the page that opens when you launch Jupyter, click the Administration tab.
  2. On the Administration tab, perform the following actions:

    1. In the Users section, locate the user whose notebook server you want to start.
    2. Click Start server beside the relevant user.
    3. Complete the Start a notebook server page.
    4. Optional: Select the Start server in current tab checkbox if necessary.
    5. Click Start server.

      After the server starts, you see one of the following behaviors:

      • If you previously selected the Start server in current tab checkbox, the JupyterLab interface opens in the current tab of your web browser.
      • If you did not previously select the Start server in current tab checkbox, the Starting server dialog box prompts you to open the server in a new browser tab or in the current tab.

        The JupyterLab interface opens according to your selection.

Verification

  • The JupyterLab interface opens.

6.3. Accessing notebook servers owned by other users

OpenShift AI administrators can access notebook servers that are owned by other users to correct configuration errors or to help them troubleshoot problems with their environment.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
  • You have launched the Jupyter application, as described in Starting a Jupyter notebook server.
  • The notebook server that you want to access is running.

Procedure

  1. On the page that opens when you launch Jupyter, click the Administration tab.
  2. On the Administration page, perform the following actions:

    1. In the Users section, locate the user that the notebook server belongs to.
    2. Click View server beside the relevant user.
    3. On the Notebook server control panel page, click Access notebook server.

Verification

  • The user’s notebook server opens in JupyterLab.

6.4. Stopping notebook servers owned by other users

OpenShift AI administrators can stop notebook servers that are owned by other users to reduce resource consumption on the cluster, or as part of removing a user and their resources from the cluster.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
  • You have launched the Jupyter application, as described in Starting a Jupyter notebook server.
  • The notebook server that you want to stop is running.

Procedure

  1. On the page that opens when you launch Jupyter, click the Administration tab.
  2. Stop one or more servers.

    • If you want to stop one or more specific servers, perform the following actions:

      1. In the Users section, locate the user that the notebook server belongs to.
      2. To stop the notebook server, perform one of the following actions:

        • Click the action menu (⋮) beside the relevant user and select Stop server.
        • Click View server beside the relevant user and then click Stop notebook server.

          The Stop server dialog box appears.

      3. Click Stop server.
    • If you want to stop all servers, perform the following actions:

      1. Click the Stop all servers button.
      2. Click OK to confirm stopping all servers.

Verification

  • The Stop server link beside each server changes to a Start server link when the notebook server has stopped.

6.5. Stopping idle notebooks

You can reduce resource usage in your OpenShift AI deployment by stopping notebook servers that have been idle (without logged in users) for a period of time. This is useful when resource demand in the cluster is high. By default, idle notebooks are not stopped after a specific time limit.

Note

If you have configured your cluster settings to disconnect all users from a cluster after a specified time limit, then this setting takes precedence over the idle notebook time limit. Users are logged out of the cluster when their session duration reaches the cluster-wide time limit.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Cluster settings.
  2. Under Stop idle notebooks, select Stop idle notebooks after.
  3. Enter a time limit, in hours and minutes, for when idle notebooks are stopped.
  4. Click Save changes.

Verification

  • The notebook-controller-culler-config ConfigMap, located in the redhat-ods-applications project on the Workloads → ConfigMaps page, contains the following culling configuration settings:

    • ENABLE_CULLING: Specifies if the culling feature is enabled or disabled (this is false by default).
    • IDLENESS_CHECK_PERIOD: The polling frequency to check for a notebook’s last known activity (in minutes).
    • CULL_IDLE_TIME: The maximum allotted time to scale an inactive notebook to zero (in minutes).
  • Idle notebooks stop at the time limit that you set.
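
For example, after enabling a 60-minute idle limit with a one-minute polling interval, the ConfigMap data might look like the following sketch; the values shown are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: notebook-controller-culler-config
  namespace: redhat-ods-applications
data:
  ENABLE_CULLING: "true"       # culling enabled (false by default)
  IDLENESS_CHECK_PERIOD: "1"   # check for last known activity every minute
  CULL_IDLE_TIME: "60"         # stop notebooks that have been idle for 60 minutes
```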

6.6. Adding notebook pod tolerations

If you want to dedicate certain machine pools to running only notebook pods, you can allow notebook pods to be scheduled on specific nodes by adding a toleration. Taints and tolerations allow a node to control which pods should (or should not) be scheduled on it. For more information, see Understanding taints and tolerations.

This capability is useful if you want to make sure that notebook servers are placed on nodes that can handle their needs. By preventing other workloads from running on these specific nodes, you can ensure that the necessary resources are available to users who need to work with large notebook sizes.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
  • You are familiar with OpenShift taints and tolerations, as described in Understanding taints and tolerations.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Cluster settings.
  2. Under Notebook pod tolerations, select Add a toleration to notebook pods to allow them to be scheduled to tainted nodes.
  3. In the Toleration key for notebook pods field, enter a toleration key. The key is any string, up to 253 characters. The key must begin with a letter or number, and can contain letters, numbers, hyphens, dots, and underscores. For example, notebooks-only.
  4. Click Save changes. The toleration key is applied to new notebook pods when they are created.

    For existing notebook pods, the toleration key is applied when the notebook pods are restarted.

If you are using Jupyter, see Updating notebook server settings by restarting your server. If you are using a workbench in a data science project, see Starting a workbench.

Next step

In OpenShift, add a matching taint key (with any value) to the machine pools that you want to dedicate to notebooks. For more information, see Controlling pod placement using node taints and Adding taints to a machine pool.
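As an illustration, if you entered the toleration key notebooks-only in the dashboard, a matching taint on a node might look like the following sketch. The value (reserved) and effect (NoSchedule) are example choices, not requirements: only the key must match the toleration key.

```yaml
# Example taint in a Node spec for a node dedicated to notebook pods.
spec:
  taints:
    - key: notebooks-only    # must match the toleration key set in the dashboard
      value: reserved        # any value works; only the key is matched
      effect: NoSchedule     # pods without the toleration are not scheduled here
```

You can apply an equivalent taint directly to a node with oc adm taint nodes <node-name> notebooks-only=reserved:NoSchedule, where <node-name> is a node in the dedicated machine pool.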

Verification

  1. In the OpenShift console, for a pod that is running, click Workloads → Pods. For a pod that is stopped, click Workloads → StatefulSets.
  2. Search for your workbench pod name and then click the name to open the pod details page.
  3. Confirm that the assigned Node and Tolerations are correct.

6.7. Troubleshooting common problems in Jupyter for administrators

If your users are experiencing errors in Red Hat OpenShift AI related to Jupyter, their notebooks, or their notebook server, read this section to understand what could be causing the problem and how to resolve it.

If the problem is not described here or in the release notes, contact Red Hat Support.

6.7.1. A user receives a 404: Page not found error when logging in to Jupyter

Problem

If you have configured OpenShift AI user groups, the user name might not be added to the default user group for OpenShift AI.

Diagnosis

Check whether the user is part of the default user group.

  1. Find the names of groups allowed access to Jupyter.

    1. Log in to the OpenShift web console.
    2. Click User Management → Groups.
    3. Click the name of your user group, for example, rhoai-users.

      The Group details page for that group appears.

  2. Click the Details tab for the group and confirm that the Users section for the relevant group contains the users who have permission to access Jupyter.
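For reference, an OpenShift group is a simple object that lists its member user names. The group you are checking might look like the following sketch (the group name matches the example above; the user names are hypothetical):

```yaml
# Example OpenShift Group object; users listed here can access Jupyter
# if the group is configured as an OpenShift AI user group.
apiVersion: user.openshift.io/v1
kind: Group
metadata:
  name: rhoai-users
users:
  - user1
  - user2
```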

Resolution

  • If the user is not added to any of the groups with permission to access Jupyter, follow Adding users to OpenShift AI user groups to add them.
  • If the user is already added to a group with permission to access Jupyter, contact Red Hat Support.

6.7.2. A user’s notebook server does not start

Problem

The OpenShift cluster that hosts the user’s notebook server might not have access to enough resources, or the Jupyter pod might have failed.

Diagnosis

  1. Log in to the OpenShift web console.
  2. Check whether the notebook server pod for this user exists.

    1. Click Workloads → Pods and set the Project to rhods-notebooks.
    2. Search for the notebook server pod that belongs to this user, for example, jupyter-nb-<username>-*.

      If the notebook server pod exists, an intermittent failure may have occurred in the notebook server pod.

      If the notebook server pod for the user does not exist, continue with diagnosis.

  3. Check the resources currently available in the OpenShift cluster against the resources required by the selected notebook server image.

    If worker nodes with sufficient CPU and RAM are available for scheduling in the cluster, continue with diagnosis.

  4. Check the state of the Jupyter pod.
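The console-based diagnosis above can also be performed from the command line. The following sketch assumes you have cluster-admin access and that <pod-name> is the user's notebook server pod found in the first step; it uses only standard oc subcommands, run against your cluster.

```shell
# Does the user's notebook server pod exist?
oc get pods -n rhods-notebooks | grep "jupyter-nb-"

# Events in the pod description reveal scheduling and resource issues.
oc describe pod <pod-name> -n rhods-notebooks

# Pod logs show failure details for a pod that started but crashed.
oc logs <pod-name> -n rhods-notebooks

# Compare allocated CPU and memory on worker nodes against the
# resources required by the selected notebook server image.
oc describe nodes | grep -A 5 "Allocated resources"
```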

Resolution

  • If there was an intermittent failure of the notebook server pod:

    1. Delete the notebook server pod that belongs to the user.
    2. Ask the user to start their notebook server again.
  • If the cluster does not have sufficient resources to run the selected notebook server image, either add more resources to the OpenShift cluster or choose a smaller image size.
  • If the Jupyter pod is in a FAILED state:

    1. Retrieve the logs for the jupyter-nb-* pod and send them to Red Hat Support for further evaluation.
    2. Delete the jupyter-nb-* pod.
  • If none of the previous resolutions apply, contact Red Hat Support.

6.7.3. The user receives a database or disk is full error or a no space left on device error when they run notebook cells

Problem

The user might have run out of storage space on their notebook server.

Diagnosis

  1. Log in to Jupyter and start the notebook server that belongs to the user having problems. If the notebook server does not start, follow these steps to check whether the user has run out of storage space:

    1. Log in to the OpenShift web console.
    2. Click Workloads → Pods and set the Project to rhods-notebooks.
    3. Click the notebook server pod that belongs to this user, for example, jupyter-nb-<idp>-<username>-*.
    4. Click Logs. The user has exceeded their available capacity if you see lines similar to the following:

      Unexpected error while saving file: XXXX database or disk is full

Resolution

  • Increase the user’s available storage by expanding their persistent volume. For more information, see Expanding persistent volumes.
  • Work with the user to identify files that can be deleted from the /opt/app-root/src directory on their notebook server to free up their existing storage space.
Note

When you delete files using the JupyterLab file explorer, the files move to the hidden /opt/app-root/src/.local/share/Trash/files folder in the persistent storage for the notebook. To free up storage space for notebooks, you must permanently delete these files.
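From a terminal inside the notebook server, the hidden trash folder can be emptied with a short command sequence. This is a sketch that assumes the default trash location noted above; the TRASH_DIR variable is introduced here only so the path is stated once and guarded.

```shell
# Permanently delete files that the JupyterLab file explorer moved to trash.
# TRASH_DIR defaults to the notebook's trash location noted above.
TRASH_DIR="${TRASH_DIR:-/opt/app-root/src/.local/share/Trash/files}"
du -sh "$TRASH_DIR" 2>/dev/null || true   # show how much space the trash uses, if it exists
rm -rf "${TRASH_DIR:?}"/*                 # ':?' aborts if the variable is empty, guarding against 'rm -rf /*'
```

After emptying the trash, the reclaimed space is available to the notebook immediately; no restart is required.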

Chapter 7. Managing the collection of usage data

Red Hat OpenShift AI administrators can choose whether to allow Red Hat to collect data about OpenShift AI usage in their cluster. Collecting this data allows Red Hat to monitor and improve its software and support. For further details about the data Red Hat collects, see Usage data collection notice for OpenShift AI.

Usage data collection is enabled by default when you install OpenShift AI on your OpenShift cluster except when clusters are installed in a disconnected environment.

See Disabling usage data collection for instructions on disabling the collection of this data in your cluster. If you have disabled data collection on your cluster, and you want to enable it again, see Enabling usage data collection for more information.

7.1. Usage data collection notice for OpenShift AI

In connection with your use of this Red Hat offering, Red Hat may collect usage data about your use of the software. This data allows Red Hat to monitor the software and to improve Red Hat offerings and support, including identifying, troubleshooting, and responding to issues that impact users.

What information does Red Hat collect?

Tools within the software monitor various metrics and this information is transmitted to Red Hat. Metrics include information such as:

  • Information about applications enabled in the product dashboard.
  • The deployment sizes used (that is, the CPU and memory resources allocated).
  • Information about documentation resources accessed from the product dashboard.
  • The names of the notebook images used (for example, Minimal Python or Standard Data Science).
  • A unique random identifier that is generated during the initial user login to associate data with a particular username.
  • Usage information about components, features, and extensions.

Third Party Service Providers

Red Hat uses certain third party service providers to collect the telemetry data.

Security

Red Hat employs technical and organizational measures designed to protect the usage data.

Personal Data

Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such personal information and treat it in accordance with Red Hat’s Privacy Statement. For more information about Red Hat’s privacy practices, see Red Hat’s Privacy Statement.

Enabling and Disabling Usage Data

You can disable or enable usage data collection by following the instructions in Disabling usage data collection or Enabling usage data collection.

7.2. Enabling usage data collection

Red Hat OpenShift AI administrators can select whether to allow Red Hat to collect data about OpenShift AI usage in their cluster. Usage data collection is enabled by default when you install OpenShift AI on your OpenShift cluster except when clusters are installed in a disconnected environment. If you have disabled data collection previously, you can re-enable it by following these steps.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Cluster settings.
  2. Locate the Usage data collection section.
  3. Select the Allow collection of usage data checkbox.
  4. Click Save changes.

Verification

  • A notification is shown when settings are updated: Settings changes saved.

7.3. Disabling usage data collection

Red Hat OpenShift AI administrators can choose whether to allow Red Hat to collect data about OpenShift AI usage in their cluster. Usage data collection is enabled by default when you install OpenShift AI on your OpenShift cluster except when clusters are installed in a disconnected environment.

You can disable data collection by following these steps.

Prerequisites

  • You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.

Procedure

  1. From the OpenShift AI dashboard, click Settings → Cluster settings.
  2. Locate the Usage data collection section.
  3. Clear the Allow collection of usage data checkbox.
  4. Click Save changes.

Verification

  • A notification is shown when settings are updated: Settings changes saved.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.