SAP Data Intelligence on OpenShift 4 with NetApp Trident and StorageGRID


For the general requirements and installation instructions, please refer to the related installation guides.

1. OpenShift Container Platform validation version matrix

The following version combinations of SAP Data Hub (SDH) 2.X or SAP Data Intelligence (SDI) 3.X, OCP and NetApp Trident have been validated:

SAP Product                  OpenShift Container Platform  Infrastructure and (Storage) [Object Storage]
SAP Data Hub 2.7             4.2                           VMware vSphere (NetApp Trident 20.04 (iSCSI LUNs))
SAP Data Intelligence 3.0    4.4                           VMware vSphere (NetApp Trident 20.04 (iSCSI LUNs)) [StorageGRID 11.3]
SAP Data Intelligence 3.1    4.6                           VMware vSphere (NetApp Trident 20.10 (iSCSI LUNs)) [StorageGRID 11.4]
SAP Data Intelligence 3.3    4.8, 4.10                     VMware vSphere (NetApp Trident 22.04 (iSCSI LUNs)) [StorageGRID 11.6]

The referenced OCP release is no longer supported by Red Hat!
Object storage has not been covered.

1.1. Supportability matrix

The following version combinations are supported:

SAP Data Intelligence  OpenShift Container Platform  Infrastructure  Storage (PVs)                                      Object Storage
3.0                    4.4                           VMware vSphere  OCS 4, NetApp Trident 20.04 or newer (iSCSI LUNs)  StorageGRID 11.3 or newer
3.1                    4.6                           VMware vSphere  OCS 4, NetApp Trident 20.10 (iSCSI LUNs)           StorageGRID 11.4 or newer
3.3                    4.8, 4.10                     VMware vSphere  OCS 4, NetApp Trident 22.04 (iSCSI LUNs)           StorageGRID 11.6

To enable SDI's backup & restore functionality on StorageGRID 11.4, a hotfix called "content-length in the header" needs to be requested from NetApp for this release. Please contact NetApp support.

1.2. Supportability note

The validation of NetApp Trident storage is done not by Red Hat but by a NetApp team, which is assisted and supported by Red Hat. Red Hat directly supports neither NetApp Trident software nor NetApp hardware.

If you encounter issues regarding Trident and its integration with NetApp storage solutions, existing NetApp customers can directly open a support case in the NetApp support portal. All architecture and pre-sales related questions should be addressed to NetApp through the corresponding NetApp account manager.

2. Requirements

2.1. Hardware/VM and OS Requirements

For OCP and SDH requirements, please refer to Hardware/VM and OS Requirements.

We will assume there is a management host available.

NetApp requirements:

  • FAS/AFF/Select 9.1 or later
  • HCI/SolidFire Element OS 8 or later
  • E/EF-Series SANtricity

For more information, please consult Supported backends (storage) in the Trident documentation.

2.2. Software Requirements

An OCP cluster must be installed and configured.

Please refer to Prepare the Management host (4.4) / (4.2) for more details.

3. NetApp Trident installation

The installation shall be performed from the management host.

3.1. OCP nodes preparation

  1. iSCSI Initiator IDs must be determined for all the OCP nodes.

     # # no need to modify the command - copy and paste should work
     # oc get nodes -o jsonpath=$'{range .items[*]}{.metadata.name}\n{end}' | xargs -inode ssh core@node sudo cat /etc/iscsi/initiatorname.iscsi
     InitiatorName=iqn.1994-05.com.redhat:71709e90c4f5
     InitiatorName=iqn.1994-05.com.redhat:6f4afcfd5c4
     InitiatorName=iqn.1994-05.com.redhat:a6ea80966d1
     InitiatorName=iqn.1994-05.com.redhat:bdf7a54bbff
     InitiatorName=iqn.1994-05.com.redhat:aec6f2b095c
     InitiatorName=iqn.1994-05.com.redhat:937a83a4596f
    
  2. Create an initiator group composed of the initiators on the NetApp ONTAP system:

     # ssh admin@<ClusterIP>
     grenada::> igroup create -vserver svm-sap01 -igroup trident -protocol iscsi -ostype linux -initiator iqn.1994-05.com.redhat:71709e90c4f5
     grenada::> igroup add trident -vserver svm-sap01 -initiator iqn.1994-05.com.redhat:6f4afcfd5c4 iqn.1994-05.com.redhat:a6ea80966d1 iqn.1994-05.com.redhat:bdf7a54bbff iqn.1994-05.com.redhat:aec6f2b095c iqn.1994-05.com.redhat:937a83a4596f
     grenada::> igroup show -vserver svm-sap01
     Vserver   Igroup       Protocol OS Type  Initiators
     --------- ------------ -------- -------- ------------------------------------
     svm-sap01 trident      iscsi    linux    iqn.1994-05.com.redhat:6f4afcfd5c4
                                              iqn.1994-05.com.redhat:71709e90c4f5
                                              iqn.1994-05.com.redhat:937a83a4596f
                                              iqn.1994-05.com.redhat:a6ea80966d1
                                              iqn.1994-05.com.redhat:aec6f2b095c
                                              iqn.1994-05.com.redhat:bdf7a54bbff
    
  3. Verify that the previous operation succeeded by performing a discovery on the OCP nodes. In this example, 192.168.10.17 is the (logical) interface that offers the iSCSI protocol, and therefore the iSCSI LUNs, on the NetApp system.

     # TARGET=192.168.10.17
     # # no need to modify the command - copy and paste should work
     # oc get nodes -o jsonpath=$'{range .items[*]}{.metadata.name}\n{end}' | xargs -inode ssh core@node sudo iscsiadm -m discovery -t st -p "${TARGET}"
     192.168.10.17:3260,1050 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
     192.168.10.18:3260,1051 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
     192.168.10.17:3260,1050 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
     192.168.10.18:3260,1051 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
     192.168.10.17:3260,1050 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
     192.168.10.18:3260,1051 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
     192.168.10.17:3260,1050 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
     192.168.10.18:3260,1051 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
     192.168.10.17:3260,1050 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
     192.168.10.18:3260,1051 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
     192.168.10.17:3260,1050 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
     192.168.10.18:3260,1051 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
    
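As a plausibility check, the discovery output can be reduced to the set of portals it references; every node should report the same two data LIFs. The following self-contained sketch operates on a captured sample of the output above, so it can be run anywhere:

```shell
# Sample of the discovery records shown above (two portals per node).
sample='192.168.10.17:3260,1050 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10
192.168.10.18:3260,1051 iqn.1992-08.com.netapp:sn.ae6d72f0054211eaa55200a0989df7a0:vs.10'
# Strip the port and target name, keeping only the unique portal addresses.
portals=$(printf '%s\n' "$sample" | cut -d: -f1 | sort -u | xargs)
echo "$portals"
```

Against a live cluster, the same reduction can be applied to the real iscsiadm output to confirm that every node sees both portals.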

3.1.1. Re-configure Nodes with MachineConfig

Please make sure to first complete SDI compute nodes configuration.

In a previous version of this article it was suggested to start the iscsi.service on the nodes as a prerequisite to utilizing the storage. This is no longer necessary. You can now remove the labels and machine configs configured earlier by following the steps below.

  1. Unlabel the nodes:

    1. Make sure that each node has at least one additional role (e.g. worker role) before removing the netapp role:

       # # the following adds the worker role label to all the nodes that do not have any additional role assigned
       # oc get nodes -o json | jq -r '.items[] | .metadata as $md |
           select([$md.labels | keys [] | test("^node-role\\.kubernetes\\.io/(?!netapp$)")] |
               any | not) | "node/\($md.name)"' | xargs -r -i oc label '{}' node-role.kubernetes.io/worker=""
      
    2. Remove the netapp role label:

       # oc label nodes --all node-role.kubernetes.io/netapp-
      
  2. Remove the *-netapp MachineConfigPool:

     # oc get mcp -o name | grep -- '-netapp$' | xargs -r oc delete
    
  3. Remove the netapp MachineConfig 76-enable-iscsi-service created earlier, or any other MachineConfig targeting solely the netapp nodes:

     # oc get mc -o json | jq -r '.items[] | .metadata as $md | select($md.name ==
         "76-enable-iscsi-service" or $md.labels == {
             "machineconfiguration.openshift.io/role": "netapp"}) | "mc/\($md.name)"' | \
             xargs -r oc delete
    
  4. Wait until the nodes get reconfigured:

     # oc wait mcp --all --for=condition=updated
    
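The jq filter in step 1 is worth understanding before it is run: it selects exactly those nodes whose only role label is node-role.kubernetes.io/netapp. The following self-contained sketch (with hypothetical node names, no cluster access required) demonstrates the selection:

```shell
# Two hypothetical nodes: node-a carries only the netapp role label,
# node-b additionally carries the worker role label.
nodes='{"items":[
  {"metadata":{"name":"node-a","labels":{"node-role.kubernetes.io/netapp":""}}},
  {"metadata":{"name":"node-b","labels":{"node-role.kubernetes.io/netapp":"","node-role.kubernetes.io/worker":""}}}]}'
# The same filter as in step 1: keep only nodes that have no role label
# other than netapp - these are the ones that need the worker role added.
matched=$(printf '%s\n' "$nodes" | jq -r '.items[] | .metadata as $md |
    select([$md.labels | keys[] | test("^node-role\\.kubernetes\\.io/(?!netapp$)")] |
        any | not) | "node/\($md.name)"')
echo "$matched"
```

Only node-a is printed; node-b already has another role and is left alone.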

3.1.2. Verify ONTAP Cluster

Verify on the ONTAP Cluster that the OCP nodes are registered:

grenada::> igroup show -vserver svm-sap01 -igroup trident
          Vserver Name: svm-sap01
           Igroup Name: trident
              Protocol: iscsi
               OS Type: linux
Portset Binding Igroup: -
           Igroup UUID: 0577fa8c-9f4a-11ea-a4ea-00a0989e2cde
                  ALUA: true
            Initiators: iqn.1994-05.com.redhat:6f4afcfd5c4 (logged in)
                         iqn.1994-05.com.redhat:71709e90c4f5 (logged in)
                         iqn.1994-05.com.redhat:937a83a4596f (logged in)
                         iqn.1994-05.com.redhat:a6ea80966d1 (logged in)
                         iqn.1994-05.com.redhat:aec6f2b095c (logged in)
                         iqn.1994-05.com.redhat:bdf7a54bbff (logged in)

3.2. Install NetApp Trident

The following is an adaptation of the official installation method. Please consult the official documentation if you run into issues.

  1. Still on the management host, download the Trident installer:

     # curl -O -L https://github.com/NetApp/trident/releases/download/v20.10.1/trident-installer-20.10.1.tar.gz
     # tar -xf trident-installer-20.10.1.tar.gz
     # cd trident-installer
    
  2. Start the installation:

     # ./tridentctl install -n trident
     INFO Starting Trident installation. namespace=trident
     INFO Created namespace. namespace=trident
     INFO Created service account.
     INFO Created cluster role.
     INFO Created cluster role binding.
     INFO Created Trident's security context constraint. scc=trident user=trident-csi
     INFO Created custom resource definitions. namespace=trident
     INFO Created Trident pod security policy.
     INFO Added finalizers to custom resource definitions.
     INFO Created Trident service.
     INFO Created Trident secret.
     INFO Created Trident deployment.
     INFO Created Trident daemonset.
     INFO Waiting for Trident pod to start.
     INFO Trident pod started. namespace=trident pod=trident-csi-656c78477f-x4dmm
     INFO Waiting for Trident REST interface.
     INFO Trident REST interface is up. version=20.10.1
     INFO Trident installation succeeded.
    
  3. Verify the deployment:

     # oc get pods -n trident
     NAME                         READY STATUS  RESTARTS AGE
     trident-csi-26s2z            2/2   Running 0        65s
     trident-csi-5f9xb            2/2   Running 0        65s
     trident-csi-656c78477f-x4dmm 3/3   Running 0        65s
     trident-csi-kz7tz            2/2   Running 0        65s
     trident-csi-qzfg2            2/2   Running 0        65s
     trident-csi-sc5qs            2/2   Running 0        65s
     trident-csi-wgbfg            2/2   Running 0        65s
    
  4. Check the Trident version:

     # ./tridentctl -n trident version
     +----------------+----------------+
     | SERVER VERSION | CLIENT VERSION |
     +----------------+----------------+
     | 20.10.1        | 20.10.1        |
     +----------------+----------------+
    
  5. Create the Trident NAS backend:

     # cat >jsons/backend_nas.json <<EOF
     {
       "version": 1,
       "storageDriverName": "ontap-nas",
       "backendName": "<Backend Name, z.B. svm-sap01-nas>",
       "managementLIF": "<Cluster Management LIF>",
       "dataLIF": "<Data LIF>",
       "svm": "<SVM Name, z.B. svm-sap01>",
       "username": "<Cluster Admin User>",
       "password": "<Cluster Admin User Passwort>"
     }
     EOF
     # ./tridentctl -n trident create backend -f jsons/backend_nas.json
     +---------------+----------------+--------------------------------------+--------+---------+
     | NAME          | STORAGE DRIVER | UUID                                 | STATE  | VOLUMES |
     +---------------+----------------+--------------------------------------+--------+---------+
     | svm-sap01-nas | ontap-nas      | ccdac84f-84fa-47b1-9ca4-2b9798c7554d | online | 0       |
     +---------------+----------------+--------------------------------------+--------+---------+
    
  6. Create the Trident iSCSI backend:

    • Simple (not multipath) version:

        # cat jsons/backend_iscsi.json
        {
          "version": 1,
          "storageDriverName": "ontap-san",
          "backendName": "<Backend Name, z.B. svm-sap01-san-192.168.10.17>",
          "managementLIF": "<Cluster Management LIF>",
          "dataLIF": "<Data LIF>",
          "svm": "<SVM Name, z.B. svm-sap01>",
          "igroupName": "<Name der erzeugten igroup, z.B. trident>",
          "username": "<Cluster Admin User>",
          "password": "<Cluster Admin User Passwort>"
        }
      
    • Multipath version (the dataLIF key is omitted):

        # cat jsons/backend_iscsi.json
        {
          "version": 1,
          "storageDriverName": "ontap-san",
          "backendName": "<Backend Name, z.B. svm-sap01-san>",
          "managementLIF": "<Cluster Management LIF>",
          "svm": "<SVM Name, z.B. svm-sap01>",
          "igroupName": "<Name der erzeugten igroup, z.B. trident>",
          "username": "<Cluster Admin User>",
          "password": "<Cluster Admin User Passwort>"
        }
      

    Then create the iSCSI backend of your choice:

         # ./tridentctl -n trident create backend -f jsons/backend_iscsi.json
          +-----------------------------+----------------+--------------------------------------+--------+---------+
          | NAME                        | STORAGE DRIVER | UUID                                 | STATE  | VOLUMES |
          +-----------------------------+----------------+--------------------------------------+--------+---------+
          | svm-sap01-san-192.168.10.17 | ontap-san      | 3b6d471a-843b-47c0-8e20-8345b9554986 | online | 0       |
          +-----------------------------+----------------+--------------------------------------+--------+---------+
    
  7. Verify successful deployment of the backends:

     # ./tridentctl -n trident get backend
     +-----------------------------+----------------+--------------------------------------+--------+---------+
     | NAME                        | STORAGE DRIVER | UUID                                 | STATE  | VOLUMES |
     +-----------------------------+----------------+--------------------------------------+--------+---------+
     | svm-sap01-nas               | ontap-nas      | ccdac84f-84fa-47b1-9ca4-2b9798c7554d | online | 0       |
     | svm-sap01-san-192.168.10.17 | ontap-san      | 3b6d471a-843b-47c0-8e20-8345b9554986 | online | 0       |
     +-----------------------------+----------------+--------------------------------------+--------+---------+
    
  8. Create NAS storage class:

     # cat >yamls/storage-class-basic_nas.yaml <<EOF
     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
       name: svm-sap01-nas
     provisioner: csi.trident.netapp.io
     parameters:
       backendType: ontap-nas
     EOF
     # oc create -f yamls/storage-class-basic_nas.yaml
     storageclass.storage.k8s.io/svm-sap01-nas created
    
  9. Create the iSCSI Storage Class:

     # cat >yamls/storage-class-basic_iscsi.yaml <<EOF
     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
       name: <Storage Class Name, e.g. svm-sap01-san>
     provisioner: csi.trident.netapp.io
     parameters:
       backendType: ontap-san
       fsType: "<Default Filesystem, e.g. xfs>"
     mountOptions:
     - discard
     EOF
     # oc create -f yamls/storage-class-basic_iscsi.yaml
     storageclass.storage.k8s.io/svm-sap01-san-192.168.10.17 created
    
  10. (optional) Mark one of the new storage classes as the default:

    # # clean existing default flag if any
    # oc annotate sc --all storageclass.kubernetes.io/is-default-class- storageclass.beta.kubernetes.io/is-default-class-
    # oc annotate sc/svm-sap01-san storageclass.kubernetes.io/is-default-class=true
    
  11. Verify that the Storage Classes have been created:

    # oc get sc
    NAME                                   PROVISIONER                   AGE
    svm-sap01-nas                          csi.trident.netapp.io         160m
    svm-sap01-san-192.168.10.17 (default)  csi.trident.netapp.io         17s
    thin                                   kubernetes.io/vsphere-volume  4d21h
    
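Optionally, the fresh storage classes can be smoke-tested before handing the cluster over to the SDI installer. The following sketch generates a minimal PVC against the NAS class created above (names follow the examples in this article; adjust them to your environment):

```shell
# Generate a 1Gi test claim against the svm-sap01-nas storage class.
mkdir -p yamls
cat >yamls/test-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: trident-smoke-test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: svm-sap01-nas
EOF
```

Create it with `oc create -f yamls/test-pvc.yaml`; the claim should reach the Bound state shortly, after which it can be removed again with `oc delete -f yamls/test-pvc.yaml`.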

4. NetApp StorageGRID preparation

Object storage is required for the checkpoint store and for running AI/ML scenarios. NetApp StorageGRID can serve as the object storage backend.

To use StorageGRID as an object store for SAP Data Intelligence, complete the following installation (if StorageGRID is not used as an appliance) and configuration steps:

  1. Use the instructions in Deploying StorageGRID in a Kubernetes Cluster to install StorageGRID in the cluster.
  2. Select Manage Certificates and make sure that the certificates in use are issued by an official trust center: NetApp-StorageGRID/SSL-Certificate-Configuration.
  3. Following the instructions in Configuring tenant accounts and connections, configure a tenant.
  4. Following the instructions in Creating an S3 bucket, log in to the created tenant and create the buckets (such as sdi-checkpoint-store and sdi-data-lake).
  5. Use the instructions in Creating another user's S3 access keys to define an access key; then save the generated access key and the corresponding secret.
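Once the access key has been saved, a quick credential check against the bucket is possible with any S3-capable client. A minimal sketch using the AWS CLI follows; the endpoint value is hypothetical and must be replaced with your StorageGRID gateway, and the check is skipped when no endpoint is configured:

```shell
# Export the saved credentials first:
#   export AWS_ACCESS_KEY_ID=<access key>
#   export AWS_SECRET_ACCESS_KEY=<secret>
: "${SGW_ENDPOINT:=}"   # e.g. https://sgw.example.com:8082 (hypothetical)
if [ -n "$SGW_ENDPOINT" ]; then
    # List the checkpoint-store bucket created in step 4.
    result=$(aws s3 ls --endpoint-url "$SGW_ENDPOINT" s3://sdi-checkpoint-store)
else
    result="SGW_ENDPOINT is not set - nothing to check"
fi
echo "$result"
```

A successful listing (even of an empty bucket) confirms that the endpoint, access key, and secret all work before they are handed to the SDI installer.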

5. Continue with SAP Data Intelligence installation

Proceed where you left off in preparation for SAP Data Intelligence installation, e.g. SDI Observer.
